Category Archives: Artificial Intelligence
Facebook's Jerome Pesenti Explains the Limitations of Artificial Intelligence Research – NullTX
Major developments continue to take place in the artificial intelligence industry. Facebook's Jerome Pesenti thinks the current model of deep learning is reaching its limits, however.
Dozens of companies are in the process of exploring the potential of artificial intelligence.
Virtually all of these companies have scientists and engineers pushing the boundaries of deep learning.
The development of new algorithms has allowed for some intriguing insights and developments over the years.
Unfortunately, it appears that the current strategy involving deep learning may hit a ceiling sooner rather than later.
Those are the findings of Jerome Pesenti, head of artificial intelligence at Facebook.
In a recent interview with Wired, Pesenti acknowledged that deep learning and current artificial intelligence have severe limitations.
Achieving human intelligence, while still an attainable goal, will not happen any time soon.
Thankfully, there is still progress being made to address some limitations.
Taking into account how the artificial intelligence space is still evolving and growing in 2019, many avenues remain unexplored.
One aspect no one can ignore is that the compute power required to research advanced AI doubles roughly every three years.
Pesenti confirms this problem exists, and highlights the need for scaling if any more progress is to be made.
At the same time, he is convinced the rate of progress for advanced artificial intelligence is not sustainable through this model.
Rising costs make it rather unattractive to conduct these levels of experiments today.
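The scaling concern is easy to quantify. A minimal sketch, assuming the article's rough figure of compute doubling every three years (the exact doubling period is the article's claim, not an established constant):

```python
def compute_multiplier(years: float, doubling_period_years: float = 3.0) -> float:
    """Factor by which required compute grows over `years`,
    given a fixed doubling period."""
    return 2.0 ** (years / doubling_period_years)

# Under a three-year doubling period, a decade of research
# requires roughly ten times today's compute.
print(round(compute_multiplier(10), 1))  # ~10.1
```

Exponential cost curves like this are why Pesenti argues the current rate of progress is not sustainable.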
Defining the Scope of an Artificial Intelligence Project – Toolbox
A final consideration in project selection is determining the appropriate size of the project. It must be matched with available resources. Will there be enough time, money, people, or development equipment? What would be the attitude of the domain experts? If favorable, would they be able to devote the effort the project would require of them? How many of the modules are available off the shelf as SaaS or open source? Realistically predicting the availability of required resources is an important aspect of project selection.
An attraction of AI technology is its effectiveness in solving problems that contain uncertainty, ambiguity, or complexity. However, it is still necessary to put some bounds on these factors to have a successful project. If the bounds cannot be determined accurately, particularly for early AI projects, a different application should be considered. The same comment applies to applications where the knowledge base may be incomplete: in such cases, would an application with only a partial solution be useful or acceptable? On the other hand, it is tempting to incorporate too much knowledge in the system. Even though adding knowledge increases the performance of the system, it can introduce redundancies and inefficiencies. Either of these circumstances would substantially increase the scope and cost of the project.
As noted previously, there are many good applications of AI technology which do not have the goal of replacing human experts. Rather, the intent is to assist the experts to do a better job or to improve their work environment. Limiting, at least initially, the extent of assistance to the user enables a more accurate estimate of project size. Another aid in limiting the scope of a system is to prescribe the range of problems it is intended to solve. For example, a diagnostic system could be designed to handle the 20 percent of potential faults that cause 80 percent of the problems.
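The 20-percent-of-faults idea can be made concrete. A minimal sketch (the fault names and frequencies below are hypothetical) that selects the smallest set of high-frequency faults covering a target share of observed problems:

```python
def faults_to_cover(fault_counts: dict[str, int], target: float = 0.8) -> list[str]:
    """Pick the most frequent faults until they account for
    `target` of all observed problem occurrences."""
    total = sum(fault_counts.values())
    chosen, covered = [], 0
    for fault, count in sorted(fault_counts.items(), key=lambda kv: -kv[1]):
        if covered / total >= target:
            break
        chosen.append(fault)
        covered += count
    return chosen

# Hypothetical fault log: two fault types cause most of the problems.
log = {"sensor_drift": 50, "loose_cable": 30, "firmware_bug": 10,
       "corrosion": 5, "misconfig": 5}
print(faults_to_cover(log))  # ['sensor_drift', 'loose_cable']
```

Scoping the diagnostic system to just those faults keeps the knowledge base, and the project size, bounded.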
Aural Analytics Joins Consumer Technology Association Initiative to Set New Standards for Artificial Intelligence in Healthcare – Business Wire
SCOTTSDALE, Ariz.--(BUSINESS WIRE)--Aural Analytics, Inc., a privately held digital health company developing the world's most advanced speech analytics platform, today announced its participation in the Consumer Technology Association (CTA) initiative to develop new standards and best practices for the use of artificial intelligence (AI) in healthcare.
The CTA AI in Healthcare Working Group, which comprises more than 45 organizations, from major tech companies to health care industry leaders, aims to ultimately enhance health outcomes, improve efficiencies and reduce health care costs.
"Aural Analytics is pleased to be working alongside an impressive roster of innovators from across the ecosystem to define standards governing how all modalities, including voice, will be used in healthcare," said Visar Berisha, Ph.D., co-founder and chief analytics officer of Aural Analytics and a member of the working group.
Aural Analytics' proprietary platform tracks and analyzes vocal biomarkers (components of speech) that detect and measure subtle, clinically relevant speech changes in patients with neurological conditions that impact speech and language.
"Advancing the tremendous potential of artificial intelligence within healthcare requires a rigorous approach and a common understanding of the challenges, such as privacy and confidentiality," said Daniel Jones, co-founder and chief executive officer of Aural Analytics. "We support CTA and its strategic approach to setting standards in voice and other important modalities that will have far-reaching impact within the context of healthcare."
"AI has an increasingly significant role in health care today by improving diagnosis, treatment and care," said Rene Quashie, vice president, digital health policy and regulatory affairs, CTA. "Across the sector, we are seeing life-changing tech revolutionize health care, and some great examples of that will be seen at CES 2020. We convened this group of industry experts to address the challenges of using AI in health care and build an informed framework. We're excited to have Aural Analytics participate in the initiative and provide their expertise to this important work."
About Aural Analytics, Inc.
Aural Analytics, Inc. is a privately held digital health company developing the world's most advanced speech analytics platform, built on a foundation of 25 years of speech neuroscience research and data. The Company's platform technology is based on pioneering research from Arizona State University and reinforced by multiple high-caliber peer-reviewed publications. Winner of the 2017 Global SCRIP Award for Best Technology in Clinical Trials, Aural Analytics' first-to-market technology platform powers health applications all over the world. The Company maintains headquarters in Scottsdale, Ariz. For more information, please visit auralanalytics.com or follow Aural Analytics on Twitter, LinkedIn, Medium and Facebook.
Artificial intelligence will affect Utah more than other states, new study says – Deseret News
SALT LAKE CITY -- Utah's economy could be more affected by artificial intelligence than those of other states, according to a new study by the Brookings Institution, a Washington, D.C.-based think tank.
The study predicted that Salt Lake and Ogden-Clearfield would be among the 10 top regions in the United States for workforces impacted by artificial intelligence.
Up to this point, most research surrounding the impact of technology on employment has focused on the effect of automation on blue-collar jobs, like clerical, manufacturing or construction jobs.
But this report predicted that in the future, artificial intelligence will actually have the most profound impact on white-collar fields like law, engineering and science. That includes tech-based economies like Utah's Silicon Slopes.
"Among the most AI-exposed large metro areas are San Jose, California; Seattle; and Salt Lake City and Ogden, Utah -- all high-tech centers," the study states.
What explains this shift in the types of jobs affected by artificial intelligence and how will Utah and other states across the country be affected by it?
Who could be affected?
"Much of the discussion about artificial intelligence's potential impact on the future -- whether optimistic or apocalyptic -- lumps it in with other forms of automation, including robotics and software," the report states.
But the role of artificial intelligence in the future economy should be considered on its own, said Mark Knold, senior and supervising economist for the Utah Department of Workforce Services.
That's because artificial intelligence involves programming computers to perform tasks which, if done by humans, would require intelligence, such as learning, reasoning, problem-solving, prediction or planning, as defined by the report.
This means that artificial intelligence could potentially replace jobs which involve not just what humans can do, but how humans think. That's why artificial intelligence could threaten jobs that are typically considered white collar, because they primarily deal with human reasoning and problem solving, said Knold.
"Artificial intelligence will be a significant factor in the future work lives of relatively well-paid managers, supervisors and analysts," the report states.
But Dan Ventura, professor of computer science at Brigham Young University, said that right now, artificial intelligence's primary strength is in tasks involving pattern recognition, such as facial recognition or medical diagnostics, and even those advancements have been criticized for inaccuracy and racial bias.
"AI is getting really good at pattern recognition and finding patterns in data, better than humans in some cases," said Ventura. "I can say with some level of confidence that the types of jobs that involve that kind of work are potentially vulnerable to being displaced by AI."
But artificial intelligence is nowhere near being able to take on such complex tasks as making judgments and complex decisions, said Ventura. For example, artificial intelligence could detect the presence of a tumor, but it would take a human doctor to decide whether to operate, perhaps after discussions with the patient or the patient's family.
"The kinds of jobs where there's a lot more judgment, subjectivity, human impact -- they aren't even in the ballpark of being able to do something like that right now," said Ventura. "I don't think those kinds of jobs are in any kind of danger in the near future."
The upshot
Both Ventura and Knold say AI shouldn't be viewed only through the lens of fear.
While some industries are likely to be disrupted and some jobs will become obsolete, Ventura predicts, artificial intelligence could also create new jobs.
Those jobs could be complementary to work performed by artificial intelligence, such as quality control, or could involve making decisions about information produced through artificial intelligence. In some professions, artificial intelligence could speed up or take care of the busy work, said Ventura, leaving the human professionals more time and resources to focus on decision-making or qualitative analysis.
Knold added that demographic trends indicate that as the baby boomer generation ages out of the workforce, the younger, less populous generation won't supply enough workers to fill the jobs the older generation has vacated.
"In the future, when you have less human brains around, artificial brains could become more valuable and more profitable," he said.
Artificial intelligence could help companies thrive even with fewer human workers, he said.
"The fear is that AI will replace workers and you'll have higher unemployment," said Knold. "But I think what it will do is help replace missing workers and not displace existing work."
Ventura said that while such predictions are important to help people start thinking about what careers and skills to build for the future, the technology is still rapidly developing, and it's very difficult to know how it might actually affect the workforce.
"It's important to take this kind of analysis with a grain of salt," said Ventura. "Predicting the future is notoriously difficult."
VA launches National Artificial Intelligence Institute to drive research and development – FierceHealthcare
The Department of Veterans Affairs (VA) wants to become a leader in artificial intelligence and launched a new national institute to spur research and development in the space.
The VA's new National Artificial Intelligence Institute (NAII) is incorporating input from veterans and its partners across federal agencies, industry, nonprofits, and academia to prioritize AI R&D to improve veterans' health and public health initiatives, the VA said in a press release.
"VA has a unique opportunity to be a leader in artificial intelligence," VA Secretary Robert Wilkie said in a statement. "VA's artificial intelligence institute will usher in new capabilities and opportunities that will improve health outcomes for our nation's heroes."
For its AI projects, the VA plans to leverage its integrated health care system and the health care data it has amassed through its Million Veteran Program. That program has collected 800,000 veterans' data in a genomic database with the goal of researching how genes, lifestyle and military exposures affect health and illness.
The VA has tapped biomedical informatics leader Gil Alterovitz, Ph.D., to lead the NAII as the institute's director. Alterovitz is a faculty member at Harvard Medical School in the Center for Biomedical Informatics. He is the director of the Biomedical Cybernetics Laboratory and a core faculty member of the Children's Hospital Informatics Program.
According to the VA, Alterovitz has led national and international collaborative initiatives for developing novel informatics methods and approaches for integrating clinical, pharmaceutical, and genomic information, from research to point-of-care.
Alterovitz led the Office of the National Coordinator for Health IT's "Sync for Genes" effort, an initiative to advance the use of standards to enable and improve patients' ability to share their genomics information.
VA has used AI and machine learning technologies to help reduce veterans' wait times and identify those at high risk for suicide. The department's AI initiatives have also helped doctors interpret the results of cancer lab tests and choose effective therapies, according to the VA.
NAII is a joint initiative between the VA's Office of Research and Development and the Secretary's Center for Strategic Partnerships. The program will design, execute and collaborate on large-scale initiatives and national strategy, and build on the American AI Initiative and the National AI R&D Strategic Plan.
Artificial intelligence apps, Parkinson's and me – BBC News
In my work as a journalist I am lucky enough to meet some brilliant people and learn about exciting advances in technology - along with a few duds.
But every now and then I come across something that resonates in a deeply personal way.
So it was in October 2018, when I visited a company called Medopad, based high up in London's Millbank Tower.
This medical technology firm was working with the Chinese tech giant Tencent on a project to use artificial intelligence to diagnose Parkinson's Disease.
This degenerative condition affects something like 10 million people worldwide. It has a whole range of symptoms and is pretty difficult to diagnose and then monitor as it progresses.
Medopad's work involves monitoring patients via a smartphone app and wearable devices. It then uses a machine learning system to spot patterns in the data rather than trying to identify them by human analysis.
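As an illustration of the kind of pattern such a system might look for (this is not Medopad's actual method, just a minimal sketch under stated assumptions): a resting tremor in Parkinson's typically oscillates at a few hertz, and even a crude zero-crossing count over an accelerometer trace gives a rough estimate of the dominant frequency.

```python
import math

def estimate_frequency_hz(samples: list[float], sample_rate_hz: float) -> float:
    """Rough dominant-frequency estimate from a 1-D signal:
    each full oscillation produces two zero crossings."""
    mean = sum(samples) / len(samples)
    centered = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration_s = len(samples) / sample_rate_hz
    return crossings / (2 * duration_s)

# Synthetic 5 Hz "tremor" sampled at 100 Hz for 2 seconds.
rate = 100.0
signal = [math.sin(2 * math.pi * 5 * t / rate) for t in range(200)]
print(round(estimate_frequency_hz(signal, rate)))  # 5
```

A production system would use proper spectral analysis and a trained model over many such features rather than a single heuristic like this.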
In its offices we found one of its staff being filmed as he rapidly opened and closed his fingers - stiffness in these kinds of movements is one of the symptoms of Parkinson's.
As we filmed him being filmed, I stood there wondering whether I should step in front of the camera and try the same exercise.
For some months, I had been dragging my right foot as I walked and experiencing a slight tremor in my right hand.
I had first dismissed this as just part of getting older, but had eventually gone to see my GP.
She had referred me to a consultant neurologist, but at the time of filming I was still waiting for my appointment.
As we left Medopad, I clenched and unclenched my fingers in the lift and reflected on what I had seen. A few days later my coverage of the project appeared on the BBC website.
Three months on, in January this year, I finally met the consultant.
She confirmed what I had long suspected - I was probably suffering from idiopathic Parkinson's Disease. The "idiopathic" means the cause is unknown.
As I got to grips with the condition and started a course of medication, I quickly found out that there are all sorts of unknowns for people with Parkinson's.
Why did I get it? How quickly will the various symptoms develop? What are the hopes of a cure?
There are no reliable answers.
My response has been to take a great interest in how the technology and pharmaceutical industries are investigating the condition.
Developments in artificial intelligence, coupled with the availability of smartphones, are opening up new possibilities, and this week I returned to Medopad to see how far it had progressed.
I asked the firm's chief executive, Dan Vahdat, whether he had noticed anything that suggested I might have a special interest in Parkinson's when I first visited.
"I don't think we noticed anything specifically," he said.
"But - and that's weird for me to tell you this - I had this intuition that I wanted to get you to do the test."
That, of course, did not happen, but over the last year there has been a clinical trial involving London's King's College Hospital.
People with Parkinson's have been given a smartphone app, which their relatives use to record not just that hand-clenching exercise but other aspects of the way they move.
"We think this technology can help to quantify the disease," Dan explained.
"And if you can quantify the disease, it means you can see how the disease progresses.
"It gives you lots of opportunities, in terms of treatment adjustments, interventions at the right time, potentially screening a larger cohort of patients with the technology in ways that were not possible before."
This made me think about my own situation.
Since February, I have been prescribed Sinemet - one of the most common Parkinson's drugs - in the form of two tablets taken three times a day.
While some patients see an instant impact, I cannot say I notice much effect.
If anything my main symptom, dragging my right foot, has got slightly worse. When I see my consultant every four months we discuss whether the prescription should be adjusted, but it is difficult for me to quantify my symptoms.
Dan told me this was exactly the kind of scenario they are trying to address.
"We think you will end up having a more continuous observation via machine and the doctors can look at it remotely. And with that they will be able to adjust your treatment, if needed, because potentially right now you're either overdosing or underdosing."
I am now going to get access to the trial app and look forward to finding out what it says about me.
This is just one of many projects run by a variety of companies where real-time data is collected from people with Parkinson's and other conditions via their handsets.
The search for a cure to Parkinson's goes on. We appear to be a long way off, but in the meantime quantifying a condition like mine could do a lot to improve how I and many others cope with the symptoms.
What is exciting to me is that the smartphone revolution, which I have documented since watching Steve Jobs unveil the iPhone in 2007, now promises to change healthcare just as it has transformed many other aspects of our lives.
And I hope to continue reporting on that revolution for many more years.
Will the next Mozart or Picasso come from artificial intelligence? No, but here’s what might happen instead – Ladders
As artificial intelligence has been slowly becoming more and more of a mainstream term, there has been a question rumbling in the art community:
Will AI replace creativity?
It's a fantastic question, to tell you the truth, and it certainly shows what sorts of problems we're wrestling with as a society today.
First, it's important to consider what our definition of art is in the first place. A very broad definition within the art world would be: anything created by a human to please someone else. That's what makes something art. In this sense, photography is an art. Videography is an art. Painting, music, drawing, sculpture -- all of these things are done to evoke an emotion, to please someone else: created by one human, and enjoyed by another.
Stage one: AI became a trendy marketing phrase used by everyone from growth hackers to technologists, with the intention of getting more eyeballs on their work, faster. So the term AI actually made its way into the digital art world faster than the technology itself, since people would use the term to make what they were building seem more cutting-edge than anything else in the space, regardless of whether or not it was actually utilizing true artificial intelligence.
Stage two: Companies saw the potential artificial intelligence had in providing people (in a wide range of industries) with tools to solve critical problems. For example, we use data science at Skylum to help photographers and digital content creators be more efficient when performing complex operations, like retouching photos, replacing backgrounds, etc. We use AI to make the process of creating the art more efficient, automating the boring or tedious tasks so that artists can focus more time and energy on the result instead of the process.
There's a great article in Scientific American titled "Is Art Created by AI Really Art?" And the answer is both yes and no.
It's not that artificial intelligence will fundamentally replace human artists. It's that AI will lower the barrier to entry in terms of skill, and give the world access to more creative minds because of what can easily be achieved using digital tools. Art will still require a human vision; however, the way that vision is executed will become easier, more convenient, less taxing, and so on.
For example, if you are only spending one day in Paris, and you want to capture a particular photograph of the Eiffel Tower, that day might not be the best day for your photo. The weather might be terrible, there might be thousands of people around, etc. Well, you can use artificial intelligence to not only remove people from the photograph but even replace the Eiffel Tower with an even higher resolution (from a separate data set) picture of the toweror change the sky, the weather, etc.
The vision is yours, but suddenly you are not limited by the same constraints to execute your vision.
Digital art tools are built to make the process as easy as possible for the artist. If you consider the history of photography as an art, back in the film days far more time was spent developing film than actually taking pictures. This is essentially the injustice technologists are looking to solve. The belief in the digital art community is that time shouldn't be spent doing all the boring things required for you to do what you love. Your time should be spent doing what you love and executing your vision, exclusively.
Taking this a step further, a photographer today usually spends 20-30% of their time giving a photo the look and feel they want, but 70% of their time selecting an object in Photoshop or whichever program they're using, cutting things out, creating a mask, adding new layers, etc. In this sense, the artist is more focused on the process of creating their vision, which is what creates a hurdle for other artists and potentially very creative individuals to even get into digital art creation. They have to learn these processes and these skills in order to participate, when in actuality they may be highly capable of delivering a truly remarkable result, if only they weren't limited, either by their skills, their environment, or some other challenge.
So, artificial intelligence isn't here to replace the everyday artist. If anything, the goal of technology is to allow more people to express their own individual definition of art.
There may be more Mozarts and Picassos in our society than we realize.
This article first appeared on Minutes Magazine.
What Jobs Will Artificial Intelligence Affect? – EHS Today
It's impossible to ignore the fact that advances in artificial intelligence (AI) are changing how we do our current jobs. But what has captured even more interest is how the increasing capability of this technology will affect future jobs.
In trying to determine the specific effects on particular jobs and sectors, many studies have been undertaken, but this information is hard to capture.
To add further research to this topic, the Brookings Institution issued a report on Nov. 20 presenting a new method of analyzing this issue.
"By employing a novel technique developed by Stanford University Ph.D. candidate Michael Webb, the new report establishes job exposure levels by analyzing the overlap between AI-related patents and job descriptions," the report said. "In this way, the research homes in on the impacts of AI specifically and does it by studying empirical statistical associations as opposed to expert forecasting."
The technique Webb used quantifies the overlap between the text of AI patents and the text of job descriptions, which can identify the kinds of tasks and occupations likely to be affected by particular AI capabilities.
"We find that Webb's AI measures depict a very different range of impacts on the workforce than those from robotics and software. Where the robotics and software that dominate the automation field seem mostly to involve routine or rule-based tasks, and thus lower- or middle-pay roles, AI's distinctive capabilities suggest that high-wage occupations will be some of the most exposed," the report noted.
"Patents are useful here because they provide timely predictions of the commercial relevance of specific technological applications. Occupational descriptions are also useful because they provide detailed insight into economic activities at the scale of the whole economy."
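Webb's actual method is far more elaborate, but the core idea of scoring overlap between patent text and occupation descriptions can be illustrated with a toy bag-of-words cosine similarity (the texts below are invented for illustration, not drawn from the report's data):

```python
import math
from collections import Counter

def cosine_overlap(text_a: str, text_b: str) -> float:
    """Bag-of-words cosine similarity between two texts, in [0, 1]."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

patent = "method for predicting outcomes from patient records using a trained model"
analyst_job = "analyst predicting financial outcomes from client records"
welder_job = "welding and joining metal parts on an assembly line"

# The analyst description shares far more vocabulary with the AI patent,
# so it would score as more "exposed" to this patent's capability.
print(cosine_overlap(patent, analyst_job) > cosine_overlap(patent, welder_job))  # True
```

Aggregating such scores over thousands of patents and every occupation description yields an exposure level per job, which is the shape of the analysis the report describes.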
Findings
Based on these conclusions, the report says that we have a lot to learn about AI, and that these are extremely early days in our inquiries. What's coming may not resemble what we have experienced or expect to experience.
Society should get ready for a very different pattern of impact than those that accompanied the broad adoption of robotics and software. While the last waves of automation led to increased inequality and wage polarization, it's not clear that AI will have the same effects.
The next generation of user experience is artificially intelligent – ZDNet
These days, in any discussion about enterprise computing, the action is at the front end -- delivering superior user or customer experiences and user interfaces. Artificial intelligence-based technologies are providing developers and IT teams the power they need to deliver, while reducing the repetitive, manual tasks that have characterized UX, CX and UI.
"Not only does enhanced automation help deliver the right information on demand, but it also incorporates natural language processing to get smarter about the questions being asked," relates Chris McNabb, CEO of Dell Boomi. I recently had the opportunity to chat with McNabb, who talked about the urgency of focusing on UX as a key part of enterprise computing initiatives. "You can't increase productivity without ease of use, without being smarter, and getting pervasive intelligence into your user experience," he says.
In today's digital era, the challenge has extended well beyond the data and application integration challenges enterprises have been wrestling with over the past two decades. "It's the engagement side as well that matters," he points out. "How do I engage customers, partners, prospects, and employees in a way that gives them world-class services that can make a difference in my business? Successful transformation lives both in data and in engagement."
AI, in all its forms, is taking on the UX experience for enterprises. "I think AI holds tremendous potential," McNabb says. "AI allows computer systems, for instance, to read human X-rays at a much higher or more granular read than humans can. That's a great use for AI." The potential is also seen in re-orienting work within enterprises, "predicting and dynamically creating information for people on the fly, based on knowledge that's in your platform," he continues. "How do you align that user experience to make it faster and easier for people to get their jobs done? Not, 'what is this component? What is this object? What is all the software engineering terminology?'"
While there has been a lot of concern about the looming AI skills shortage, McNabb believes the inherent automated nature of AI will help mitigate this. The skills are most needed for creating and training data models, he explains. "It is a complicated deal to train models; you need experts to establish the model, to establish the training method for that model. But if you look at how the training occurs, you don't need a tremendous amount of knowledge and experience to do that." With natural language processing, for instance, "if I just keep feeding it phrases, and keep asking it questions, the model will train itself."
In Dell Boomi's own employment of the technology, "what ends up happening for us in our use of machine learning is that the training does occur by our 9,000 customers," McNabb explains. "Every time somebody asks it a question, we validate whether the response came back good or bad, and that model gets smarter and smarter and smarter."
Emotion Artificial Intelligence Market Business Opportunities and Forecast from 2019-2025 | Eyesight Technologies, Affectiva – The Connect Report
The report examines the global Emotion Artificial Intelligence market with respect to growth and development, the trade chain, import and export data, and supply and demand.
The Global Emotion Artificial Intelligence Market report covers the most important segments of the Emotion Artificial Intelligence industry. The information in the report delivers comprehensive detail about the industry that is understandable not just for a professional but also for a layperson. The report provides information on all aspects of the market, including reviews of the final products and the key factors driving or hampering market growth.
IBM, Microsoft, Eyesight Technologies, Affectiva, NuraLogix, gestigon GmbH, Crowd Emotion, Beyond Verbal, nViso, Cogito Corporation, Kairos
Sample Copy of the Report here: http://www.marketresearchglobe.com/request-sample/1049195
North America, United States, Asia-Pacific, Central & South America, Middle East & Africa
Get an Attractive Discount on the Report at: http://www.marketresearchglobe.com/check-discount/1049195
Ask Questions to an Expert at: http://www.marketresearchglobe.com/send-an-enquiry/1049195
Customization of this Report: This Emotion Artificial Intelligence report can be customized to the customer's requirements. Please contact our sales professionals (sales@marketresearchglobe.com); we will ensure you obtain a report that works for your needs.