Category Archives: Artificial Intelligence

Artificial Intelligence in Healthcare: the future is amazing …

The role of artificial intelligence in healthcare has been a huge talking point in recent months, and there's no sign of the adoption of this technology slowing down, well, ever really.

AI in healthcare has huge, wide-reaching potential, with everything from mobile coaching solutions to drug discovery falling under the umbrella of what can be achieved with machine learning.

That being said, many healthcare executives are still hesitant to experiment with AI due to privacy concerns, data integrity concerns, or the unfortunate presence of organizational silos that make data sharing next to impossible. We've covered the main barriers to adopting AI in healthcare here.

However, the future of healthcare and the future of machine learning and artificial intelligence are deeply interconnected.

Following our comprehensive guides on Artificial Intelligence in Pharma and Blockchain in Healthcare, we've decided to take a closer look at how the healthcare industry is positively impacted by the rise in popularity of artificial intelligence.

But first, a definition:

Artificial intelligence in healthcare refers to the use of complex algorithms designed to perform certain tasks in an automated fashion. When researchers, doctors and scientists feed data into computers, the newly built algorithms can review, interpret and even suggest solutions to complex medical problems.

Applications of Artificial Intelligence in healthcare are endless. That much we know.

We also know that we've only scratched the surface of what AI can do for healthcare, which is both amazing and frightening at the same time.

At the highest level, here are some of the current technological applications of AI in healthcare you should know about (some will be explored further in this article, while others have already gotten standalone articles on Healthcare Weekly).

Medical diagnostics: the use of Artificial Intelligence to diagnose patients with specific diseases. Check out our roundup report from industry experts here. Also, an AI platform announced in March 2019 is expected to help identify and anticipate cancer development.

Drug discovery: There are dozens of health and pharma companies currently leveraging Artificial Intelligence to help with drug discovery and improve the lengthy timelines and processes tied to discovering and taking drugs all the way to market. If this is something you're interested in, check out our report titled Pharma Industry in the Age of Artificial Intelligence: The Future is Bright.

Clinical Trials: Clinical trials are, unfortunately, a real mess. Most clinical trials are managed offline, with no integrated solutions that can track progress, data gathering and drug trial outcomes. Read about how Artificial Intelligence is reshaping clinical trials here. You may also be interested in the Healthcare Weekly podcast episode with Robert Chu, CEO @ Embleema, where we talk about how Embleema is using AI and blockchain to revolutionize clinical trials. If blockchain in healthcare is your thing, you may also be interested in our Global Blockchain in Healthcare Report: the 2019 ultimate guide for every executive.

Pain management: This is still an emerging focus area in healthcare. As it turns out, by leveraging virtual reality combined with artificial intelligence, we can create simulated realities that distract patients from the current source of their pain and even help with the opioid crisis. You can read more about how this works here. Another great example of where AI and VR meet is the Johnson and Johnson Reality Program, which we've covered at length here. In short, J&J has created a simulated environment that uses rules-based algorithms to train physicians and help them get better at their jobs.

Improving patient outcomes: Patient outcomes can be improved through a wide variety of AI-driven strategies. To begin with, check our report on 10 ways Amazon's Alexa is revolutionizing healthcare and our Healthcare Weekly podcast with Helpsy's CEO Sangeeta Agarwal. Helpsy has developed the first artificial intelligence nurse in the form of a chatbot, which assists patients every step of the way in their battle with cancer.

These are just a few examples, and they're only meant to quickly give you a flavor of what artificial intelligence in healthcare is all about. Let's dig into more specific examples that every healthcare executive should be aware of in 2019.

Artificial intelligence in the medical field relies on the analysis and interpretation of huge data sets to help doctors make better decisions, manage patient information effectively, create personalized medicine plans from complex data, and discover new drugs.

Let's look at each of these amazing use cases in more detail.

AI in healthcare can prove useful within clinical decision support, helping doctors make better decisions faster by recognizing patterns of health complications far more accurately than the human brain can.
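
As a rough illustration of what that pattern recognition means in practice, here is a minimal sketch of training a model on historical patient records to flag likely complications. The feature names, data and model choice are hypothetical placeholders, not any specific vendor's system:

```python
# A minimal sketch of pattern recognition for clinical decision support:
# a model trained on past cases flags likely complications from routine
# measurements. Features and data are made-up placeholders.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical cases: [age, systolic_bp, hba1c, bmi]
X_train = [
    [54, 150, 8.1, 31.0],
    [43, 118, 5.2, 24.5],
    [67, 162, 9.4, 29.8],
    [38, 122, 5.6, 22.1],
]
y_train = [1, 0, 1, 0]  # 1 = complication developed, 0 = did not

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new patient; the probability supports, not replaces,
# the physician's decision.
risk = model.predict_proba([[59, 155, 8.7, 30.2]])[0][1]
print(f"Estimated complication risk: {risk:.0%}")
```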

The time saved and the conditions diagnosed are vital in an industry where the time taken and decisions made can be life-altering for patients.

AI in healthcare is a great addition to information management for both physicians and patients. With patients getting to doctors faster, or not at all when telemedicine is employed, valuable time and money are saved, taking the strain off healthcare professionals and increasing patient comfort.

Doctors can also further their learning and increase their abilities within the job through AI-driven educational modules, further showing the information management capabilities of AI in healthcare.

Around $5bn was invested in AI companies in 2016, and it's no surprise that healthcare is among the fastest-growing sectors. The healthcare industry is expected to receive more than $6.6bn in investments by 2021.

There are 4 main machine learning initiatives within the top 5 pharmaceutical and biotechnology companies, ranging from mobile coaching solutions and telemedicine to drug discovery and acquisitions.

Mobile coaching solutions come in the form of advising patients and improving treatment outcomes using real-time data collection. There's been a huge push in telemedicine in recent years too, with companies employing AI for minor diagnoses within smartphone apps.

Another initiative is the ability to analyze large amounts of patient data to identify treatment options, using cloud-based systems able to process natural language.

Acquisitions continue to feed the innovation needs of both large and established biotech firms, and with the development of AI, there's plenty on offer when it comes to company control.

With startups combining the worlds of AI and healthcare, there's more choice for older and larger companies looking to acquire information, systems and even the people responsible for leaps and bounds in technology.

Drug discovery is another great fit for AI, with pharma companies able to bring cutting-edge technology into the expensive and lengthy process of discovering new drugs.

The benefits of AI are instantly apparent, with time savings and pattern recognition at the heart of testing and identifying new drugs.

In early-stage drug discovery, start-ups such as BenevolentAI and Verge Genomics are known to adopt algorithms which comb through data for patterns too complex for humans to identify, saving time and innovating in ways we otherwise may not have been able to.

Insilico, another company with a heavy AI focus, has taken a different approach by using AI to design treatments not yet found in either nature or chemical libraries. Approaches using AI to simulate clinical trials before human trials have also been seen, leaving plenty of scope for what AI can create.

For more information regarding how AI is used in pharma, click here.

Growth opportunities may be hard to come by without significant investment from companies, but a major opportunity exists in the self-running engine for growth that artificial intelligence offers the healthcare sector.

AI applications within the healthcare industry have the potential to create $150 billion in annual savings for the United States by 2026, a recent Accenture study estimates. With AI in healthcare funding reaching historic highs of $600m in equity funding (Q2 2018), huge equity funding deals are projected as the years continue.

"We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10." Bill Gates

Saliently, AI represents a significant opportunity for bottom-line growth in the healthcare sector, with a combined expected 2026 value of $150bn.

The growth, however, is not unexpected: AI fits the needs of the healthcare industry so well that it's a match made in heaven.

Robot-assisted surgery, virtual nursing assistants and administrative workflow assistance are predicted to be valued at $40bn, $20bn and $18bn respectively by 2026; it's the numbers that accompany these claims that are the most impressive.

Although AI in healthcare has huge potential, as with most developments in the technological space, there are a number of known current limitations.

Experiencing teething problems with the introduction of any new technology is not rare, but these problems must be overcome for large-scale adoption of AI to occur in the healthcare market.

Ultimately, the adoption of AI will attract stakeholders who will invest in it, and successful case studies need to be highlighted and presented for future encouragement. These case studies will require some healthcare companies to act as early adopters to kickstart the process.

Privacy within healthcare is, by nature, extremely sensitive and thus confidential.

For utmost confidence in the technology, systems should be put in place to ensure data privacy and protection from hackers. Unfortunately, data breaches continue to be a common occurrence, as we reported when UW Medicine exposed 1 million patient records and in the case of Missouri Medicaid.

But privacy concerns should not be a deterrent from adopting artificial intelligence in the healthcare space. In fact, last year we did a story on how Artificial Intelligence can actually help healthcare data security.

HIPAA and a number of other patient data laws are overseen by governing organizations such as the FDA to ensure that federal standards are maintained.

The sharing of data among a variety of databases poses challenges to HIPAA compliance, and care must be taken in these areas if future developments are to succeed. Since companies developing software, and therefore AI, are also required to comply with HITRUST rules, current rules and regulations are a known barrier to AI adoption.

Deep learning, AI and machine learning do not have the ability to ask the question "why?". As a result, the logic behind decisions is not justified, meaning it takes mostly guesswork to work out how a decision was made.

How and why a decision has been made is key to the information within a treatment plan. With a lack of reasoning can come a lack of confidence in the decision, potentially rendering the technology unreliable or untrustworthy in the eyes of both patients and professionals.

When it comes to the stakeholders in the adoption of AI in healthcare, everyone is key: patients, insurance companies, pharma companies, healthcare workers and more.

Resistance to pursuing the technology at any of the aforementioned levels would cause issues and could lead to failure to incorporate the technology at the macro level. Stakeholdering is one of the top ten reasons why the healthcare industry as a whole is not innovating enough in 2019.

Diagnostic errors account for 60% of all medical errors and an estimated 40,000 to 80,000 deaths each year. As a result, artificial intelligence has been employed in a variety of different areas in a bid to reduce the death toll and the number of errors made by human judgement.

That said, there continues to be significant pushback when it comes to AI adoption in the clinical decision support process as scientists and medical personnel continue to approach the topic of AI with incredible caution.

Systems like these need minimal operator training and are designed with common output formats that directly interface with other medical software and health record systems, making them incredibly easy to use and simple to implement.

A clear output allows a clinician, within 60 seconds, to identify whether the exam was of sufficient quality and whether the patient is negative for referable diabetic retinopathy (DR) or shows signs of it. Following signs of referable DR, further action in the form of a human grader over-reading, a teleconsultation and/or a referral to an ophthalmologist may be suggested.

Despite some setbacks and limitations, new applications of artificial intelligence in healthcare are announced virtually every day. In this section, we will cover some of the most remarkable and revolutionary uses of AI in healthcare, with the understanding that this list is by no means complete and is definitely a work in progress.

With the launch of the Apple Watch Series 4 and the new electrodes found within the gadget, it's now possible for users to take an ECG directly from the wrist.

The Apple Watch Series 4 is the very first direct-to-consumer product that enables users to take an electrocardiogram directly from their wrist. The app that permits the readings provides vital data to physicians that may otherwise be missed. Rapid and skipped heartbeats are clocked, and users will now receive a notification if an irregular heart rhythm (atrial fibrillation) is detected.
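
To make the idea concrete, here is a toy sketch of flagging an irregular rhythm from the intervals between successive heartbeats (RR intervals). This is not Apple's actual algorithm; the statistic and threshold are illustrative assumptions only:

```python
# Toy irregular-rhythm detector based on RR-interval variability.
# Not Apple's algorithm; the threshold is an illustrative assumption.
import statistics

def flag_irregular_rhythm(rr_intervals_ms, cv_threshold=0.15):
    """Flag a rhythm as irregular if beat-to-beat variability is high.

    Uses the coefficient of variation (std dev / mean) of the
    intervals between successive heartbeats, in milliseconds.
    """
    mean_rr = statistics.mean(rr_intervals_ms)
    cv = statistics.stdev(rr_intervals_ms) / mean_rr
    return cv > cv_threshold

regular = [810, 802, 795, 808, 800, 805]      # steady rhythm
irregular = [620, 940, 710, 1050, 580, 890]   # erratic rhythm
print(flag_irregular_rhythm(regular))    # False
print(flag_irregular_rhythm(irregular))  # True
```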

The number of accessories and add-ons that technology companies are releasing for the Apple Watch is also beginning to cross over into the health industry. Companies such as AliveCor have released custom straps that replace the original Apple Watch band and provide a clinical-grade wearable ECG. Although the strap may be rendered useless by the Series 4, for any of the earlier watches it may prove a useful attachment for identifying atrial fibrillation.

In addition, earlier this year, Omron Healthcare made the news when it released a new smart watch called Omron HeartGuide. The watch can take a user's blood pressure on the go while interpreting blood pressure data to provide actionable insights on a daily basis.

Last year, Fitbit released its signature Charge 3 wristband, which uses artificial intelligence to detect sleep apnea.

What all these examples have in common is how wearable technologies are slowly being repurposed or augmented to improve medical outcomes. And in all these examples, artificial intelligence is leveraged, under the hood, to collect, analyze and interpret massive amounts of data which can improve the quality of life of patients everywhere.

Late 2018 marked the announcement from Aidoc that it had been granted U.S. FDA clearance for its first AI-based workflow solution, for the diagnosis of bleeds on the brain.

The system works with radiologists to flag acute intracranial haemorrhage (ICH), or bleeds on the brain, in CT scans. With over 75 percent of all patient care involving cardiovascular diseases, the workload on radiologists is massive.

Integration into the health industry is simple and won't require significant IT time, and with no additional hardware required, it's a simple resource that can be set up and maintained remotely. With a solution that assists workflow optimization and increases the number of correct, high-quality scans, demand for this AI-enabled technology is expected to be huge.

IDx has developed an AI diagnostic system, IDx-DR, that autonomously analyzes images of the retina for signs of diabetic retinopathy. The software has received FDA approval to be used in the US. The system works in five steps:

1. Using a fundus camera, a trained operator captures two color, 45-degree field-of-view (FOV) images per eye

2. The images are transferred to the IDx-DR client on a local computer

3. The images are then submitted to the IDx-DR analysis system

4. Within 60 seconds, IDx-DR provides an image quality or disease output and follow-up care instructions

5. If negative for mtmDR, the patient can be rescreened at a later date. If positive for mtmDR, the system refers the patient to eye care. (A sketch of this flow follows.)
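
For clarity, the five-step flow can be restated as a short pseudocode-style sketch. The function names and result fields here are hypothetical stand-ins, not IDx's actual API:

```python
# Schematic restatement of the five IDx-DR steps above.
# Function names and result fields are hypothetical stand-ins.

def screen_patient(images, analyze):
    """images: fundus photos already captured and transferred (steps 1-2)."""
    result = analyze(images)                 # step 3: submit to the analysis system

    if result["quality"] == "insufficient":  # step 4: output within 60 seconds
        return "Insufficient image quality: retake images"
    if result["mtmDR_positive"]:             # step 5: disease output
        return "Positive for mtmDR: refer patient to eye care"
    return "Negative for mtmDR: rescreen at a later date"

# Demo with a stub analysis function standing in for the real system
print(screen_patient(["od_1.png", "os_1.png"],
                     lambda imgs: {"quality": "good", "mtmDR_positive": False}))
```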

iCAD announced the launch of iReveal back in 2015, with the goal of monitoring breast density via mammography to support accurate decisions in breast cancer screening.

With an estimated 40% of women in the US having dense breast tissue, which can prevent mammography from revealing potentially cancerous tissue, the issue is huge and a solution was imperative.

The technology uses AI to assess breast density in order to identify patients that may experience reduced sensitivity to digital mammography due to dense breast tissue.

Ken Ferry, CEO of iCAD, stated: "With iReveal, radiologists may be better able to identify women with dense breasts who experience decreased sensitivity to cancer detection with mammography."

Mr. Ferry also added: "With the increasing support for the reporting of breast density across the US, there is a significant opportunity to drive adoption of iReveal by existing users of the PowerLook AMP platform and with new customers, which represents an incremental $100 million market opportunity over the next few years. Longer-term, we plan to integrate the iReveal technology into our Tomosynthesis CAD product, which is the next large growth opportunity for our Cancer Detection business."

Ultimately, the system remains at the forefront of breast cancer identification in women in the U.S., and with so many lives expected to be saved, I think everyone can agree what a fantastic use of AI it is.

QuantX is the first MRI workstation to provide a true computer-aided diagnosis, delivering an AI-based set of tools to help radiologists in assessment and characterization of breast abnormalities.

Using MR image data, QuantX draws on a deep database of known outcomes and combines it with advanced machine learning and quantitative image analysis for real-time analytics during scans. A fast, comprehensive display is provided, with all processing on demand in real time and rapid display and reformatting of MPRs, full MIPs, thin MIPs and subtractions.

A QI Score, a clinical metric correlated with the likelihood of malignancy, is calculated from the images and regions of interest during scans. This is paired with a similar-case compare, a tool which allows up to 45 similar cases from a reference library to be displayed for each analyzed lesion (sketched below).
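
One minimal way a similar-case compare can work under the hood is a nearest-neighbour search over a library of quantitative features. The features and distance metric below are assumptions for illustration; QuantX's internals are not public here:

```python
# Sketch of a "similar case compare": retrieve the most similar
# reference cases for an analyzed lesion by distance in a
# quantitative-feature space. Features here are hypothetical.
import numpy as np

def similar_cases(lesion_features, reference_library, k=45):
    """Return indices of the k nearest reference cases."""
    diffs = reference_library - lesion_features
    distances = np.linalg.norm(diffs, axis=1)
    return np.argsort(distances)[:k]

# Hypothetical features: [volume_mm3, margin_irregularity, enhancement_rate]
library = np.random.rand(500, 3)        # 500 reference cases with known outcomes
lesion = np.array([0.4, 0.7, 0.9])      # the analyzed lesion
print(similar_cases(lesion, library, k=5))
```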

This information is passed on to radiologists to make accurate clinical decisions, decreasing the number of incorrect diagnoses in high-risk environments.

Coronary calcium scoring is a biomarker of coronary artery disease and quantification of this coronary calcification is a very strong predictor for cardiovascular events, including heart attacks or strokes.

Conventional coronary calcium scoring requires a dedicated cardiac, ECG-gated CT performed with and without contrast.

In recent times, however, a reliable derivation of the coronary calcium score has been achieved algorithmically, using AI on low-dose chest CT data. Zebra Medical's scoring algorithm takes these standard, non-contrast chest CTs and automatically calculates coronary calcium scores.
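
The conventional quantification referred to here is commonly the Agatston score, where each calcified lesion contributes its area multiplied by a weight derived from its peak attenuation in Hounsfield units. Below is a short sketch of that standard published method; it is not a claim about Zebra Medical's proprietary pipeline:

```python
# Sketch of the standard Agatston calcium score: each lesion scores
# (area in mm^2) x (weight from its peak CT attenuation in Hounsfield
# units). The conventional method, not Zebra Medical's own pipeline.

def density_weight(peak_hu):
    """Weight factor from a lesion's peak attenuation."""
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0  # below 130 HU is not counted as calcification

def agatston_score(lesions):
    """lesions: list of (area_mm2, peak_hu) per detected lesion."""
    return sum(area * density_weight(hu) for area, hu in lesions)

# Example: three lesions found across the CT slices
print(agatston_score([(4.0, 210), (2.5, 450), (1.2, 140)]))  # 4*2 + 2.5*4 + 1.2*1 = 19.2
```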

The tool is vital for the early detection of people at high risk of severe cardiovascular events who would otherwise not be aware of the risk without extensive testing.

San Francisco-based, privately held Bay Labs gained FDA approval in June 2018 for its fully automated, AI-based selection of the left ventricular ejection fraction (EF). Note that Healthcare Weekly has included Bay Labs in our list of the most promising healthcare startups to watch in 2019.

With EF noted as the single most widely used metric of cardiac function, and the basis for numerous clinical decisions, Bay Labs' AI-based EchoMD and AutoEF algorithms work to reduce the errors and streamline the workflow that surround this part of the industry. The algorithms eliminate the need to manually select views, choose the best clips, and manipulate them for quantification, which is often noted as a particularly time-consuming and highly variable process.

The algorithms automatically review all relevant information and digital clips from a patient's echocardiography study and proceed to rate them, with image quality as the focus criterion. What may be most impressive about Bay Labs' artificial intelligence solution is the method by which the system learned clip selection, in which over 4 million images were used to maximise algorithm success.

Ultimately, EchoMD and AutoEF will strive to maximise workflow efficiency while reducing the error in clinical decision making by helping physicians make correct choices.

Neural Analytics, a medical device company tackling brain health, announced a device for paramedic stroke diagnosis back in 2017, revolutionising the way that paramedics diagnose stroke victims.

Neural Analytics' Lucid M1 Transcranial Doppler Ultrasound System tackles the issue of expensive and time-consuming stroke diagnosis for patients who suffer blood flow disorders.

This ultrasound system is designed for measuring cerebral blood flow velocities. This is no joke. If successful, this technology will change how early doctors can detect stroke and could drastically improve patient outcomes.

The use of Transcranial Doppler (TCD), a type of ultrasound, allows AI to assess the brain's blood vessels from outside the body, preventing the need for more invasive tests. The AI software helps physicians detect stroke and other brain disorders caused by blood flow issues, increasing the likelihood of correct clinical decisions.

Icometrix is a company with the mission to transform patient care through imaging AI. With MRI brain interpretation used to decrease error in clinical diagnosis, the company is well on the way to changing the way that abnormalities are discovered within the brain.

The system it has developed objectively quantifies brain white matter abnormalities in patients, decreasing the time taken, increasing accuracy and improving care for patients with brain issues. Changes in the brain are confidently evaluated with a focus on structure, and the system's increased sensitivity and augmented detection ultimately lead to improved healthcare.

With quantification of clinically relevant brain structures for individual patients and a range of identifiable neurological disorders, there's plenty that AI has to offer in this space.

The OsteoDetect software is an AI-based detection and diagnostic tool that utilises intelligent algorithms to analyze two-dimensional X-rays.

The software searches for damage in the bone, specifically a common wrist fracture called the distal radius fracture. It utilises machine learning techniques to identify these problem areas and mark the location of the fracture on the image, assisting the physician with identification of a break.


10 Best Artificial Intelligence Course & Certification [2019 …

Our global team of experts has done extensive research to come up with this list of the 14 best free and paid Artificial Intelligence courses, tutorials, training programs and certifications available online for 2019. These are relevant for beginners and intermediate learners as well as experts.

If learning Machine Learning is on your mind, then there is no looking further. Created by Andrew Ng, Professor at Stanford University, this program has enrolled more than 1,680,000 students and professionals globally, who have rated it very highly. The course provides an introduction to the core concepts of the field, such as supervised learning, unsupervised learning, support vector machines, kernels, and neural networks. Draw from numerous case studies and applications, and get hands-on practice applying the theoretical concepts. By the end of the classes, you will have the confidence to apply your knowledge to real-life scenarios. You may also like to have a look at some of the best machine learning courses.

Key USPs-

Understand parametric and non-parametric algorithms, clustering, dimensionality reduction among other important topics.

Gain best practices and advice from the instructor.

Interact with your peers in a community of like-minded learners from all levels of experience.

Real world based case studies give you the opportunity to understand how problems are solved on a daily basis.

The flexible deadline allows you to learn as per your convenience.

Learn to apply learning algorithms to build smart robots, understand text and audio, and perform database mining.

Duration: 55 hours

Rating: 4.9 out of 5

You can Sign up Here

Review: "This course provides a thorough, end-to-end immersion into the world of machine learning. Not only does it cover clear explanations of theory, but it also highlights practical pointers and words of caution. Highly recommended course."

This course is created for individuals who want to learn about strategies and techniques of artificial intelligence to solve business problems. After the fundamental topics are discussed, you will go over how AI is impacting different industries, as well as the various tools involved in developing efficient solutions. By the end of the program, you will have numerous strategies under your belt that can be used to improve the performance of your organization.

Key USPs

Learn to manage customer expectations and develop AI models accordingly.

Multiple case studies that allow you to get a better understanding of the challenges faced in the real world.

Get answers to your queries from a dedicated support team.

Complete the exercises and get feedback on your performance.

Work with real-life based data.

Pass the exam with at least 80% to earn the certification.

Duration: 2 months, 4 to 6 hours per week

Rating: 4.5 out of 5

You can Sign up Here

If you want to jumpstart a career in AI, then this specialization will help you achieve that. Through this array of 5 courses, you will explore the foundational topics of deep learning, understand how to build neural networks, and lead successful ML projects. Along with this, there are opportunities to work on case studies from various real-world industries. The practical assignments will allow you to practice the concepts in Python and TensorFlow. Additionally, there are talks from top leaders in the field that will give you motivation and help you understand the scenarios in this line of work. In case you are interested, you may also want to check out the best Python courses.

Key USPs-

Learn about convolutional networks, RNNs, BatchNorm, Dropout and more.

The lessons will help you learn different techniques you can use to build models that solve real-life problems.

Real-world case studies in fields such as healthcare, autonomous driving, sign language reading, music generation, and natural language processing are covered.

Gain best practices and advice from the industry experts and leaders.

Complete all the assessments and assignments as per your schedule to earn the specialization completion certification.

Duration: 3 months, 11 hours per week

Rating: 4.7 out of 5

You can Sign up Here

Review: "Course content is very good. Andrew Ng's style of teaching is phenomenal. He has a knack for uncomplicating an otherwise complex subject matter. Highly recommended for anyone who is trying to understand the fundamentals of neural networks and deep learning."

Artificial intelligence is considered to be one of the more complex topics in technology, but its use in our daily lives cannot be overstated. So if you want your organization to become better at using this technology, then this program is worth a look. In the classes, you will learn the meaning behind basic and crucial terminologies, what AI can and cannot do, how to spot opportunities to apply AI solutions to problems in your organization, and more. By the end of the lectures, you will be proficient in the business aspects of AI and able to apply them aptly in relevant situations. The course is created by Andrew Ng, a pioneer in the field of artificial intelligence and co-founder of Coursera.

Key USPs-

Understand what it is like to build machine learning and data science projects.

Work with an artificial intelligence team and build a strategy in your company.

Navigate ethical and societal discussions surrounding this field.

The lessons do not require any prerequisites, so they can be taken by anyone with any level of experience.

The deadlines of the classes can be adjusted as per your convenience.

Duration: 4 weeks, 2 to 3 hours per week

Rating: 4.9 out of 5

You can Sign up Here

Review: "It's a fantastic course by Andrew. Everyone should take it to understand how AI can impact any software system."

Enroll in this certification to gain expertise in one of the fastest-growing areas of computer science through a series of lectures and assignments. The classes will help you get a solid understanding of the guiding principles of artificial intelligence. With equal emphasis on theory and practice, these lessons will teach you to deal with real-world problems and come up with suitable AI solutions. With this credential in your bag, it is safe to say that you will have an upper hand at job interviews and other opportunities. Don't forget to check out the list of the best Deep Learning courses out there.

Key USPs-

The videos guide you through all the fundamental concepts beginning from the basic topics to more advanced ones.

Apply the concepts of machine learning to real-life challenges and applications.

Thorough instructions are provided for configuring and navigating through the required software.

Work on designing and harnessing the capabilities of neural networks.

The classes are divided into 4 parts along with relevant examples and demonstrations.

Apply the knowledge gained in these lectures in an array of fields such as robotics, vision and physical simulations.

Duration: 12 weeks per course, 8 to 10 hours per week, per course

Rating: 4.5 out of 5

You can Sign up Here

Offered by IBM, this introductory course will guide you through the basics of artificial intelligence. You will learn what AI is and how it is used in the software and app development industry. During the course, you will be exposed to various issues and concerns that surround artificial intelligence, such as ethics, bias, and jobs. After completing the course, you will also demonstrate AI in action with a mini project designed to test your knowledge of AI. Moreover, after finishing the project, you will receive your certificate of completion from Udacity.

Key USPs

Learn and understand AI concepts and useful terms like machine learning, deep learning, and neural networks

No prior knowledge of programming or computer science is required to enroll in this course

Get advice from experts about learning artificial intelligence better and how to start a career in this growing field

Be eligible to enter other classes and programs, like AI Foundations and the IBM Applied AI professional certificate, after finishing this course

100% flexible course with no deadlines and the freedom to study at your own pace

Duration: 4 weeks, 1-2 hours/week

Rating: 4.7 out of 5

You can Sign up Here

This program is designed to help you gain the skills needed to build deep learning predictive models for AI. While you are free to take the lessons in any order, it is advised to follow the suggested format so that you can build your knowledge through gradually more advanced concepts. After completing the first eight mandatory courses, you can choose from four options for the ninth one before getting started with the capstone project.



A.I. Artificial Intelligence (2001) – IMDb

Storyline

In the not-so-far future, the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his "mother", Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David lives happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001

Gross USA: $78,616,689

Cumulative Worldwide Gross: $235,926,552

Runtime: 146 min

Aspect Ratio: 1.85 : 1


Artificial Intelligence | GE Research

At GE, Artificial Intelligence (AI) development is primarily focused on connecting minds and industrial machines to enable intelligent and user-friendly products and services that move, cure and power the world. GE Research spearheads this charter via the invention and deployment of AI solutions that can execute on industrial devices, at the edge or in the cloud.

AI research is practiced as a multidisciplinary exercise at GE, where insights from data-driven machine learning are fused with domain-specific knowledge drawn from areas such as materials, physics, biology and design engineering to amplify the quality, as well as the causal veracity, of the predictions derived, in what we call hybrid AI. We are creating state-of-the-art perception and reasoning capabilities for our AI technology to observe and understand contextual meaning, to improve the performance and life of our assets, industrial systems and human health. We are developing continuous learning systems that teach or learn from other assets or agents and learn from real and virtual experiences to understand and improve behavior.
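
One common pattern behind such hybrid approaches, sketched below purely as an illustration (this is not GE's implementation, and the formula is invented), is to let a first-principles model carry the known physics and train a machine learning model only on the residual the physics misses:

```python
# Illustrative "hybrid AI" pattern: physics model for known behavior,
# ML trained only on the residual. Not GE's implementation.
import numpy as np
from sklearn.linear_model import LinearRegression

def physics_model(load):
    """Known first-principles estimate (hypothetical formula)."""
    return 2.0 * load + 5.0

loads = np.linspace(0, 10, 50).reshape(-1, 1)
measured = 2.0 * loads.ravel() + 5.0 + 0.3 * loads.ravel() ** 1.5  # real data deviates

residual = measured - physics_model(loads.ravel())
ml_correction = LinearRegression().fit(loads, residual)  # learn only the gap

def hybrid_predict(load):
    """Physics prediction plus the learned data-driven correction."""
    return physics_model(load) + ml_correction.predict([[load]])[0]

print(hybrid_predict(7.0))
```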

Some key challenges we tackle include a lack of sufficient labels needed for traditional supervised learning approaches, the need to ingest and link multiple data modalities, and the need to build AI solutions that are interpretable due to safety-related regulatory requirements.

State-of-the-art capabilities in computer vision, machine learning, knowledge representation, reasoning and human system interactions are used to robustly monitor, assess and predict the performance and health of assets; information that, when coupled with uncertainty quantification and assurance, provides what is needed to multi-objectively optimize customer-specific metrics.

Examples of customer outcomes enhanced by AI products include reduced downtime on assets through AI-driven proactive intervention (e.g., airline delays and cancellations), increased throughput (e.g., optimal control of wind turbine settings to maximize farm output), and reduced costs (e.g., optimal power plant operation to minimize fuel costs). GE Research is developing and integrating artificial intelligence in healthcare by working to incorporate the technology into every aspect of the patient journey (e.g., improved disease diagnosis, and augmenting doctors and clinicians by increasing workflow efficiencies to save precious time). In addition to asset awareness and management, active AI research areas include computer vision, automation, autonomy, user experience, augmented reality and robotics.


Artificial Intelligence & the Pharma Industry: What’s Next …

Artificial intelligence in Pharma refers to the use of automated algorithms to perform tasks which traditionally rely on human intelligence. Over the last five years, the use of artificial intelligence in the pharma and biotech industry has redefined how scientists develop new drugs, tackle disease, and more.

Given the growing importance of Artificial Intelligence for the pharma industry, we wanted to create a comprehensive report which helps every business leader understand the biggest breakthroughs in the biotech space which are assisted by the deployment of artificial intelligence technologies.

Last year, Verdict AI asked businesses how vital artificial intelligence will be in their respective industries and over 70% of them thought it would be very important. From the same group, only 11% of businesses have not considered investing in AI technology.

Furthermore, according to Narrative Science, 61% of companies investing in innovative strategies are using AI to identify opportunities that they would have otherwise missed. For pharmaceutical businesses that thrive on innovation, this is an important statistic to understand.

This article aims to help business executives learn what to expect from artificial intelligence in pharma. It will cover:

Artificial intelligence and pharma can help save more lives than ever before.

A study published by the Massachusetts Institute of Technology (MIT) has found that only 13.8% of drugs successfully pass clinical trials. Furthermore, a company can expect to pay between $161 million and $2 billion for any drug to complete the entire clinical trials process and get FDA approval.

With this in mind, pharma businesses are using AI to increase the success rates of new drugs while decreasing operational costs at the same time.

Novartis is embracing advancements in AI technology to create new and improved treatments and to find ways to get people access to treatment quickly.

Novartis is currently using machine learning to classify digital images of cells, each treated with different experimental compounds. The machine learning algorithms collect and group compounds that have similar effects together, before passing on the clean data to researchers who can decide how to leverage these insights in their work.

Drug discovery often takes a long time to test compounds against samples of diseased cells. Finding compounds that are biologically active and are worth investigating further requires even more analysis.

To speed up this screening process, Novartis research teams use images from machine learning algorithms to predict which untested compounds might be worth exploring in more detail.

As computers are far quicker compared to traditional human analysis and laboratory experiments in uncovering new data sets, new and effective drugs can be made available sooner, while also reducing the operational costs associated with the manual investigation of each compound.

But there's another reason why Novartis is at the top of our list. Its CEO, Vas Narasimhan, is one of the forward-looking digital leaders in healthcare, constantly advocating for the role AI, predictive analytics and big data can play in pharma. David Shaywitz, in an excellent Forbes article, summarizes all the challenges Novartis is facing in adopting AI, but also how the company is still pursuing AI, with some notable results in clinical trials and finance.

Verge Genomics develops drugs by automating the discovery process, using automated data gathering and analysis to create solutions to some of the most complex diseases known today, including ALS and Alzheimer's.

Cost aside, one of the reasons why drug discoveries fail is that they only target one disease gene at a time.

Using the same technologies that power Google's search engine, Verge has discovered ways to map out the hundreds of genes responsible for causing a disease and then find drugs that target them all at once.

Their platform is specifically designed for neurological diseases and can predict the effect of new treatments, while also reducing the cost of drug development.

Bayer and Merck & Co were granted the Breakthrough Device Designation from the FDA for artificial intelligence software that aims to support clinical decision-making for chronic thromboembolic pulmonary hypertension (CTEPH).

This form of pulmonary hypertension affects around five people per million, per year around the world. Its symptoms are similar to conditions like asthma and COPD, meaning it can be tricky to accurately diagnose.

The aim of the software is to help radiologists, who are often on the frontline for identifying CTEPH patients, detect certain patterns faster. The AI would analyze image findings from cardiac, lung perfusion and pulmonary vessel imaging in combination with a patient's clinical history, and then pass the insights to the radiologists leveraging the technology.

Both Bayer and Merck note that the development of their CTEPH Pattern Recognition Artificial Intelligence Software remains complex due to the nature of the disease they are attempting to better diagnose.

However, should it prove successful, the tool will eventually be able to assist in diagnosing patients earlier and more reliably, leading to earlier treatment and better patient outcomes.

Cyclica is a biotechnology company that combines biophysics and AI to discover drugs faster, safer, and cheaper. They have partnered with Bayer to create an AI-augmented integrated network of cloud-based technologies, known as the Ligand Express.

The Ligand Express screens small-molecule drugs against repositories of structurally characterized proteins to determine polypharmacological profiles. From here, the company identifies significant protein targets and then uses artificial intelligence to determine the drug's effect on these targets. Finally, the AI produces a visual output of how the drug and proteins interact.

By understanding how small-molecule drugs interact with all proteins in the body, Ligand Express can produce the best solution, understand potential side effects, and determine new uses for existing drugs.

AI in pharmacology can also be used to find cures for known diseases such as Parkinson's and Alzheimer's, as well as rare diseases. This is great news considering the fact that 95% of rare diseases do not have a single FDA-approved treatment, according to Global Genes.

Traditionally, pharmaceutical companies don't focus their efforts on treatments for rare diseases because the return on investment doesn't warrant the time and cost it takes to produce the drugs.

However, with advancements in AI technology, there has been a renewed interest in rare disease treatments.

Tencent Holdings has partnered with UK-based Medopad to build artificial intelligence algorithms capable of remotely monitoring patients with Parkinson's disease and reducing how long it takes to conduct a motor function assessment from over 30 minutes to less than three minutes.

The AI will leverage smartphone apps that monitor how a patient opens and closes their hands. The smartphone's camera captures the patient's movement to determine the severity of their symptoms. The frequency and amplitude scores the patient receives can determine the severity of their Parkinson's (a toy sketch of this kind of scoring follows).
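
As a toy illustration of that scoring, the sketch below extracts a dominant frequency and an amplitude from a synthetic hand-aperture signal. The frame rate, signal and measures are assumptions; the real app tracks the hand through the phone's camera:

```python
# Toy sketch: score a hand open/close signal by frequency and
# amplitude, the two quantities the text says drive the severity
# score. The signal is synthetic, not real camera tracking data.
import numpy as np

fs = 30.0                                      # assumed camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)                   # 10 seconds of tracking
aperture = 0.8 * np.sin(2 * np.pi * 2.5 * t)   # synthetic 2.5 Hz movement

# Dominant movement frequency from the FFT of the (de-meaned) signal
spectrum = np.abs(np.fft.rfft(aperture - aperture.mean()))
freqs = np.fft.rfftfreq(aperture.size, d=1 / fs)
dominant_freq = freqs[spectrum.argmax()]

# Amplitude as half the peak-to-peak excursion
amplitude = (aperture.max() - aperture.min()) / 2

print(f"frequency: {dominant_freq:.2f} Hz, amplitude: {amplitude:.2f}")
```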

This will allow doctors to remotely monitor patients and set new drug doses. If a patient's treatment program needs changing, the AI will raise an alert to notify their doctor and arrange a checkup if required.

The technology will also reduce the patient's costs of traveling back and forth to the clinic.

Mission Therapeutics, a drug creation company known for its chemistry and proprietary enzyme platform, and AbbVie, a pharmaceutical business known for its strong neurodegenerative disease research, have partnered to develop Deubiquitinase (DUB) inhibitors in the fight against Parkinson's and Alzheimer's.

Both Alzheimer's and Parkinson's patients have an abnormal accumulation of misfolded, toxic proteins, resulting in impaired brain functionality and the death of nerve cells. This is where DUBs come in: they regulate the degradation of these proteins to maintain their health and stability.

By modulating specific DUBs within the brain, Mission Therapeutics is aiming to find potential treatments which will enable the degradation of these toxic proteins and prevent their accumulation.

Healx is a promising startup focused on accelerating treatments for rare diseases, and artificial intelligence is at the center of its operations. Its AI platform, HealNet, enables scientists to increase productivity in rare-disease drug discovery while simultaneously reducing time, cost and risk.

The company isn't directly focused on creating new drugs to cure these conditions. Instead, it uses AI technology to examine existing drugs and repurpose them for curing rare diseases.

HealNet uses machine learning techniques to access data from a range of sources, including scientific literature, patents, clinical trials, disease symptoms, drug targets, multiomics data and chemical structures.

Drug adherence is huge for pharma. In simple terms, to prove the success rate of a drug, a pharma company uses voluntary participants in clinical studies. If these patients don't follow the trial rules, they are either removed from the trial or they poison the drug results. As a result, strong drug adherence is crucial to any pharma company out there.

Another critical component of a successful drug trial is that participants take the necessary dosage of a particular drug at all times. For example, it's been reported that machine learning algorithms can cut incorrect drug dosage intake by as much as 50% for glioblastoma patients.

Traditional methods to measure drug adherence require patients to submit the data themselves without any evidence of them taking a pill or other type of treatment. They are also subject to tampering, such as deceptively removing pills to feign higher adherence.

AiCure, a New York-based mobile SaaS platform, has developed an image recognition algorithm that removes these issues. Using a mobile phone, AiCure tracks drug adherence by videoing the patient swallowing a pill. The facial recognition system then confirms that the right person took the right pill.

In 2016, the company published findings from a study confirming that the use of its AI platform significantly increases adherence in patients with schizophrenia, as measured by drug concentration levels. The results showed cumulative adherence of 89.7% for those using the AiCure platform, compared to 71.9% for subjects using modified Directly Observed Therapy (mDOT).

Even with the obvious advantage that this brings, AI will also decrease costs and accelerate drug development for clinical research and practices.

A research team led by the National University of Singapore (NUS) has used an AI platform called CURATE.AI to successfully treat a patient with advanced cancer, completely halting disease progression.

In this clinical study, a patient with metastatic castration-resistant prostate cancer (MCRPC) was given a novel drug combination consisting of an investigational drug, namely ZEN-3694, and an already-approved prostate cancer drug, enzalutamide.

CURATE.AI was used by the research team to continuously identify the optimal doses of each drug to result in a durable response, giving each individual patient the ability to live a free and healthy life.


What’s the Difference Between Robotics and Artificial …

Is robotics part of AI? Is AI part of robotics? What is the difference between the two terms? We answer this fundamental question.

Robotics and artificial intelligence serve very different purposes. However, people often get them mixed up. A lot of people wonder if robotics is a subset of artificial intelligence or if they are the same thing.

Let's put things straight.

The first thing to clarify is that robotics and artificial intelligence are not the same thing at all. In fact, the two fields are almost entirely separate.

A Venn diagram of the two would show two largely separate circles, overlapping only where artificially intelligent robots sit.

I guess that people sometimes confuse the two because of the overlap between them: Artificially Intelligent Robots.

To understand how these three terms relate to each other, let's look at each of them individually.

Robotics is a branch of technology which deals with robots. Robots are programmable machines which are usually able to carry out a series of actions autonomously, or semi-autonomously.

In my opinion, there are three important factors which constitute a robot:

I say that robots are "usually" autonomous because some robots aren't. Telerobots, for example, are entirely controlled by a human operator but telerobotics is still classed as a branch of robotics. This is one example where the definition of robotics is not very clear.

It is surprisingly difficult to get experts to agree exactly what constitutes a "robot." Some people say that a robot must be able to "think" and make decisions. However, there is no standard definition of "robot thinking." Requiring a robot to "think" suggests that it has some level of artificial intelligence.

However you choose to define a robot, robotics involves designing, building and programming physical robots. Only a small part of it involves artificial intelligence.

Artificial intelligence (AI) is a branch of computer science. It involves developing computer programs to complete tasks which would otherwise require human intelligence. AI algorithms can tackle learning, perception, problem-solving, language-understanding and/or logical reasoning.

AI is used in many ways within the modern world. For example, AI algorithms are used in Google searches, Amazon's recommendation engine and SatNav route finders. Most AI programs are not used to control robots.

Even when AI is used to control robots, the AI algorithms are only part of the larger robotic system, which also includes sensors, actuators and non-AI programming.

Often, but not always, AI involves some level of machine learning, where an algorithm is "trained" to respond to a particular input in a certain way by using known inputs and outputs. We discuss machine learning in our article Robot Vision vs Computer Vision: What's the Difference?
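
That "trained using known inputs and outputs" description is ordinary supervised learning. Here is a minimal, self-contained example with made-up data:

```python
# "Training" as described above: fit a model on known input/output
# pairs so it responds sensibly to new inputs. Data is made up.
from sklearn.linear_model import LogisticRegression

X = [[0.1], [0.4], [0.35], [0.8], [0.9], [0.75]]  # known inputs
y = [0, 0, 0, 1, 1, 1]                            # known outputs

model = LogisticRegression().fit(X, y)
print(model.predict([[0.2], [0.85]]))  # responds to unseen inputs (expected: [0 1])
```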

The key aspect that differentiates AI from more conventional programming is the word "intelligence." Non-AI programs simply carry out a defined sequence of instructions. AI programs mimic some level of human intelligence.

Artificially intelligent robots are the bridge between robotics and AI. These are robots which are controlled by AI programs.

Many robots are not artificially intelligent. Up until quite recently, all industrial robots could only be programmed to carry out a repetitive series of movements. As we have discussed, repetitive movements do not require artificial intelligence.

Non-intelligent robots are quite limited in their functionality. AI algorithms are often necessary to allow the robot to perform more complex tasks.

Let's look at some examples.

A simple collaborative robot (cobot) is a perfect example of a non-intelligent robot.

For example, you can easily program a cobot to pick up an object and place it elsewhere. The cobot will then continue to pick and place objects in exactly the same way until you turn it off. This is an autonomous function because the robot does not require any human input after it has been programmed. However, the task does not require any intelligence.
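
Expressed as code, such a non-intelligent cobot program is just a fixed loop: it acts autonomously but never perceives or decides. The robot API calls below are hypothetical stand-ins, not any vendor's real interface:

```python
# The cobot example as code: a fixed, repeating sequence with no
# perception or decision-making. move_to / close_gripper /
# open_gripper are hypothetical robot-API calls.

PICK_POSE = (0.30, 0.10, 0.05)    # fixed, pre-programmed coordinates
PLACE_POSE = (0.30, -0.20, 0.05)

def pick_and_place_forever(robot):
    while True:                   # runs unchanged until switched off
        robot.move_to(PICK_POSE)
        robot.close_gripper()
        robot.move_to(PLACE_POSE)
        robot.open_gripper()
```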

You could extend the capabilities of the cobot by using AI.

Imagine you wanted to add a camera to your cobot. Robot vision comes under the category of "perception" and usually requires AI algorithms.

For example, say you wanted the cobot to detect the object it was picking up and place it in a different location depending on the type of object. This would involve training a specialized vision program to recognize the different types of object. One way to do this is using an AI algorithm called Template Matching, which we discuss in our article How Template Matching Works in Robot Vision.
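
For a flavour of the technique, here is a standard OpenCV template-matching call. The file names and threshold are placeholders, and this shows the generic method rather than the exact program from that article:

```python
# Standard OpenCV template matching: slide a reference image of the
# object over the camera frame and find where it matches best.
# File names are placeholders for real images.
import cv2

frame = cv2.imread("workspace.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

if best_score > 0.8:              # confidence threshold (tunable)
    print(f"Object found at {best_loc} with score {best_score:.2f}")
else:
    print("Object not found")
```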

As you can see, robotics and artificial intelligence are really two separate things. Robotics involves building robots whereas AI involves programming intelligence.

However, I leave you with one slight confusion: software robots.

"Software robot" is the term given to a type of computer program which autonomously operates to complete a virtual task. They are not physical robots, as they only exist within a computer. The classic example is a search engine webcrawler which roams the internet, scanning websites and categorizing them for search. Some advanced software robots may even include AI algorithms. However, software robots are not part of robotics.

Do you have any fundamental robotics questions you would like answered? Tell us in the comments below or join the discussion on LinkedIn, Twitter, Facebook or the DoF professional robotics community.


The Impact of Artificial Intelligence – Widespread Job Losses

Advances in Artificial Intelligence (AI) and automation will transform our world. The current debate centers not on whether these changes will take place but on how, when, and where the impact of artificial intelligence will hit hardest. In this post, I'll be exploring both optimistic and pessimistic views of artificial intelligence, automation, job loss, and the future.

Questions around the impact of artificial intelligence and automation are critical for us to consider. While technology isn't inherently good or evil, in the hands of humans it has a great capacity for both. I'd certainly prefer the good over the evil, and that will depend on the choices that we make today.

Technology-driven societal changes, like what we're experiencing with AI and automation, always engender concern and fear, and for good reason. A two-year study from the McKinsey Global Institute suggests that by 2030, intelligent agents and robots could replace as much as 30 percent of the world's current human labor. McKinsey suggests that, in terms of scale, the automation revolution could rival the move away from agricultural labor during the 1900s in the United States and Europe, and more recently, the explosion of the Chinese labor economy.

McKinsey reckons that, depending upon various adoption scenarios, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely. How could such a shift not cause fear and concern, especially for the world's vulnerable countries and populations?

The Brookings Institution suggests that even if automation only reaches the 38 percent mean of most forecasts, some Western democracies are likely to resort to authoritarian policies to stave off civil chaos, much like they did during the Great Depression. Brookings writes, "The United States would look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft." With frightening yet authoritative predictions like those, it's no wonder AI and automation keep many of us up at night.

The Luddites were textile workers who protested against automation, eventually attacking and burning factories because they feared that unskilled machine operators were robbing them of their livelihood. The Luddite movement occurred all the way back in 1811, so concerns about job losses or job displacement due to automation are far from new.

When fear or concern is raised about the potential impact of artificial intelligence and automation on our workforce, a typical response is thus to point to the past; the same concerns are raised time and again and prove unfounded.

In 1961, President Kennedy said that "the major challenge of the sixties is to maintain full employment at a time when automation is replacing men." In the 1980s, the advent of personal computers spurred "computerphobia," with many fearing computers would replace them.

So what happened?

Despite these fears and concerns, every technological shift has ended up creating more jobs than were destroyed. When particular tasks are automated, becoming cheaper and faster, you need more human workers to do the other functions in the process that haven't been automated.

"During the Industrial Revolution more and more tasks in the weaving process were automated, prompting workers to focus on the things machines could not do, such as operating a machine, and then tending multiple machines to keep them running smoothly. This caused output to grow explosively. In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labour required per yard of cloth fell by 98%. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. In other words, technology gradually changed the nature of the weaver's job, and the skills required to do it, rather than replacing it altogether." The Economist, Automation and Anxiety

Looking back on history, it seems reasonable to conclude that fears and concerns regarding AI and automation are understandable but ultimately unwarranted. Technological change may eliminate specific jobs, but it has always created more in the process.

Beyond net job creation, there are other reasons to be optimistic about the impact of artificial intelligence and automation.

"Simply put, jobs that robots can replace are not good jobs in the first place. As humans, we climb up the rungs of drudgery, physically taxing or mind-numbing jobs, to jobs that use what got us to the top of the food chain: our brains." (The Wall Street Journal, "The Robots Are Coming. Welcome Them.")

By eliminating the tedium, AI and automation can free us to pursue careers that give us a greater sense of meaning and well-being: careers that challenge us, instill a sense of progress, provide us with autonomy, and make us feel like we belong, all research-backed attributes of a satisfying job.

At a higher level, AI and automation may also help to eliminate disease and world poverty. Already, AI is driving great advances in medicine and healthcare, with better disease prevention, more accurate diagnosis, and more effective treatments and cures. When it comes to eliminating world poverty, one of the biggest barriers is identifying where help is needed most. By applying AI analysis to data from satellite images, that barrier can be surmounted and aid focused where it is most effective.

I am all for optimism. But as much as I'd like to believe all of the above, this bright outlook on the future rests on some shaky premises.

As explored earlier, a common response to fears and concerns over the impact of artificial intelligence and automation is to point to the past. However, this approach only works if the future behaves like the past. Many things are different now, and these differences give us good reason to believe the future will play out differently.

In the past, technological disruption of one industry didn't necessarily mean the disruption of another. Take car manufacturing as an example: a robot on an automobile assembly line can drive big gains in productivity and efficiency, but that same robot would be useless trying to manufacture anything other than a car. The underlying technology might be adapted, but at best that still only addresses manufacturing.

AI is different because it can be applied to virtually any industry. When you develop AI that can understand language, recognize patterns, and solve problems, the disruption isn't contained. Imagine creating an AI that can diagnose disease, manage medications, handle lawsuits, and write articles like this one. No need to imagine: AI is already doing those exact things.

Another important distinction between now and the past is the speed of technological progress. Technological progress doesn't advance linearly; it advances exponentially. Consider Moore's Law: the number of transistors on an integrated circuit doubles roughly every two years.

In the words of University of Colorado physics professor Albert Allen Bartlett, "The greatest shortcoming of the human race is our inability to understand the exponential function." We drastically underestimate what happens when a value keeps doubling.
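To make the doubling concrete, here is a minimal sketch in Python. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is real; the decade-by-decade printout is just an illustration of the arithmetic:

```python
# A minimal illustration of exponential doubling (Moore's Law).
# Baseline: the Intel 4004 (1971) had roughly 2,300 transistors.
base_year, base_count = 1971, 2_300

for year in range(1971, 2031, 10):
    doublings = (year - base_year) / 2   # one doubling every two years
    count = base_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")
```

The first few lines look unremarkable; by 2021 the count is in the tens of billions. That last jump is exactly the part our intuition misses.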

What do you get when technological progress is accelerating and AI can do jobs across a range of industries? An accelerating pace of job destruction.

"There's no economic law that says 'You will always create enough jobs or the balance will always be even.' It's possible for a technology to dramatically favour one group and to hurt another group, and the net of that might be that you have fewer jobs." (Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy)

In the past, yes, more jobs were created than were destroyed by technology; workers were able to reskill and move laterally into other industries. But the past isn't always an accurate predictor of the future. We can't complacently sit back and assume that everything is going to be OK.

Which brings us to another critical issue…

Let's pretend for a second that the past actually will be a good predictor of the future: jobs will be eliminated, but more jobs will be created to replace them. This raises an absolutely critical question: what kinds of jobs are being created, and what kinds of jobs are being destroyed?

"Low- and high-skilled jobs have so far been less vulnerable to automation. The low-skilled job categories that are considered to have the best prospects over the next decade (including food service, janitorial work, gardening, home health, childcare, and security) are generally physical jobs, and require face-to-face interaction. At some point robots will be able to fulfill these roles, but there's little incentive to roboticize these tasks at the moment, as there's a large supply of humans who are willing to do them for low wages." (Slate, "Will robots steal your job?")

Blue-collar and white-collar jobs alike will be eliminated: basically, anything that requires middle skills (meaning some training, but not much). That leaves low-skill jobs, as described above, and high-skill jobs, which require high levels of training and education.

There will assuredly be an increasing number of jobs related to programming, robotics, engineering, and so on. After all, these skills will be needed to improve and maintain the AI and automation in use around us.

But will the people who lost their middle-skill jobs be able to move into these high-skill roles? Certainly not without significant training and education. What about moving into low-skill jobs? The number of those jobs is unlikely to grow, particularly as the middle class loses jobs and stops spending money on food service, gardening, home health, and the like.

The transition could be very painful. It's no secret that rising unemployment has a negative impact on society: less volunteerism, higher crime, and more drug abuse are all correlated with it. A period of high unemployment, in which tens of millions of people cannot get a job because they simply don't have the necessary skills, will be our reality if we don't adequately prepare.

So how do we prepare? At a minimum, by overhauling our education system and providing the means for people to re-skill.

To move from 90% of the American population farming to just 2% during the first industrial revolution, it took the mass introduction of primary education to equip people with the skills the new economy demanded. The problem is that we're still using an education system geared for the industrial age. The three Rs (reading, writing, arithmetic) were once the key skills for succeeding in the workforce; now they are precisely the skills being overtaken by AI.

For a fascinating look at our current education system and its faults, the talks of Sir Ken Robinson are well worth watching.

In addition to transforming our education system, we must also accept that learning doesn't end with formal schooling. The exponential acceleration of digital transformation means that learning must be a lifelong pursuit, with constant re-skilling to meet an ever-changing world.

Making huge changes to our education system, providing means for people to re-skill, and encouraging lifelong learning can help mitigate the pain of the transition, but is that enough?

When I originally wrote this article a couple of years ago, I firmly believed that 99% of all jobs would be eliminated. Now I'm not so sure. Here was my argument at the time:

[The claim that 99% of all jobs will be eliminated] may seem bold, and yet it's all but certain. All you need are two premises: first, that technological progress will continue, and second, that human intelligence arises from physical processes.

The first premise shouldn't be at all controversial. The only reason to think we would permanently stop progress, of any kind, is some extinction-level event that wipes out humanity, in which case this debate is irrelevant. Barring such a disaster, technological progress will continue on an exponential curve. And it doesn't matter how fast that progress is; all that matters is that it will continue. The incentives for people, companies, and governments are too great to think otherwise.

The second premise will be controversial, but notice that I said human intelligence. I didn't say consciousness or what it means to be human. That human intelligence arises from physical processes seems easy to demonstrate: if we affect the physical processes of the brain, we can observe clear changes in intelligence. Though a gloomy example, it's clear that poking holes in a person's brain changes their intelligence. A well-placed poke in someone's Broca's area and voilà, that person can no longer produce speech.

With these two premises in hand, we can conclude the following: we will build machines with human-level intelligence and beyond. It's inevitable.

We already know that machines outperform humans at physical tasks: they can move faster and more precisely, and lift greater loads. When these machines are also as intelligent as us, there will be almost nothing they can't do, or can't learn to do, quickly. Therefore, 99% of jobs will eventually be eliminated.

But that doesn't mean we'll be redundant. We'll still need leaders (unless we give ourselves over to robot overlords), and our arts, music, and the like may remain solely human pursuits too. As for just about everything else? Machines will do it, and do it better.

But who's going to maintain the machines? The machines. But who's going to improve the machines? The machines.

Assuming they could eventually learn 99% of what we do, surely they'll be capable of maintaining and improving themselves more precisely and efficiently than we ever could.

The above argument is sound, but its conclusion, that 99% of all jobs will be eliminated, over-focuses, I now believe, on our current conception of a job. As I pointed out above, there's no guarantee that the future will play out like the past. After continuing to reflect and learn over the past few years, I now think there's good reason to believe that while 99% of all current jobs might be eliminated, there will still be plenty for humans to do (which is really what we care about, isn't it?).

"The one thing that humans can do that robots can't (at least for a long while) is to decide what it is that humans want to do. This is not a trivial semantic trick; our desires are inspired by our previous inventions, making this a circular question." (Kevin Kelly, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future)

Perhaps another way of looking at the above quote is this: a few years ago I read the book Emotional Intelligence and was shocked to discover just how essential emotions are to decision-making. Not just important, essential. People who had suffered brain damage to the emotional centers of their brains were incapable of making even the smallest decisions: faced with a number of choices, they could think of logical reasons for and against each one but had no emotional push or pull to help them choose.

So while AI and automation may eliminate the need for humans to do any of the doing, we will still need humans to determine what to do. And because everything we do and everything we build sparks new desires and reveals new possibilities, this job will never be eliminated.

If you had predicted in the early 19th century that almost all jobs would be eliminated, and you defined jobs as agricultural work, you would have been right. In the same way, I believe that what we think of as jobs today will almost certainly be eliminated too. But this does not mean there will be no jobs at all; the job will instead shift to determining what we want to do, and then working with our AI and machines to make those desires a reality.

Is this overly optimistic? I don't think so. I still think the transition may be painful, and that it's critical we invest in the education and infrastructure needed to support people as many current jobs are eliminated and we move into this new future.

Follow this link:

The Impact of Artificial Intelligence - Widespread Job Losses

9 Powerful Examples of Artificial Intelligence in Use …

Artificial Intelligence (AI) is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans, in areas such as speech recognition, problem-solving, learning, and planning.

32% of executives say voice recognition is the most widely used AI technology in their business today. (Narrative Science)

Today, artificial intelligence is a popular subject, widely discussed in technology and business circles. Many experts and industry analysts argue that AI or machine learning is the future, but look around and it becomes clear that it is not just the future; it is already the present.

With the advancement of technology, we are already connected to AI in one way or another, whether through Siri, Watson, or Alexa. The technology is still in its early phase, but more and more companies are investing resources in machine learning, indicating robust growth in AI products and apps in the near future.

The following statistics give an idea of that growth:

In 2014, more than $300 million was invested in AI startups, a 300% increase over the previous year. (Bloomberg)

By 2018, 6 billion connected devices will proactively ask for support. (Gartner)

By the end of 2018, customer digital assistants will recognize customers by face and voice across channels and partners. (Gartner)

Artificial intelligence will replace 16% of American jobs by the end of the decade. (Forrester)

15% of Apple iPhone owners use Siri's voice recognition capabilities. (BGR)

Contrary to popular perception, artificial intelligence is not limited to the IT or technology industry; it is used extensively in other areas such as medicine, business, education, law, and manufacturing.

Below, we list nine intelligent AI solutions in use today, showing that machine learning is a present reality, not just the future.

Siri is one of the most popular personal assistants, offered by Apple on the iPhone and iPad. The friendly female voice-activated assistant interacts with users daily, helping us find information, get directions, send messages, make voice calls, open applications, and add events to the calendar.

Siri uses machine learning to get smarter and better at understanding natural-language questions and requests. It is surely one of the most iconic examples of machine learning in consumer gadgets.

Not only smartphones but automobiles, too, are shifting toward artificial intelligence. If you are a car geek, Tesla is not to be missed: it is one of the most advanced automobiles available to date, and it has earned many accolades for features like self-driving, predictive capabilities, and sheer technological innovation.

If you have dreamt of owning the kind of car shown in Hollywood movies, a Tesla belongs in your garage. The car gets smarter day by day through over-the-air updates.

Cogito, co-founded by Dr. Sandy Pentland and Joshua Feast, is one of the market's best examples of applying behavioral science to improve the intelligence of customer support representatives. The company combines machine learning and behavioral science to enhance customer interactions for phone professionals.

Cogito's AI is applied to the millions of voice calls that take place daily, analyzing the human voice and providing real-time guidance to improve agents' behavior.

Netflix needs no introduction. The widely popular content-on-demand service uses predictive technology to offer recommendations based on consumers' reactions, interests, choices, and behavior, examining large volumes of viewing records to suggest movies aligned with your previous likes and reactions.

It becomes more intelligent with each passing year. The one drawback is that small films can go unnoticed while big titles grow and propagate on the platform. But, as noted, it is still improving and learning to be more intelligent.
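Netflix's actual system is proprietary and far more sophisticated, but the core idea, recommending what similar viewers enjoyed, can be sketched in a few lines. Below is a minimal user-based collaborative filter in Python; the users, titles, and ratings are all invented for illustration:

```python
import math

# Invented user -> {movie: rating} data, purely for illustration.
ratings = {
    "alice": {"Drama A": 5, "Drama B": 4, "Action X": 1},
    "bob":   {"Drama A": 4, "Drama B": 5, "Action Y": 2},
    "carol": {"Action X": 5, "Action Y": 4, "Drama A": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[m] * v[m] for m in common)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=3):
    """Score unseen movies by similar users' ratings, weighted by similarity."""
    seen = ratings[user]
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, their_ratings)
        for movie, r in their_ratings.items():
            if movie not in seen:
                scores[movie] = scores.get(movie, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['Action Y'], the only title alice hasn't seen
```

Real recommenders layer matrix factorization, implicit signals, and deep models on top of this, but the underlying question is the same: who looks like you, and what did they enjoy?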

Pandora is one of the most popular and in-demand music services around; it has been called the "DNA of music." A team of expert musicians analyzes each song individually against roughly 400 musical characteristics. This allows the system to recommend tracks that suit a listener's taste but would otherwise never get noticed.


Nest, one of the most famous and successful artificial intelligence startups, was acquired by Google in 2014 for $3.2 billion. The Nest Learning Thermostat uses behavioral algorithms to save energy based on your behavior and schedule.

It employs an intelligent machine learning process that learns the temperatures you like and programs itself in about a week. Moreover, it automatically adjusts to save energy when nobody is home.

In fact, it combines artificial intelligence with Bluetooth Low Energy (BLE), since some components of the solution rely on BLE services.
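As a rough illustration of the idea (not Nest's actual algorithm, which is unpublished), here is a tiny Python sketch of schedule learning: log a week of manual setpoint adjustments, then derive a default temperature for each hour. The logged data is invented:

```python
from collections import defaultdict

# Invented log of (hour_of_day, chosen_temperature_C) adjustments over a week.
adjustments = [(7, 21), (7, 22), (8, 21), (18, 23), (18, 22), (23, 17), (23, 18)]

by_hour = defaultdict(list)
for hour, temp in adjustments:
    by_hour[hour].append(temp)

# "Learned" schedule: the average preferred temperature for each observed hour.
schedule = {hour: sum(temps) / len(temps) for hour, temps in by_hour.items()}
print(schedule)  # {7: 21.5, 8: 21.0, 18: 22.5, 23: 17.5}
```

The real device folds in occupancy sensing, weather, and thermal modeling, but the basic loop, observe, aggregate, program, is the same.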

Boxever is a company that relies heavily on machine learning to enhance the customer experience in the travel industry, delivering "micro-moments": small, well-timed experiences that delight customers.

Boxever significantly improves customer engagement through machine learning and artificial intelligence, helping travelers discover new options and make memorable journeys.

Flying drones are already shipping products to customers' homes, albeit in test mode. They rely on a powerful machine learning system that translates the environment into a 3D model through sensors and video cameras.

Ceiling-mounted sensors and cameras can track a drone's position in a room, while a trajectory-generation algorithm guides it on how and where to move. Over Wi-Fi, the drones can be controlled and put to specific uses: product delivery, video-making, or news reporting.

Amazon's Echo keeps getting smarter and adding new features. Using the Alexa Voice Service, this revolutionary product can search the web for information, schedule appointments, shop, control lights, switches, and thermostats, answer questions, read audiobooks, report traffic and weather, give information on local businesses, provide sports scores and schedules, and more.

Artificial intelligence is gaining popularity at a rapid pace, influencing how we live and interact and improving customer experiences. There is much more to come in the years ahead, with further improvements, development, and governance.

View original post here:

9 Powerful Examples of Artificial Intelligence in Use ...

What is Artificial Intelligence (AI)? … – Techopedia

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for traits such as knowledge, reasoning, problem-solving, perception, learning, planning, and the ability to manipulate and move objects.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information about the world. Artificial intelligence must have access to objects, categories, properties, and the relations between them to implement knowledge engineering. Instilling common sense, reasoning, and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without supervision requires the ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression.

Classification determines the category an object belongs to, while regression works from sets of numerical input and output examples to discover functions that generate suitable outputs from given inputs. The mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science, often referred to as computational learning theory.
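To make the distinction concrete, here is a minimal sketch using scikit-learn; the tiny datasets are invented purely for illustration:

```python
# Classification predicts a category; regression predicts a number.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# Classification: predict a label ("spam"/"ham") from two binary features.
X_cls = [[0, 1], [1, 0], [1, 1], [0, 0]]   # e.g. [has_attachment, has_link]
y_cls = ["spam", "ham", "spam", "ham"]
clf = DecisionTreeClassifier().fit(X_cls, y_cls)
print(clf.predict([[1, 1]]))               # -> a category label ('spam')

# Regression: predict a number (price) from one numeric feature (size).
X_reg = [[50], [80], [120]]                # square meters
y_reg = [100_000, 160_000, 240_000]        # sale prices
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[100]]))                # -> a numeric estimate (~200,000)
```

The same split runs throughout supervised learning: discrete outputs call for classifiers, continuous outputs for regressors.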

Machine perception deals with the ability to use sensory inputs to deduce different aspects of the world, while computer vision is the power to analyze visual inputs, with sub-problems such as facial, object, and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.


Go here to read the rest:

What is Artificial Intelligence (AI)? ... - Techopedia

What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software, and staffing for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and to sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services, and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions. Deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data is used to train an AI program, the potential for human bias is inherent and must be monitored closely.
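A small, deliberately contrived sketch shows how this happens: train a model on historical decisions that were biased, and the model faithfully reproduces the bias. Everything below (features, labels, the scenario itself) is invented to illustrate the point:

```python
# Contrived sketch: a model trained on skewed data reproduces the skew.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group] (group is 0 or 1).
# The historical labels reflect a biased process: group 1 candidates were
# never hired, regardless of experience.
X = [[2, 0], [5, 0], [8, 0], [2, 1], [5, 1], [8, 1]]
y = [1, 1, 1, 0, 0, 0]   # hired? 1 = yes, 0 = no

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in group:
print(model.predict([[5, 0], [5, 1]]))   # -> [1 0]: the bias has been learned
```

Nothing in the algorithm is malicious; it simply optimizes against the data it was given, which is why the data itself has to be scrutinized.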

Some industry experts believe the term "artificial intelligence" is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label "augmented intelligence," which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans who use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the AI systems that exist today to sentient systems, which do not yet exist: reactive machines, limited memory, theory of mind, and self-awareness.

AI is incorporated into a variety of technologies, from automation and machine learning to machine vision, natural language processing, and robotics.

Artificial intelligence has also made its way into a number of application areas, including healthcare, business, education, law, and manufacturing.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes: convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, US federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which are by nature opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

See more here:

What is AI (artificial intelligence)? - Definition from ...