AI Hype: Why the Reality Often Falls Short of Expectations

In this special guest feature, AJ Abdallat, CEO of Beyond Limits, takes a look at the tech industry's hype cycle, in particular how it often falls short of expectations where AI is concerned. Beyond Limits is a full-stack Artificial Intelligence engineering company creating advanced software solutions that go beyond conventional AI. Founded in 2014, Beyond Limits is transforming proven technologies from Caltech and NASA's Jet Propulsion Laboratory into advanced AI solutions, hardened to industrial strength, and put to work for forward-looking companies on Earth.

Despite what we see in science fiction, artificial intelligence (AI) is not likely to produce sentient machines that will take over Earth, subordinate human beings, or change the hierarchy of the planet's food chain. Nor will it be humanity's savior.

AI essentially equates to the ability of machines to perform tasks that usually require human reasoning. The concept of artificial intelligence has existed for more than 60 years, and modern AI systems are revolutionizing how people live and work. However, conventional AI solutions do not use the technology to its fullest potential.

Decisions are usually made inside black boxes

Conventional AI solutions operate inside black boxes, unable to explain or substantiate their reasoning or decisions. These solutions depend on intricate neural networks that are too complex for people to understand. Companies utilizing conventional AI approaches are in somewhat of a quandary: they don't know how or why the system produces its conclusions, and most AI firms refuse to divulge, or are unable to divulge, the inner workings of their technology.

However, these smart systems aren't generally all that smart. They can process very large, complex data sets, but cannot employ human-like reasoning or problem-solving. They see data as a series of numbers, label those numbers based on how they were trained, and depend on recognition to solve problems. When presented with data, a conventional AI system asks itself if it has seen the information before and, if so, how it labeled that data last time. It cannot diagnose or solve problems in real time unless it has the ability to communicate with human operators.
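To make that pattern-matching behavior concrete, here is a minimal sketch of what a "black box" answer looks like in practice. The model choice, sensor features, and labels are hypothetical illustrations, not any vendor's actual system:

```python
# Minimal sketch of "black box" behavior: a trained classifier maps
# inputs resembling ones it has seen before to labels, but offers no rationale.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sensor readings (temperature, vibration) and labels
# (0 = normal, 1 = fault) used to train the model.
X_train = [[70, 0.1], [72, 0.2], [95, 0.9], [98, 0.8]]
y_train = [0, 0, 1, 1]

model = RandomForestClassifier().fit(X_train, y_train)

# The system effectively answers "what label did data like this get last time?"
# It returns a number, not an explanation of *why*.
print(model.predict([[96, 0.85]]))  # e.g. [1] -- a fault, with no reasoning attached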

Scenarios do exist where AI users may not be as concerned about collecting information around reasoning because the consequences of a negative outcome are minimal, such as algorithms that recommend items based on consumers' purchasing or viewing history. However, trusting the decisions of black-box AI is extremely problematic in high-value, high-risk industries such as finance, healthcare, and energy, where machines may be tasked to make recommendations on which millions of dollars, or the safety and well-being of humans, hang in the balance.

Imperfect edge conditions complicate matters

Enterprises are increasingly deploying AI systems to monitor IoT devices in far-flung environments where humans are not always present and internet connectivity is spotty at best; think highway cams, drones that survey farmlands, or an oil rig in the middle of the ocean. One-quarter of organizations with established IoT strategies are also investing in AI. In these settings, a black-box system that must reach a human operator before it can diagnose or act on a problem is of limited use; the AI has to be able to reason about the situation, and justify its conclusions, on its own.

Cognitive AI solves these problems

Cognitive AI solutions solve these issues by employing human-like problem-solving and reasoning skills that let users see inside the black box. They do not replace the complex neural networks applied by conventional solutions, but instead interpret their outputs and use natural-language declarations to provide an annotated narrative that humans can understand. Cognitive AI systems understand how they solve problems and are also aware of the context that makes the information relevant. So instead of being asked to implicitly trust the conclusions of a machine, with cognitive AI, human users can actually obtain audit trails that substantiate the system's recommendations with evidence, risk assessment, certainty, and uncertainty.
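The article does not describe a specific implementation, but one way to picture such an audit trail is as a structured record attached to each recommendation. Everything below (class name, fields, and values) is an illustrative assumption, not Beyond Limits' actual design:

```python
# Illustrative sketch only: one way to attach an audit trail to a
# model's recommendation, with evidence, risk, and confidence fields.
from dataclasses import dataclass, field

@dataclass
class AuditTrail:
    recommendation: str          # the system's conclusion
    evidence: list[str]          # facts that support the conclusion
    risk_assessment: str         # consequence if the conclusion is wrong
    certainty: float             # confidence in the conclusion (0..1)
    uncertainty: list[str] = field(default_factory=list)  # known unknowns

trail = AuditTrail(
    recommendation="Reduce pump speed to 80%",
    evidence=["bearing temperature rising 4C per hour", "vibration above baseline"],
    risk_assessment="Continued full-speed operation risks bearing seizure",
    certainty=0.87,
    uncertainty=["vibration sensor last calibrated 14 months ago"],
)
print(trail.recommendation, f"(certainty {trail.certainty:.0%})")
```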

The level of explainability generated by an AI system is based on its use case. In general, the higher the stakes, the more explainability is needed. A robust cognitive AI system should have the autonomy to adjust the depth of its explanations based on who is viewing the information and in what context.

Audit trails in the form of decision trees are one of the most helpful methods for illustrating the cognitive AI reasoning process behind recommendations. The top of a tree represents the minimum amount of information explaining a decision process, while the bottom denotes explanations that go into the greatest amount of detail. For this reason, explainability is classified into two categories: top-down and bottom-up.

The top-down approach is for end users who don't require intricate details, only a positive or negative point of reference about whether or not an answer is correct. For example, a manager may think that a panel on a solar farm isn't working properly and simply needs to know the status of the solar panel; a cognitive AI system could generate a prediction around how much energy the panel will generate in its current condition.

On the other hand, a bottom-up approach would be more useful for engineers dispatched to fix the problem. These users could query the cognitive AI system at any point along its decision tree and obtain detailed information and suggestions to remedy the problem.
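As a rough sketch of both ideas, the decision-tree audit trail and the audience-adjusted explanation depth, consider the following; the tree contents and solar-panel readings are invented for illustration:

```python
# Sketch of a decision-tree audit trail whose explanation depth varies
# by audience, following the solar-panel example above. The tree
# structure and node text are hypothetical.
from dataclasses import dataclass, field

@dataclass
class DecisionNode:
    statement: str
    children: list["DecisionNode"] = field(default_factory=list)

    def explain(self, max_depth: int, depth: int = 0) -> None:
        """Print the reasoning down to max_depth levels of detail."""
        if depth > max_depth:
            return
        print("  " * depth + self.statement)
        for child in self.children:
            child.explain(max_depth, depth + 1)

trail = DecisionNode(
    "Panel 17 predicted to produce 62% of rated output",
    [
        DecisionNode("Output has declined three weeks in a row", [
            DecisionNode("Week-over-week readings: 81%, 74%, 66%"),
        ]),
        DecisionNode("Decline pattern matches soiling, not inverter fault", [
            DecisionNode("Inverter telemetry within normal range"),
            DecisionNode("No decline on neighboring panels sharing the inverter"),
        ]),
    ],
)

trail.explain(max_depth=0)  # top-down: the manager's one-line status
trail.explain(max_depth=2)  # bottom-up: full detail for the engineer
```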

If AI is ultimately to live up to its promise of transforming society, human users must be comfortable with the idea of trusting machine-generated decisions. Cognitive, explainable AI makes this possible. It breaks down organizational silos and bridges the gap between IT personnel and an organization's non-technical executive decision-makers, enabling optimal effectiveness in governance, compliance, risk management, and quality assurance, while improving accountability.
