Category Archives: Artificial Intelligence

Exploring Artificial Intelligence Variants and Their Uses – RTInsights

The common thread across all AI technologies is the ability to impart human-like decision-making capabilities into applications and systems.

Artificial intelligence (AI) refers to the simulation of human intelligence in systems programmed to think like humans and mimic their actions. AI encompasses a broad range of technologies, including cognitive computing, deep learning, expert systems, machine learning, natural language processing, and IBM Watson.

The common thread across these areas, and all of AI, for that matter, is the ability to impart human-like decision-making capabilities into applications and systems. Experts predict AI will be rapidly adopted because they believe it will be a disruptive technology across many industries.

There already are many examples of the impact AI has in a variety of fields, including:

AI is a very broad field with many subcategories. Each is aimed at particular application areas and uses specific technologies for those application areas. They include:

Cognitive computing is the use of computerized models to simulate the human thought process in complex situations where the answers may be ambiguous and uncertain. It mimics how humans learn, think, and adapt, enabling a wide range of real-time insights and actions.

For example, cognitive computing is being used to aid human resources with hiring decisions, help doctors make diagnoses and treatment decisions by using the data relating to a patient's case to make suggestions with confidence levels assigned to them, and improve the call center customer experience.

Cognitive computing enables such applications using several technologies, including:

Deep learning is a subset of machine learning in artificial intelligence (AI) that has networks capable of learning unsupervised from unstructured or unlabeled data. Deep learning systems not only think, but keep learning and self-directing as new data flows in.
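As a concrete illustration of learning from unlabeled data, here is a minimal autoencoder sketch, assuming TensorFlow/Keras; the data, layer sizes, and training settings are arbitrary stand-ins, not a production design.

```python
# A minimal sketch of unsupervised deep learning: an autoencoder that learns
# structure from unlabeled data. Illustrative only; real systems are far larger.
import numpy as np
from tensorflow import keras

# Unlabeled data: 1,000 samples with 32 features each (synthetic stand-in).
x = np.random.rand(1000, 32).astype("float32")

# Encoder compresses to 8 dimensions; decoder reconstructs the input.
model = keras.Sequential([
    keras.layers.Input(shape=(32,)),
    keras.layers.Dense(8, activation="relu"),     # learned compressed representation
    keras.layers.Dense(32, activation="sigmoid"),  # reconstruction
])
model.compile(optimizer="adam", loss="mse")

# No labels anywhere: the network learns by reconstructing its own input.
model.fit(x, x, epochs=5, batch_size=64, verbose=0)
```

The key point the sketch makes is that the training target is the input itself, so no human labeling is required.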

Deep learning can play a role in a range of real-time, interactive applications, including speech recognition, visual recognition, and machine translation. It accomplishes this using several techniques and technologies, including:

An expert system uses artificial intelligence techniques and databases of expert knowledge to offer advice or make decisions. In particular, expert systems emulate the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if-then rules rather than through conventional procedural code.

A key attribute of expert systems is that they automate many tasks and work interactively with external information (e.g., a text message, an event log, a verbal question or answer, and more). Application areas for expert systems include use as:

Machine learning is an application of artificial intelligence that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning uses structured data that has a single, direct input for each field used. In general, machine learning makes use of clean data that is easy to work with and has no nuances to it. (In contrast, deep learning uses unstructured data.)
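To make the "learn from structured data without explicit programming" idea concrete, here is a minimal scikit-learn sketch; the columns and labels are invented for illustration.

```python
# A minimal sketch of machine learning on clean, structured data: each row has
# a single, direct value per field, and the model learns from past outcomes.
from sklearn.linear_model import LogisticRegression

X = [[35, 52000, 3],    # [age, income, years_as_customer] - hypothetical fields
     [22, 18000, 0],
     [58, 91000, 12],
     [41, 40000, 5]]
y = [1, 0, 1, 0]        # outcome observed in historical records

model = LogisticRegression()
model.fit(X, y)                         # the system "learns from experience"
print(model.predict([[30, 45000, 2]]))  # and generalizes to a new record
```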

Machine learning is best when there are massive volumes of structured data that would take years for a human operator to process. It can efficiently classify information, predict outcomes based on previous behavior and performance, and group information based on key variables. General application areas include:

Natural language processing (NLP) makes use of linguistics and artificial intelligence to improve interactions between computers and humans. In many applications, NLP is used to help solve a problem, answer a question, or direct a person to an appropriate resource based on the spoken word.
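A deliberately tiny, rule-based sketch of the routing use case described above: map a user's words to an appropriate resource. Production NLP systems use trained models rather than keyword rules, and the resources and keywords below are invented.

```python
# Toy intent routing: pick the resource whose keywords best match the utterance.
RESOURCES = {
    "billing": {"invoice", "bill", "charge", "refund"},
    "support": {"error", "crash", "broken", "help"},
    "sales":   {"price", "buy", "quote", "upgrade"},
}

def route(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Choose the resource with the largest keyword overlap.
    best = max(RESOURCES, key=lambda r: len(RESOURCES[r] & words))
    return best if RESOURCES[best] & words else "general"

print(route("I was billed twice, please refund"))  # -> billing
```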

To achieve such results, NLP-based systems make use of some core technologies and deliver essential capabilities, including:

IBM Watson is an artificial intelligence platform that helps businesses predict and shape future outcomes, automate complex processes, and optimize employee productivity. It is widely known from its first use case as a question-and-answer computer system used in a series of matches against humans on the TV show Jeopardy!

Today, IBM Watson technology delivers a competitive advantage to businesses by using AI to unlock the value of data in new, profound ways, giving every member of a business the power of AI. IBM Watson consists of a suite of pre-built applications and tools to give businesses insights to predict and shape outcomes and infuse intelligence into their workflows. Implementations of IBM Watson include:

See the rest here:
Exploring Artificial Intelligence Variants and Their Uses - RTInsights

Importance and Benefits of Artificial intelligence for Patent Searching – Express Computer

Authored by Amit Aggarwal, co-founder and Director, Effectual Services

Every year, with the growth of new technologies and inventions, there has been astounding growth in the volume of intellectual property literature. Internationally, this data has to be gathered, stored, and classified in multiple formats and languages so that it can be used as and when required. However, data alone does not create a competitive advantage; extracting significant and actionable information from this data deluge represents a major challenge and an opportunity at the same time. Manually analysing patent documents from this pile of data is becoming out of the question, as it demands extensive time and resources. So examiners and patent analysts need all available tools at their disposal to perform this tedious task. One tool with tremendous potential is Artificial Intelligence (AI). At its core, artificial intelligence is a computer that has been programmed to mimic the natural intelligence of human beings by learning, reasoning and making decisions.

From the days of fully constructed Boolean searches, search and analytics have evolved, thanks to AI-based semantic search algorithms, to provide more efficient and accurate search results than ever before. A major advantage of artificial intelligence is its ability to provide repeatable results, as these systems are not hindered by inexperience or fatigue. Artificial intelligence tools have the potential to significantly streamline and automate the patent search process and to increase the quality and speed of results by reducing the amount of time examiners and analysts spend researching. For example, a prior art research project that can run into days and weeks can be performed by an AI tool in a matter of hours. Some of the more advanced existing tools also accept natural language input, permitting a searcher to use natural language terms that the backend artificial intelligence engine can comprehend, recovering comparable documents available in different languages.
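The commercial engines described above rely on trained semantic models; the TF-IDF similarity below is a deliberately simplified stand-in that only shows the shape of such a retrieval pipeline. The patent snippets are invented.

```python
# A simplified sketch of automated prior-art retrieval: rank candidate documents
# by similarity to a query. Real engines use learned semantic embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "A battery electrode comprising a lithium composite oxide",
    "Method for wireless charging of portable devices",
    "Anode material for rechargeable lithium-ion cells",
]
query = ["rechargeable battery with lithium oxide electrode"]

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(corpus)
scores = cosine_similarity(vec.transform(query), doc_matrix)[0]

# Most relevant candidate prior art first.
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```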

The European Patent Office (EPO) uses its intelligent machine translation tool, Patent Translate, to allow for translation of patent publications from 32 languages into the EPO official languages of English, French and German. The US patent office (USPTO) uses artificial intelligence to help examiners review pending patent applications by augmenting classification and search, currently a high priority for the office. The UK patent office (UKIPO) also uses artificial intelligence solutions for prior art searching. IBM is offering Watson, an IP advisor that leverages artificial intelligence for fast patent ingestion, better insights, and analytics. TurboPatent, a company that develops applications to automate and streamline the patent protection process, has introduced two artificial intelligence products for patent lawyers: RoboReview, a cloud-based product that analyses drafted patent applications, and Rapid Response, a product that assists lawyers in writing responses to office actions.

Many key players in the industry, such as PatSeer and Questel, have been using artificial intelligence in combination with machine learning and semantic-based algorithms to provide patent analytics tools and software. With the help of these tools and software we can now:

There are some opposing views relating to the implementation and benefit of artificial intelligence tools and techniques. Some people are concerned about the peculiarities of language used within patent documents and doubt how these tools can deal with the inherent ambiguities, i.e., AI's lack of human reasoning: it is unable to carry out a sanity check of results or inventions, and it lacks the experience that leads to a person's intuitive response to situations. There have been some recorded incidents where AI-based tools failed to perform as intended.

All in all, it's difficult to say whether AI-based tools will be able to completely mimic human beings and perform the same level of analysis, or whether they will only ever be an additional aid to a patent searcher; we will see in the coming times.


See the rest here:
Importance and Benefits of Artificial intelligence for Patent Searching - Express Computer

SUSE infuses portfolio with artificial intelligence and edge technology – SiliconANGLE

Now independent from previous owner Micro Focus International PLC, SUSE is out to make its presence more deeply felt with developers and innovators. Its biggest competitors, Red Hat Inc. and Microsoft Corp., have developed impressively broad, varied portfolios. Can SUSE pull any tricks from its Linux-distro hat interesting enough to compete for the attention of leading-edge, developer-driven IT departments?

Even amid the COVID-19 pandemic, SUSE is busily engaging with its community, according to Melissa Di Donato, chief executive officer of SUSE. "Open source is developing a community that oftentimes does not sit together. And now we're really trying to engage with that community as much as possible to keep innovation alive, to keep collaboration alive," Di Donato said.

SUSE will collaborate and integrate with its developer community in 2020, as well as sharpen its focus on Linux use cases at the edge, such as autonomous driving, Di Donato added.

Di Donato spoke with Stu Miniman, host of theCUBE, SiliconANGLE Media's livestreaming studio, during the SUSECON Digital event. They discussed how to drive engagement in open-source communities and how SUSE is infusing its portfolio with artificial intelligence, edge technology and more. (* Disclosure below.)

SUSE has recently opened up a community to developers with content around Linux, DevOps, containers, Kubernetes, microservices and more. It has also introduced the SUSE Cloud Application Platform Developer Sandbox.

"We wanted to make it easy for these developers to benefit from the best practices that evolved from the cloud-native application delivery that we offer every day to customers and now for free to our developers," Di Donato said. "You can expect SUSE to enter new markets like powering autonomous vehicles with safety-certified Linux and other really innovative technologies."

For example, SUSE is carving out fresh terrain through its partnership with Elektrobit Wireless Communications Oy, a leading provider of embedded software solutions for automotive. The two companies will be working on the use of safety-certified Linux in self-driving cars. Also, next quarter the company will announce a solution for simplifying the integration of AI building blocks into software.

Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of the SUSECON Digital event. (* Disclosure: TheCUBE is a paid media partner for SUSECON Digital. Neither SUSE, the sponsor for theCUBE's event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)


Continued here:
SUSE infuses portfolio with artificial intelligence and edge technology - SiliconANGLE

7 Types Of Artificial Intelligence – Forbes

Artificial Intelligence is probably the most complex and astounding creation of humanity yet. And that is disregarding the fact that the field remains largely unexplored, which means that every amazing AI application that we see today represents merely the tip of the AI iceberg, as it were. While this fact may have been stated and restated numerous times, it is still hard to comprehensively gain perspective on the potential impact of AI in the future. The reason for this is the revolutionary impact that AI is having on society, even at such a relatively early stage in its evolution.

AI's rapid growth and powerful capabilities have made people paranoid about the inevitability and proximity of an AI takeover. Also, the transformation brought about by AI in different industries has made business leaders and the mainstream public think that we are close to achieving the peak of AI research and maxing out AI's potential. However, understanding the types of AI that are possible and the types that exist now will give a clearer picture of existing AI capabilities and the long road ahead for AI research.

Since AI research purports to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in terms of versatility and performance, AI can be classified under one of multiple types. Under such a system, an AI that can perform more human-like functions with equivalent levels of proficiency will be considered a more evolved type of AI, while an AI that has limited functionality and performance would be considered a simpler and less evolved type.

Based on this criterion, there are two ways in which AI is generally classified. The first classifies AI and AI-enabled machines based on their likeness to the human mind, and their ability to think and perhaps even feel like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.

These are the oldest forms of AI systems and have extremely limited capability. They emulate the human mind's ability to respond to different kinds of stimuli. These machines do not have memory-based functionality, meaning they cannot use previously gained experiences to inform their present actions, i.e., they do not have the ability to learn. They can only be used to respond automatically to a limited set or combination of inputs, and cannot rely on memory to improve their operations. A popular example of a reactive AI machine is IBM's Deep Blue, a machine that beat chess Grandmaster Garry Kasparov in 1997.

Limited memory machines are machines that, in addition to having the capabilities of purely reactive machines, are also capable of learning from historical data to make decisions. Nearly all existing applications that we know of come under this category of AI. All present-day AI systems, such as those using deep learning, are trained by large volumes of training data that they store in their memory to form a reference model for solving future problems. For instance, an image recognition AI is trained using thousands of pictures and their labels to teach it to name objects it scans. When an image is scanned by such an AI, it uses the training images as references to understand the contents of the image presented to it, and based on its learning experience it labels new images with increasing accuracy.
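A compact sketch of the "limited memory" pattern just described: train on labeled historical images, then classify new, unseen ones. It uses scikit-learn's small built-in digits dataset purely for illustration.

```python
# Train on labeled images stored as a reference model, then label unseen images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                      # 8x8 images with labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                   # learn from historical labeled data

# The trained model labels images it has never seen before.
print("accuracy on new images:", clf.score(X_test, y_test))
```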

Almost all present-day AI applications, from chatbots and virtual assistants to self-driving vehicles, are driven by limited memory AI.

While the previous two types of AI have been and are found in abundance, the next two types exist, for now, either as a concept or a work in progress. Theory of mind AI is the next level of AI systems that researchers are currently engaged in innovating. A theory of mind level AI will be able to better understand the entities it is interacting with by discerning their needs, emotions, beliefs, and thought processes. While artificial emotional intelligence is already a budding industry and an area of interest for leading AI researchers, achieving the theory of mind level of AI will require development in other branches of AI as well. This is because to truly understand human needs, AI machines will have to perceive humans as individuals whose minds can be shaped by multiple factors, essentially understanding humans.

This is the final stage of AI development, which currently exists only hypothetically. Self-aware AI is, as the name suggests, an AI that has evolved to be so akin to the human brain that it has developed self-awareness. Creating this type of AI, which is decades, if not centuries, away from materializing, is and will always be the ultimate objective of all AI research. This type of AI will not only be able to understand and evoke emotions in those it interacts with, but also have emotions, needs, beliefs, and potentially desires of its own. And this is the type of AI that doomsayers of the technology are wary of. Although the development of self-aware AI can potentially boost our progress as a civilization by leaps and bounds, it can also potentially lead to catastrophe. This is because once self-aware, the AI would be capable of having ideas like self-preservation, which may directly or indirectly spell the end for humanity, as such an entity could easily outmaneuver the intellect of any human being and plot elaborate schemes to take over humanity.

The alternate system of classification that is more generally used in tech parlance is the classification of the technology into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).

This type of artificial intelligence represents all the existing AI, including even the most complicated and capable AI that has ever been created to date. Artificial narrow intelligence refers to AI systems that can only perform a specific task autonomously using human-like capabilities. These machines can do nothing more than what they are programmed to do, and thus have a very limited or narrow range of competencies. According to the aforementioned system of classification, these systems correspond to all the reactive and limited memory AI. Even the most complex AI that uses machine learning and deep learning to teach itself falls under ANI.

Artificial General Intelligence is the ability of an AI agent to learn, perceive, understand, and function completely like a human being. These systems will be able to independently build multiple competencies and form connections and generalizations across domains, massively cutting down on time needed for training. This will make AI systems just as capable as humans by replicating our multi-functional capabilities.

The development of Artificial Superintelligence will probably mark the pinnacle of AI research, as ASI will become by far the most capable form of intelligence on earth. ASI, in addition to replicating the multi-faceted intelligence of human beings, will be exceedingly better at everything it does because of overwhelmingly greater memory, faster data processing and analysis, and decision-making capabilities. The development of AGI and ASI will lead to a scenario most popularly referred to as the singularity. And while the potential of having such powerful machines at our disposal seems appealing, these machines may also threaten our existence or, at the very least, our way of life.

At this point, it is hard to picture the state of our world when more advanced types of AI come into being. However, it is clear that there is a long way to get there as the current state of AI development compared to where it is projected to go is still in its rudimentary stage. For those holding a negative outlook for the future of AI, this means that now is a little too soon to be worrying about the singularity, and there's still time to ensure AI safety. And for those who are optimistic about the future of AI, the fact that we've merely scratched the surface of AI development makes the future even more exciting.

More:
7 Types Of Artificial Intelligence - Forbes

Artificial intelligence is struggling to cope with how the world has changed – ZDNet

From our attitude towards work to our grasp of what two metres looks like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly-designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has only once more highlighted what AI experts have argued for decades: that algorithms should be more explainable.


For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would be like based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently, he says.

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey,the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool in their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson AI Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, which comes up first thing on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly-founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with an idea to unlock the potential of AI for "business and society". The lab's focus is not on "narrow AI", which is the technology in its limited format that most organizations know today; instead the researchers should be striving for "broad AI". Broad AI can learn efficiently and flexibly, across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab is that the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool but struggle to use it. AI exists and it is effective, but it is still not designed for business.


Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics and Boston Scientific to banking company Wells Fargo, companies in various fields and locations can explain their needs and the challenges they encounter to the academics working in the Watson AI Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people at MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or to have an impact in business?"

Explainability of AI is only one area of focus. But there is also AutoAI, for example, which consists of using AI to build AI models, and would let business leaders engage with the technology without having to hire expensive, highly-skilled engineers and software developers. Then, there is also the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."


Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed Clevrer, in which an algorithm can recognize objects and reason about their behaviors in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, whether you are operating in electronics, med-tech or banking. Hearing similar feedback from all areas of business has only emboldened the Lab's researchers to double down on the problems that matter.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effectively tackle the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and can even border on blue-sky research; they are balanced, on the other hand, with other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic solution in accelerating the adoption of innovation and making sure AI delivers on its promise.

See the original post:
Artificial intelligence is struggling to cope with how the world has changed - ZDNet

A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Among the myriad thematic exchange traded funds investors have to consider, artificial intelligence products are numerous and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF (THNQ) as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool, as its stablemate, the ROBO Global Robotics and Automation Index ETF (ROBO), was the original and remains one of the largest robotics ETFs.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for A.I., even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing AV technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between machine learning and deep learning is that, while deep learning can automatically discover the features to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually with more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.
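The distinction drawn in that paragraph can be made concrete in code. Below is a compact, illustrative contrast using scikit-learn's small digits dataset: one model is fed hand-crafted features a human chose, the other consumes raw pixels and discovers features itself. Neither is how any AV vendor actually builds systems; it is a sketch of the two workflows.

```python
# Classical ML: a human engineers the features. DL-style: raw input, learned features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

digits = load_digits()
raw, labels = digits.data, digits.target    # raw 8x8 pixel intensities

# Machine learning route: hand-crafted features chosen by a human.
manual = np.c_[raw.mean(axis=1), raw.std(axis=1), (raw > 8).sum(axis=1)]
ml_model = LogisticRegression(max_iter=1000).fit(manual, labels)

# Deep-learning-style route: feed raw pixels, let the network find features.
dl_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                         random_state=0).fit(raw, labels)
```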

Like its stablemate ROBO, THNQ offers wide reach with exposure to 11 sub-groups. Those include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

"The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world," according to ROBO Global. "To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies."

While THNQ is a new ETF, investors may do well not to focus on that, but rather on the fact that the AI boom is in its nascent stages.

"Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development," notes THNQ's issuer. "This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us."

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.


Read the original:
A New Way To Think About Artificial Intelligence With This ETF - MarketWatch

How Is Artificial Intelligence Combatting COVID-19? – Gigabit Magazine – Technology News, Magazine and Website

Chris Gannatti, head of research at ETF specialist WisdomTree, explains how artificial intelligence is being used to tackle Covid-19.

Artificial Intelligence (AI) is proliferating more widely than ever before, having the potential to influence many aspects of daily life. Crisis periods, like we have seen with the Covid-19 pandemic, are often catalysts for the faster deployment of new innovations and technologies. The power of AI is being harnessed to tackle the Covid-19 pandemic, whether that be to better understand the rate of infection or to trace and quickly identify infections. While AI has been associated with the future and ideas such as the development of driverless cars, its legacy could be how it has impacted the world during this crisis. It is likely that AI is already playing a major part in the early stages of vaccine development; the uses of artificial intelligence are seemingly endless. AI was already growing quickly and being deployed in ever more areas of our data-driven world.

Covid-19 has accelerated some of these deployments, bringing greater comfort and familiarity to the technology. To really understand how AI is making a difference, it is worth looking at some examples which illustrate the breadth of activities being carried out by AI during the pandemic.

Rizwan Malik, the lead radiologist at Royal Bolton Hospital, run by the UK's National Health Service (NHS), designed a conservative clinical trial to help obtain initial readings of X-rays for patients faster; waiting for specialists could sometimes take up to six hours. He identified a promising AI-based chest X-ray system and then set up a test to run over six months. For all chest X-rays handled by his trainees, the system would offer a second opinion. He would then check whether the system's conclusion matched his own, and if it did, he would phase the system in as a permanent check on his trainees. As Covid-19 hit, the system became an important way to identify certain characteristics unique to Covid-19 that were visible on chest X-rays. While not perfect, the system did represent an interesting case study in the use of computer vision in medical imagery.

A great example of the collaborative efforts that can be inspired during times of crisis involved three organisations coming together to release the Covid-19 Open Research Dataset. This includes more than 24,000 research papers from peer-reviewed journals and other sources.


The National Library of Medicine at the National Institutes of Health provided access to existing scientific publications; Microsoft used its literature curation algorithms to find relevant articles; and research non-profit the Allen Institute for Artificial Intelligence converted them from web pages and PDFs into a structured format that can be processed by algorithms.

Many major cities affected by Covid-19 were faced with a very real problem: getting the right care to the people who needed it without allowing hospitals to become overrun. Helping people to self-triage, therefore staying away from the hospital unless absolutely necessary, was extremely important. Providence St. Joseph Health System in Seattle built an online screening and triage tool that could rapidly differentiate between those potentially really sick with Covid-19 and those with less life-threatening ailments. In its first week of operation, it served 40,000 patients.

The Covid-19 pandemic has pushed the unemployment rate in the US to 14.7%. This has led to unprecedented numbers of people filing unemployment claims and asking questions of different state agencies. Texas, which has received millions of these claims since early March, is using artificial intelligence-driven chatbots to answer questions from unemployed residents in need of benefits.

Other states, like Georgia and South Carolina, have reported similar activity. To give a sense of scale, the system that has been deployed in Texas can handle 20,000 concurrent users. Think of how much staff would be required to deal with 20,000 inquiries at the same time. These are but four of many, many ways in which AI has been deployed to help in the time of the Covid-19 pandemic. While we continue to hope for cures and vaccinations, which AI will help in developing, we expect to see more innovative uses of AI that will benefit society over the long-term.


Visit link:
How Is Artificial Intelligence Combatting COVID-19? - Gigabit Magazine - Technology News, Magazine and Website

The Expanding Role Of Artificial Intelligence In Tax – Forbes

Watch Benjamin Alarie, co-founder and CEO of Blue J Legal, discuss the expanding role of artificial intelligence in tax with contributing editor at Tax Notes Federal, Benjamin Willis.

Here are some highlights

On machine learning and tax law

Benjamin Alarie: When we talk about machine learning and artificial intelligence of the law, what we're doing is talking about collecting the raw materials, the rulings, the cases, the legislation, the regs, all that information, and bringing it to bear on a particular problem. We're synthesizing all of those materials to make a prediction about how a new situation would likely be decided by the courts.

. . . Law should be predictable. We have lots of data out there in the form of rulings, in the form of judgments that we can collect as good examples of how the courts have decided these matters in the past. And we can reverse engineer, using machine learning methods, how the courts are mapping the facts of different situations into outcomes. And we can do this in a really elegant way that leads to high quality prediction. So predictions of 90 percent or better accuracy about how the courts are going to make those decisions in the future, which is incredibly valuable to taxpayers, to tax administration, and to anyone who's looking for certainty, predictability and fairness in the application of law.

On the availability of artificial intelligence

Benjamin Alarie: We're doing a lot to make this technology available throughout industry. Law firms are increasingly seeing this as one of the tools that they need to have in order to practice tax as effectively as possible. Academic programs see using this kind of technology [as] a huge boost for their graduates who are going to go into practice being familiar already with the leading tools for how to leverage machine learning and artificial intelligence. Accounting firms are also quite interested in this approach too because it has huge implications in terms of speeding up research [and] doing quality assurance . . .

On the moldability of results

Benjamin Alarie: You can play with different dimensions. You can swap out that assumption of fact, swap in a different assumption of fact, and see how that's likely to influence the results. So then you can do scenario testing to really get comfortable with how much risk there is in a particular situation as the one providing a new opinion or providing advice to a client. That's really reassuring. You might say, "Okay, I need to get this to 80 percent probability. I'm not willing to bite off more than that . . ." Or you might be like, "Well, I have a really risk-loving client. I just need to get to 51 percent . . ." [Machine learning] allows you to really calibrate the amount of risk that you're taking on, depending on the risk appetite of the client and your comfort as the practitioner.
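A rough sketch of what the probability outputs and scenario testing Alarie describes could look like in code, assuming a generic scikit-learn classifier. The case facts, data, and model are invented for illustration; Blue J's actual system is proprietary and far more sophisticated.

```python
# Predict an outcome probability from case facts, then swap one assumed fact
# to see how the risk shifts (scenario testing).
from sklearn.ensemble import RandomForestClassifier

# Each row encodes facts of a past case: [written_contract, set_own_hours,
# supplies_own_tools]; label 1 = worker found to be an independent contractor.
X = [[1, 1, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(random_state=0).fit(X, y)

new_case = [1, 0, 0]
print("P(contractor):", model.predict_proba([new_case])[0][1])

# Scenario testing: swap one assumption of fact and re-run the prediction.
new_case[1] = 1   # now assume the worker sets their own hours
print("P(contractor):", model.predict_proba([new_case])[0][1])
```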

Benjamin Willis, contributing editor with Tax Notes Federal, and Benjamin Alarie, co-founder and CEO of Blue J Legal, discuss the expanding role of artificial intelligence and machine learning in the government, academia and tax practice.

On artificial intelligence and the courts

Benjamin Alarie: [Machine learning] is a great tool to encourage settlement between the parties, and so I think we're increasingly seeing that phenomenon where the party with the really strong position is using this to support their argument. They say, "Don't take our word for it. We ran it on this independent system . . . Here's the report from the system saying that we have a 95 percent or better chance of winning this case. Are you still sure you don't want to enter into terms of settlement?" That's often very convincing to the other side, who then run their analysis through the same system and say, "Okay . . . It's not nearly as strong as we thought it might be. Maybe we should talk about settling this." And that saves judges from having to contend with cases that really aren't the best use of their time, because it's pretty clear how those cases should get decided.

On artificial intelligence and low-income taxpayers

Benjamin Alarie: There are early adopters at these low-income taxpayer clinics across the country who are interested in using technology to allow them to give faster advice to low-income taxpayers . . . Folks increasingly understand how to use the software and how it can materially assist their clientele, and so the goal is to learn from those early adopters and to figure out how to position the software to help as much as possible in other clinics where maybe we don't have early adopters present, but which could still genuinely benefit from this.

Go here to read the rest:
The Expanding Role Of Artificial Intelligence In Tax - Forbes

Artificial Intelligence (AI) Is Nothing Without Humans – E3zine.com

AI is not just a fad. It's a technology that's set to last. However, only companies who know how to leverage its full potential will succeed.

Leveraging AI's full potential doesn't mean developing a pilot project in a vacuum with a handful of experts, which, ironically, is often called an accelerator project. Companies need a tangible idea as to how artificial intelligence can benefit them in their day-to-day operations.

For this to happen, one has to understand how these new AI colleagues work and what they need to successfully do their jobs.

An example of why this understanding is so crucial is lead management in sales. Instead of sales teams wasting their time on someone who will never buy anything, AI is supposed to determine which leads are promising and at what moment salespeople can make their move to close the contract. CEOs are usually very taken with that idea; sales staff, not so much.

Experienced salespeople know that it's not that easy. It's not only hard facts like name, address, industry or phone number that are important. Human salespeople consider many different factors, such as relationships, past conversations, customer satisfaction, experience with products, the current market situation, and more.

Make no mistake: if the data are available in a set framework, AI will also leverage them, searching for patterns, calculating behavior scores and match scores, and finally indicating whether the lead is promising or not. AI systems can make sense of the data, but they will never see more than the data.
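A hedged sketch of such lead scoring: the model can only see the fields present in its training frame, exactly as the paragraph above warns. The field names and data are invented for illustration.

```python
# Score a lead from structured CRM fields; soft factors a salesperson weighs
# (relationships, satisfaction, market mood) are invisible to the model.
from sklearn.ensemble import GradientBoostingClassifier

# Historical leads: [emails_opened, site_visits, past_purchases, days_since_contact]
X = [[12, 30, 2, 3], [1, 2, 0, 90], [8, 15, 1, 10],
     [0, 1, 0, 200], [20, 45, 4, 1], [2, 3, 0, 60]]
y = [1, 0, 1, 0, 1, 0]          # 1 = lead eventually converted

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The "behavior score": a probability that a new lead converts.
lead = [[10, 20, 1, 5]]
print("behavior score:", model.predict_proba(lead)[0][1])
```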

The real challenge with AI is therefore the data. Without data, artificial intelligence solutions cannot learn. Data have to be collected and clearly structured to be usable in sales and service.

Without enough data to draw conclusions from, all decisions that AI makes will be unreliable at best. Meaning that in our example, there's no AI without CRM. That's not really new, I know. However, CRM systems now have to be interconnected with numerous touchpoints (personal conversations, ERP, online shops, customer portals, websites and others) to aggregate reliable customer data. Best case: all of this happens automatically. Entrusting a human with this task makes collecting data laborious, inconsistent and faulty.

To profit from AI, companies need to understand where it makes sense to implement it and how they should train it. There's one problem, however: the thought patterns of AI are often so complex, and take so many different pieces of information and patterns into consideration, that one can't understand why and how it made a decision.

In conclusion, AI is not a universal remedy. It's based on things we already know. Its recommendations and decisions are more error-prone than many would like them to be. Right now, AI has more of a supporting role than an autonomous one. AI systems can help us in our daily routine and take care of monotonous tasks, while leaving the important decisions to others.

However, we shouldn't underestimate AI either. In the future, it will gain importance as it grows more autonomous each day. Artificial intelligence often reaches its limits when interacting with humans. When interacting with other AI solutions in clearly defined frameworks, it can often already make the right decisions today.

Read the rest here:
Artificial Intelligence (AI) Is Nothing Without Humans - E3zine.com

Five Important Subsets of Artificial Intelligence – Analytics Insight

As for a simple definition, Artificial Intelligence is the ability of a machine or computer device to imitate human intelligence (cognitive processes), learn from experience, adapt to the most recent data, and perform human-like activities.

Artificial Intelligence performs tasks intelligently, yielding enormous accuracy, flexibility, and productivity for the entire system. Tech chiefs are looking for ways to implement artificial intelligence technologies into their organizations to reduce friction and add value; for example, AI is firmly established in the banking and media industries. There is a wide array of techniques in the space of artificial intelligence, such as linguistics, bias, vision, robotics, planning, natural language processing, decision science, etc. Let us learn about some of the major subfields of AI in depth.

ML is maybe the most relevant subset of AI to the average enterprise today. As explained in the "Executive's guide to real-world AI", a recent research report conducted by Harvard Business Review Analytic Services, ML is a mature technology that has been around for quite a long time.

ML is a part of AI that enables computers to self-learn from data and apply that learning without human intervention. When confronting a situation where a solution is hidden in a huge data set, ML is a go-to. ML excels at processing that data, extracting patterns from it in a fraction of the time a human would take and delivering otherwise out-of-reach knowledge, says Ingo Mierswa, founder and president of the data science platform RapidMiner. ML powers risk analysis, fraud detection, and portfolio management in financial services; GPS-based predictions in travel; and targeted marketing campaigns, to list a few examples.

Combining cognitive science and machines to perform tasks, the neural network is a part of artificial intelligence that draws on neuroscience (the branch of biology concerned with the nerves and nervous system of the human brain). A neural network imitates the human brain, which contains a vast number of neurons, by encoding neuron-like units into a system or machine.

Neural networks and machine learning together tackle numerous intricate tasks with ease, and many of these tasks can be automated. NLTK is the go-to library for NLP: master all of its modules and you'll be a professional text analyzer in no time. Other Python libraries include pandas, NumPy, TextBlob, matplotlib, and wordcloud.
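A few of NLTK's basic moves, as mentioned above, in a minimal sketch; it assumes the standard corpora have been downloaded (a one-time step), and the sample sentence is arbitrary.

```python
# Tokenize text, tag parts of speech, and strip stopwords with NLTK.
import nltk
nltk.download("punkt", quiet=True)                       # one-time downloads
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("stopwords", quiet=True)
from nltk.corpus import stopwords

text = "Neural networks and machine learning tackle intricate tasks effortlessly."
tokens = nltk.word_tokenize(text)          # split into word tokens
tagged = nltk.pos_tag(tokens)              # part-of-speech tags
content = [w for w in tokens
           if w.lower() not in stopwords.words("english")]

print(tagged[:4])
print(content)
```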

An explainer article by AI software company Pathmind offers a useful analogy: think of a set of Russian dolls nested within one another. Deep learning is a subset of machine learning, and machine learning is a subset of AI, which is an umbrella term for any computer program that does something smart.

Deep learning uses so-called neural networks, which learn by processing the labeled data provided during training, using this answer key to learn which attributes of the input are needed to construct the correct output, according to one explanation given by DeepAI. Once a sufficient number of examples have been processed, the neural network can begin to process new, unseen inputs and successfully return accurate results.

Deep learning powers product and content recommendations for Amazon and Netflix. It works in the background of Google's voice- and image-recognition algorithms. Its ability to break down large amounts of high-dimensional data makes deep learning particularly well suited for supercharging preventive maintenance systems.
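To ground the "labeled training data as answer key" description above, here is a minimal supervised sketch, assuming TensorFlow/Keras; the data is synthetic and the architecture is a toy, not a recommendation engine.

```python
# Learn from labeled examples, then classify a new, unseen input.
import numpy as np
from tensorflow import keras

# Labeled training data: flattened 28x28 "images" with 10 class labels.
x_train = np.random.rand(500, 784).astype("float32")
y_train = np.random.randint(0, 10, size=500)

model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, verbose=0)   # learn from the "answer key"

# Once trained, the network labels inputs it has never seen.
new_input = np.random.rand(1, 784).astype("float32")
print(model.predict(new_input).argmax())
```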

Robotics has emerged as a very hot field of artificial intelligence: a fascinating area of research and development that mostly focuses on designing and developing robots. Robotics is an interdisciplinary field of science and engineering that draws on mechanical engineering, electrical engineering, computer science, and many other disciplines. It covers the design, production, operation, and use of robots, and deals with the computer systems for their control, intelligent behavior, and information processing.

Robots are regularly deployed to carry out tasks that may be difficult for people to perform consistently. Major robotics applications include assembly lines for automobile manufacturing and moving large objects in space for NASA. Artificial intelligence scientists are also creating robots that use machine learning to interact at a social level.

Have you ever tried learning another language by labeling the items in your home with their names in the new language? It seems to be an effective vocabulary builder, since you see the words again and again. The same is true for computers powered with computer vision. They learn by labeling or classifying the various objects they come across and interpreting their meaning, though at a much faster pace than people (like those robots in science fiction movies).

The OpenCV library enables image processing by applying mathematical operations to images. Remember that elective subject from engineering days called fuzzy logic? That approach is used in image processing, and it makes it much simpler for computer vision specialists to fuzzify, or blur, readings that can't be placed in a crisp yes/no or true/false category. OpenTLD is used for video tracking, which is the process of locating a moving object (or objects) using a camera's video stream.
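A short OpenCV sketch of the operations just described: a Gaussian blur (a simple "fuzzifying" step) followed by edge detection. The file path is a placeholder you would replace with a real image.

```python
# Blur an image to soften crisp readings, then extract edges with Canny.
import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)  # assumes the file exists
blurred = cv2.GaussianBlur(img, (5, 5), 0)            # Gaussian "fuzzify" step
edges = cv2.Canny(blurred, 50, 150)                   # detect edges

cv2.imwrite("edges.jpg", edges)
```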


See the original post:
Five Important Subsets of Artificial Intelligence - Analytics Insight