Category Archives: Artificial Intelligence

Killer AI could hit within our lifetime if they develop Artificial General Intelligence – Daily Star

Many people, influenced by films such as The Terminator or The Matrix, think true artificial intelligence (the technology to give robots or computers their own independent personality) could be dangerous.

AI developers are already discussing how to place limits on future "thinking machines" so they will always act in humanity's interest.

AI consultant Matthew Kershaw told the Daily Star that it's even possible the technology will reach worrying heights within the lifetime of the younger among us.

"It might just be in our lifetime," he says, adding: "If you're young enough!"

His comments come after Professor Stephen Hawking, seen by many as the greatest scientific genius of the modern era, warned: "The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race."

Meanwhile, SpaceX entrepreneur Elon Musk agrees with him, saying: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful."

But Matthew says that true General AI will require computers powerful enough to hold a comprehensive model of the world, "which isn't going to be anytime soon".

"Given that we don't really understand what it means to be conscious ourselves, I think it's unlikely that General AI will be a reality anytime soon," he adds. "We just don't know what it actually means to be conscious."

Matthew says that while existing limited AI has enabled computers to do incredible things, they still don't learn as well as children do: "A human child doesn't need to see more than five cars to learn how to recognise a car. A computer would need to see thousands."

The kind of self-aware artificial intelligences we see in the likes of Star Wars and Westworld are nothing new in science fiction. As far back as 1920, Karel Čapek's play R.U.R. predicted a robot uprising.

But what scientists call artificial general intelligence remains for now a scientific dream.

Robonaut, the robotic astronaut NASA installed on the International Space Station in 2011, broke down and had to be returned to Earth after astronauts struggled to fit it with a pair of legs.

In 1950, computing pioneer Alan Turing devised a test to determine if an Artificial Intelligence could pass for a human. As yet, no AI system has passed it although a few have come close.

Perhaps the closest was in 2018, when Google's Duplex AI telephoned a hairdresser's salon and successfully made an appointment.

But like the annoying automated systems that answer the phone when you try to book a cinema ticket, Duplex was working on a very specific task. A true General AI would have been able to continue chatting to the hairdresser about where it was going on its holidays.

Artificial Intelligence is everywhere these days: in autonomous weapons systems, self-driving cars, even in toothbrushes. But while those systems are inhumanly good at their dedicated missions, none of them as yet can learn to do something different without human help.

In polls, most AI experts say that we will see a General Artificial Intelligence by the end of this century. The most optimistic estimates tend to be around 2040, while some pundits put the date somewhere in the 2080s.

But even that might be a bit over-optimistic. Way back in 1965, AI pioneer Herbert A. Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do".

Many pundits say that true artificial consciousness will never be achieved, because we don't fully understand our own. Straying from pure science into something like mysticism, a lot of scientists say that the human mind is something independent of the physical brain.

Anil Seth at the University of Sussex speculates that human consciousness could be substrate-independent: that it's more than just the brain cells it occupies.

Phil Maguire, from the National University of Ireland, told New Scientist: "Machines are made up of components that can be analysed independently.

"They are dis-integrated. Dis-integrated systems can be understood without resorting to the interpretation of consciousness."

Excerpt from:
Killer AI could hit within our lifetime if they develop Artificial General Intelligence - Daily Star

What is Artificial Intelligence (AI)? | Oracle

Despite AI's promise, many companies are not realizing the full potential of machine learning and other AI functions. Why? Ironically, it turns out that the issue is, in large part, people. Inefficient workflows can hold companies back from getting the full value of their AI implementations.

For example, data scientists can face challenges getting the resources and data they need to build machine learning models. They may have trouble collaborating with their teammates. And they have many different open source tools to manage, while application developers sometimes need to entirely recode models that data scientists develop before they can embed them into their applications.

With a growing list of open source AI tools, IT ends up spending more time supporting the data science teams by continuously updating their work environments. This issue is compounded by limited standardization across how data science teams like to work.

Finally, senior executives might not be able to visualize the full potential of their company's AI investments. Consequently, they don't lend enough sponsorship and resources to creating the collaborative and integrated ecosystem required for AI to be successful.

Original post:
What is Artificial Intelligence (AI)? | Oracle

8 Examples of Artificial Intelligence in our Everyday Lives

Main Examples of Artificial Intelligence Takeaways:

The words "artificial intelligence" may seem like a far-off concept that has nothing to do with us. But the truth is that we encounter several examples of artificial intelligence in our daily lives.

From Netflix's movie recommendations to Amazon's Alexa, we now rely on various AI models without knowing it. In this post, we'll consider eight examples of how we're already using artificial intelligence.

Artificial intelligence is an expansive branch of computer science that focuses on building smart machines. Thanks to AI, these machines can learn from experience, adjust to new inputs, and perform human-like tasks. For example, chess-playing computers and self-driving cars rely heavily on natural language processing and deep learning to function.

American computer scientist John McCarthy coined the term "artificial intelligence" back in 1956. At the time, McCarthy only created the term to distinguish the AI field from cybernetics.

However, AI is more popular than ever today.

Hollywood movies tend to depict artificial intelligence as a villainous technology that is destined to take over the world.

One example is the artificial superintelligence system Skynet from the film franchise Terminator. There's also VIKI, an AI supercomputer from the movie I, Robot, who deemed that humans can't be trusted with their own survival.

Hollywood has also depicted AI as superintelligent robots, as in the movies I Am Mother and Ex Machina.

However, current AI technologies are neither as sinister nor as advanced. With that said, these depictions raise an essential question: is artificial intelligence the same as robotics?

No, not exactly. Artificial intelligence and robotics are two entirely separate fields. Robotics is a technology branch that deals with physical robots: programmable machines designed to perform a series of tasks. On the other hand, AI involves developing programs to complete tasks that would otherwise require human intelligence. However, the two fields can overlap to create artificially intelligent robots.

Most robots are not artificially intelligent. For example, industrial robots are usually programmed to perform the same repetitive tasks. As a result, they typically have limited functionality.

However, introducing an AI algorithm to an industrial robot can enable it to perform more complex tasks. For instance, it can use a path-finding algorithm to navigate around a warehouse autonomously.
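As a toy illustration (not tied to any real warehouse product), a path-finding algorithm can be as simple as a breadth-first search over a grid map of the floor; the grid, start, and goal below are invented for this sketch:

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search over a warehouse grid; 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route around the obstacles

warehouse = [
    [0, 0, 0],
    [1, 1, 0],  # a shelf blocks most of this row
    [0, 0, 0],
]
route = find_path(warehouse, (0, 0), (2, 0))
```

Because breadth-first search expands positions in order of distance, the first route it finds around the shelf is also a shortest one.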

To understand how that's possible, we must address another question: what are the four types of artificial intelligence?

The four artificial intelligence types are reactive machines, limited memory, Theory of Mind, and self-aware. These AI types exist as a hierarchy, where the simplest level requires basic functioning and the most advanced level is, well, all-knowing. Other subsets of AI include big data, machine learning, and natural language processing.

The simplest types of AI systems are reactive. They can neither learn from experiences nor form memories. Instead, reactive machines react to some inputs with some output.

Examples of artificial intelligence machines in this category include Googles AlphaGo and IBMs chess-playing supercomputer, Deep Blue.

Deep Blue can identify chess pieces and knows how each of them moves. While the machine can choose the most optimal move from several possibilities, it can't predict the opponent's moves.

A reactive machine doesn't rely on an internal concept of the world. Instead, it perceives the world directly and acts on what it sees.
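A reactive machine can be sketched as a pure function from the current perception to an action, with no stored state at all; the move names and scores below are invented purely for illustration:

```python
# A reactive "agent": a pure function from current perception to action.
# It keeps no memory of past inputs, so the same state always gets the same move.
def reactive_move(board_state):
    """Pick the move with the highest immediate score; no lookahead, no memory."""
    candidate_moves = {"advance": 3, "capture": 5, "retreat": 1}  # made-up scores
    legal = [m for m in candidate_moves if m in board_state["legal_moves"]]
    return max(legal, key=candidate_moves.get)

state = {"legal_moves": ["advance", "retreat"]}
print(reactive_move(state))  # "advance": the best-scoring legal move right now
```

Notice the function never learns: playing it a thousand times would produce exactly the same behaviour, which is what distinguishes reactive machines from the limited-memory type described next.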

Limited memory refers to an AI's ability to store previous data and use it to make better predictions. In other words, these types of artificial intelligence can look at the recent past to make immediate decisions.

Note that limited memory is required to create every machine learning model. However, a model can be deployed as a reactive machine type.

The three significant examples of artificial intelligence in this category are:

Self-driving cars are limited memory AIs that make immediate decisions using data from the recent past.

For example, self-driving cars use sensors to identify steep roads, traffic signals, and civilians crossing the streets. The vehicles can then use this information to make better driving decisions and avoid accidents.
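A hypothetical sketch of that "limited memory" idea: the controller below keeps only a short window of recent sensor readings and decides from them. The gap values and the braking rule are invented for illustration, not taken from any real vehicle:

```python
from collections import deque

class LimitedMemoryController:
    """Keeps only a short window of recent readings: the 'limited memory'."""
    def __init__(self, window=5):
        self.readings = deque(maxlen=window)  # older readings fall out automatically

    def observe(self, gap_metres):
        """Record the latest measured gap to the car ahead."""
        self.readings.append(gap_metres)

    def decide(self):
        """Brake if the gap has been shrinking over the remembered window."""
        if len(self.readings) >= 2 and self.readings[-1] < self.readings[0]:
            return "brake"
        return "cruise"

ctrl = LimitedMemoryController()
for gap in (30.0, 24.0, 18.0):  # the gap has been closing
    ctrl.observe(gap)
print(ctrl.decide())  # "brake"
```

The `deque(maxlen=...)` is the whole trick: the controller remembers just enough of the recent past to act on a trend, but nothing older.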

In psychology, theory of mind refers to the ability to attribute mental states (beliefs, intent, desires, emotion, knowledge) to oneself and others. It's the fundamental reason we can have social interactions.

Unfortunately, we're yet to reach the Theory of Mind artificial intelligence type. Although voice assistants exhibit such capabilities, it's still a one-way relationship.

For example, you could yell angrily at Google Maps to take you in another direction. However, it'll neither show concern for your distress nor offer emotional support. Instead, the map application will return the same traffic report and ETA.

An AI system with Theory of Mind would understand that humans have thoughts, feelings, and expectations for how to be treated. That way, it can adjust its response accordingly.

The final step of AI development is to build self-aware machines that can form representations of themselves. It's an extension and advancement of Theory of Mind AI.

A self-aware machine has human-level consciousness, with the ability to think, desire, and understand its feelings. At the moment, these types of artificial intelligence only exist in movies and comic book pages. Self-aware machines do not exist.

Although self-aware machines are still decades away, several artificial intelligence examples already exist in our everyday lives.

Several examples of artificial intelligence impact our lives today. These include FaceID on iPhones, the search algorithm on Google, and the recommendation algorithm on Netflix. You'll also find other examples of how AI is in use today on social media, in digital assistants like Alexa, and in ride-hailing apps such as Uber.

Virtual filters on Snapchat and the FaceID unlock on iPhones are two examples of AI applications today. While the former uses face detection technology to identify any face, the latter relies on face recognition.

So, how does it work?

The TrueDepth camera on Apple devices projects over 30,000 invisible dots to create a depth map of your face. It also captures an infrared image of the user's face.

After that, a machine learning algorithm compares the scan of your face with previously enrolled facial data. That way, it can determine whether to unlock the device or not.
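A heavily simplified sketch of that comparison step: real FaceID runs on a secure chip with far richer data, but the core idea of matching a fresh scan against enrolled data can be illustrated as a similarity threshold over made-up embedding vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def matches(enrolled_embedding, scan_embedding, threshold=0.95):
    """Unlock only if the fresh scan is close enough to the enrolled face."""
    return cosine_similarity(enrolled_embedding, scan_embedding) >= threshold

enrolled = [0.12, 0.80, 0.55, 0.10]    # stored at enrolment (invented numbers)
fresh_scan = [0.11, 0.79, 0.56, 0.12]  # captured at unlock time
print(matches(enrolled, fresh_scan))   # True: close enough to unlock
```

The threshold is where the "one in a million" trade-off lives: raise it and strangers are rejected more reliably, but the owner's new glasses or beard are more likely to be rejected too.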

According to Apple, FaceID automatically adapts to changes in the user's appearance. These include wearing cosmetic makeup, growing facial hair, or wearing hats, glasses, or contact lenses.

The Cupertino-based tech giant also stated that the chance of fooling FaceID is one in a million.

Several text editors today rely on artificial intelligence to provide the best writing experience.

For example, document editors use an NLP algorithm to identify incorrect grammar usage and suggest corrections. Besides auto-correction, some writing tools also provide readability and plagiarism grades.

However, editors such as INK have taken AI usage a bit further to provide specialized functions. INK uses artificial intelligence to offer smart web content optimization recommendations.

Just recently, INK has released a study showing how its AI-powered writing platform can improve content relevance and help drive traffic to sites. You can read their full study here.

Social media platforms such as Facebook, Twitter, and Instagram rely heavily on artificial intelligence for various tasks.

Currently, these social media platforms use AI to personalize what you see on your feeds. The model identifies users' interests and recommends similar content to keep them engaged.

Also, researchers trained AI models to recognize hate keywords, phrases, and symbols in different languages. That way, the algorithm can swiftly take down social media posts that contain hate speech.
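At its crudest, such a filter can be sketched as a keyword match. Real moderation systems use learned models across many languages and symbols; the placeholder blocklist below is invented for illustration:

```python
import re

# Hypothetical blocklist; production systems learn these patterns from data.
BLOCKED_TERMS = {"badword1", "badword2"}

def should_remove(post_text):
    """Flag a post if it contains any blocked keyword (case-insensitive, whole words)."""
    words = re.findall(r"[a-z0-9]+", post_text.lower())
    return any(word in BLOCKED_TERMS for word in words)

print(should_remove("This contains badword1 somewhere"))  # True
print(should_remove("A perfectly friendly post"))         # False
```

Tokenizing into whole words (rather than substring matching) avoids flagging innocent words that merely contain a blocked term, a classic pitfall of naive filters.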

Other examples of artificial intelligence in social media include:

Plans for social media platforms involve using artificial intelligence to identify mental health problems. For example, an algorithm could analyze posted and consumed content to detect suicidal tendencies.

Getting answers to queries from a customer service representative can be very time-consuming. That's where artificial intelligence comes in.

Computer scientists train chat robots, or chatbots, to impersonate the conversational styles of customer representatives using natural language processing.

Chatbots can now answer questions that require a detailed response in place of a specific yes or no answer. What's more, the bots can learn from previous bad ratings to ensure maximum customer satisfaction.

As a result, machines now perform basic tasks such as answering FAQs or taking and tracking orders.
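A minimal sketch of the FAQ-answering idea, assuming a tiny hand-written question bank and matching by shared words (real chatbots use far richer NLP than this):

```python
# A hypothetical FAQ bank; the entries are invented for illustration.
FAQ = {
    "What are your opening hours?": "We are open 9am-5pm, Monday to Friday.",
    "How do I track my order?": "Use the tracking link in your confirmation email.",
    "What is your returns policy?": "Items can be returned within 30 days.",
}

def answer(question):
    """Reply with the FAQ entry sharing the most words with the customer's question."""
    asked = set(question.lower().split())
    best = max(FAQ, key=lambda q: len(asked & set(q.lower().split())))
    return FAQ[best]

print(answer("where can I track an order I placed?"))
```

Word overlap is the bluntest possible matching signal; a production bot would use embeddings or an intent classifier, but the retrieve-the-closest-question structure is the same.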

Media streaming platforms such as Netflix, YouTube, and Spotify rely on smart recommendation systems powered by AI.

First, the system collects data on users' interests and behavior from various online activities. After that, machine learning and deep learning algorithms analyze the data to predict preferences.

That's why you'll always find movies that you're likely to watch in Netflix's recommendations, without having to search any further.
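A toy sketch of that prediction step, with invented users, titles, and ratings: recommend the title most liked by the user whose tastes are closest to yours (the simplest form of collaborative filtering):

```python
# Hypothetical viewing history; every name and rating is made up.
ratings = {
    "ann":  {"Dune": 5, "Heat": 4, "Up": 1},
    "ben":  {"Dune": 5, "Heat": 5, "Coco": 4},
    "cara": {"Up": 5, "Coco": 5},
}

def similarity(a, b):
    """Closer ratings on shared titles mean higher (less negative) similarity."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return float("-inf")
    return -sum(abs(ratings[a][t] - ratings[b][t]) for t in shared)

def recommend(user):
    """Suggest the best unseen title from the most similar other user."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    unseen = [t for t in ratings[nearest] if t not in ratings[user]]
    return max(unseen, key=ratings[nearest].get) if unseen else None

print(recommend("ann"))  # "Coco": ben rates most like ann, and ann hasn't seen Coco
```

Streaming services layer deep learning and many more signals on top, but "find users like you, then surface what they liked" is still the core shape of the system.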

Search algorithms ensure that the top results on the search engine result page (SERP) have the answers to our queries. But how does this happen?

Search companies usually include some type of quality control algorithm to recognize high-quality content. It then provides a list of search results that best answer the query and offers the best user experience.

Since search engines are built entirely of code, they rely on natural language processing (NLP) technology to understand queries.
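A crude sketch of the ranking idea, scoring pages by how often the query terms appear in them. Real search engines combine hundreds of signals; the pages and query here are invented:

```python
def score(query, document):
    """Count query-term occurrences: a crude stand-in for a relevance signal."""
    doc_words = document.lower().split()
    return sum(doc_words.count(term) for term in query.lower().split())

def rank(query, documents):
    """Order documents from most to least relevant for the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)

pages = [
    "cheap flights and hotel deals",
    "artificial intelligence in search engines",
    "search engines use artificial intelligence to rank search results",
]
print(rank("artificial intelligence search", pages)[0])
```

Pure term counting is easy to game, which is exactly why modern engines moved to learned models like BERT (discussed next) that judge meaning rather than raw word frequency.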

Last year, Google announced Bidirectional Encoder Representations from Transformers (BERT), an NLP pre-training technique. Now, the technology powers almost all English-based queries on Google Search.

In October 2011, Apple's Siri became the first digital assistant to come as standard on a smartphone. However, voice assistants have come a long way since then.

Today, Google Assistant incorporates advanced NLP and ML to become well-versed in human language. Not only does it understand complex commands, but it also provides satisfactory outputs.

Also, digital assistants now have adaptive capabilities for analyzing user preferences, habits, and schedules. That way, they can organize and plan actions such as reminders, prompts, and schedules.

Various smart home devices now use AI applications to conserve energy.

For example, smart thermostats such as Nest use our daily habits and heating/cooling preferences to adjust home temperatures. Likewise, smart refrigerators can create shopping lists based on what's absent from the fridge's shelves.

The way we use artificial intelligence at home is still evolving. More AI solutions now analyze human behavior and function accordingly.

We encounter AI daily, whether we're surfing the internet or listening to music on Spotify.

Other examples of artificial intelligence are visible in smart email apps, e-commerce, smart keyboard apps, as well as banking and finance. Artificial intelligence now plays a significant role in our decisions and lifestyle.

The media may have portrayed AI as a competitor to human workers or a concept that'll eventually take over the world. But that's not the case.

Instead, artificial intelligence is helping humans become more productive and live better lives.

More here:
8 Examples of Artificial Intelligence in our Everyday Lives

Top 10 Artificial Intelligence Books for Beginner in 2021 …

In 2021, Artificial Intelligence is one of the hottest and most in-demand fields; most engineers want to build their careers in AI, Data Science and Data Analytics. Going through the best and most reliable resources is the best way to learn, so here is a list of the best AI books.

Artificial Intelligence is the field of study that simulates the processes of human intelligence on computer systems. These processes include acquiring information, using it, and approximating conclusions. Research topics in AI include problem-solving, reasoning, planning, natural language processing, and machine learning. Automation, robotics, and sophisticated computer software and programs characterize a career in Artificial Intelligence. Basic foundations in maths, technology, logic, and engineering can go a long way in kick-starting a career in Artificial Intelligence.

Here we have listed a few basic and advanced Artificial Intelligence books, which will help you find your way around AI.

By Stuart Russell and Peter Norvig

This edition covers the changes and developments in Artificial Intelligence since those covered in the last edition of this book in 2003. This book covers the latest developments in AI in the fields of practical speech recognition, machine translation, autonomous vehicles, and household robotics. It also covers progress in areas such as probabilistic reasoning, machine learning, and computer vision.

You can buy it here.

By James V Stone

In this book, key neural network learning algorithms are explained, followed by detailed mathematical analyses. Online computer programs collated from open source repositories give hands-on experience of neural networks. It is an ideal introduction to the algorithmic engines of modern-day artificial intelligence.

You can buy it here.

By Denis Rothman

This book serves as a starting point for understanding how Artificial Intelligence works with the help of real-life scenarios. You will be able to understand the most advanced machine learning models, understand how to apply AI to blockchain and IoT, and develop emotional quotient in chatbots using neural networks. By the end of this book, you will have understood the fundamentals of AI and worked through a number of case studies that will help you develop your business vision. This book will help you develop your adaptive thinking to solve real-life AI cases. Prior experience with Python and statistical knowledge is essential to make the most of this book.

You can buy it here.

By Chandra S.S.V

This book is primarily intended for undergraduate and postgraduate students of computer science and engineering. The textbook bridges the gap between the difficult contexts of Artificial Intelligence and Machine Learning, and provides numerous case studies and worked-out examples. In addition to Artificial Intelligence and Machine Learning, it also covers various types of learning, such as reinforcement, supervised, unsupervised, and statistical learning. It features well-explained algorithms and pseudo-code for each topic, which makes this book very useful for students.

You can buy it here.

By Tom Taulli

This book equips you with a fundamental grasp of Artificial Intelligence and its impact. It provides a non-technical introduction to important concepts such as Machine Learning, Deep Learning, Natural Language Processing, Robotics and more. Further, the author expands on the questions surrounding the future impact of AI on aspects that include societal trends, ethics, governments, company structures and daily life.

You can buy it here.

By Neil Wilkins

This book gives you a glimpse into Artificial Intelligence and a hypothetical simulation of a living brain inside a computer. This book features the following topics:

You can buy it here.

By Deepak Khemani

This book follows a bottom-up approach, exploring the basic strategies needed for problem-solving, focusing mainly on the intelligence part. Its main features include an introductory course on Artificial Intelligence, a knowledge-based approach using agents throughout, and detailed, well-structured algorithms with proofs.

You can buy it here.

By Mariya Yao, Adelyn Zhou, Marlene Jia

Applied Artificial Intelligence is a practical guide for business leaders who are passionate about leveraging machine intelligence to enhance the productivity of their organizations and the quality of life in their communities. This book focuses on driving concrete business decisions through applications of artificial intelligence and machine learning. It is one of the best practical guides for business leaders looking to get true value from the adoption of machine learning technology.

You can buy it here.

By Parag Suresh Mahajan, MD

This book explores the role of Artificial Intelligence in Healthcare, how it is revolutionizing all aspects of healthcare and guides you through the current state and future applications of AI in healthcare, including those under development. It also discusses the ethical concerns related to the use of AI in healthcare, principles of AI & how it works, the vital role of AI in all major medical specialties, & the role of start-ups and corporate players in AI in healthcare.

You can buy it here.

By Max Tegmark

This book takes its readers to the heart of the latest AI thought process to explore the next phase of human existence. The author here explores the burning questions of how to prosper through automation without leaving people jobless, how to ensure that future AI systems work as intended without malfunctioning or getting hacked and how to flourish life with AI without eventually getting outsmarted by lethal autonomous machines.

You can buy it here.

By Dr. Dheeraj Mehrotra

This book delivers an understanding of Artificial Intelligence and Machine Learning within a broader framework of technology.

You can buy it here.

By Peter Norvig

This book teaches advanced Common Lisp techniques in the context of building major AI systems. It reconstructs authentic, complex AI programs using state-of-the-art Common Lisp, builds and debugs robust practical programs while demonstrating superior programming style and important AI concepts. It is a useful supplement for general AI courses and an indispensable reference for a professional programmer.

You can buy it here.

By Rahul Kumar, Ankit Dixit, Denis Rothman, Amir Ziai, Mathew Lamons

This book helps you gain real-world context through deep learning problems spanning research and application, and shows how to design and implement machine intelligence using real-world AI-based examples. It covers machine learning, deep learning, data analysis, TensorFlow, Python, and the fundamentals of AI, and you will be able to apply your skills in real-world projects.

You can buy it here.

By Giuseppe Bonaccorso, Armando Fandango, Rajalingappaa Shanmugamani

This book is a complete guide to learning popular machine learning algorithms. You will learn how to extract features from your dataset and perform dimensionality reduction by using Python-based libraries. Then you will learn the advanced features of TensorFlow and implement different techniques related to object classification, object detection, image segmentation and more. By the end of this book, you will have an in-depth knowledge of TensorFlow and will be the go-to person for solving AI problems.

You can buy it here.

By Chris Baker

This book explores the potential consequences of Artificial Intelligence and how it will shape the world in the coming years. It familiarizes how AI aims to aid human cognitive limitations. It covers:

You can buy it here.

By John Mueller and Luca Massaron

This offers a much-needed entry point for anyone looking to use machine learning to accomplish practical tasks. This book makes it easy to understand and implement machine learning seamlessly. It explains how

You can buy it here.

By Ethem Alpaydin

It is a concise overview of machine learning, which underlies applications that include recommendation systems, face recognition, and driverless cars. The author presents the subject for the general reader, describing its evolution, explaining important learning algorithms, and presenting example applications.

You can buy it here.

By John D. Kelleher, Brian Mac Namee

It is a comprehensive introduction to the most important machine learning approaches used in predictive data analytics, covering both theoretical concepts and practical applications. Technical and mathematical material is augmented with explanatory worked examples, and case studies illustrate the application of these models in the broader business context. Finally, the book considers techniques for evaluating prediction models and offers two case studies that describe specific data analytics projects through each phase of development, from formulating the business problem to implementation of the analytics solution.

You can buy it here.

By Chris Sebastian

This book traces the development of Machine Learning from the early days of computer learning to machines being able to beat human experts. It explains the importance of data and how massive amounts of it provide ML programmers with the information they need to develop learning algorithms. The book also explores the relationship between Artificial Intelligence and Machine Learning.

You can buy it here.

By Deepti Gupta

It is a data science book offering an effective understanding of ML algorithms in R and SAS, with real-world industrial data sets. It covers the role of analytics in various industries, with case studies in banking, retail, telecommunications, healthcare, airlines, and FMCG, along with analytical solutions.

You can buy it here.

By Marcos López de Prado

This book teaches readers how to structure big data in a way that is amenable to machine learning algorithms, how to conduct research on that data with ML algorithms, how to use supercomputing methods, and how to backtest discoveries while avoiding false positives. The book addresses real-life problems faced by practitioners on a daily basis and explains scientifically sound solutions using math, supported by code and examples.

You can buy it here.

By Stuart Russell

In this book, the author explores the idea of intelligence in humans and machines. He describes the near-term benefits that can be expected, from intelligent personal assistants to vastly accelerated scientific research. The author suggests that AI can be built on a new foundation in which machines are designed to be uncertain about the human preferences they are required to satisfy. Such machines would be humble, altruistic, and committed to pursuing human objectives.

You can buy it here.

A career in Artificial Intelligence can be realized in a variety of spheres, including private organizations, public undertakings, education, the arts, health care, government services, and the military. The field of artificial intelligence continues to advance every day. Hence, those with the ability to translate digital bits of data into meaningful human conclusions will be able to sustain a rewarding career in this field. You can check out the many courses and certifications provided online in this field. If your intent is serious, the courses will definitely pay off, and a whole lot of opportunities will show up along the way.

People are also reading:

View original post here:
Top 10 Artificial Intelligence Books for Beginner in 2021 ...

7 Risks Of Artificial Intelligence You Should Know | Built In

Last March, at the South by Southwest tech conference in Austin, Texas, Tesla and SpaceX founder Elon Musk issued a friendly warning. "Mark my words," he said, billionaire casual in a furry-collared bomber jacket and days-old scruff, "AI is far more dangerous than nukes."

No shrinking violet, especially when it comes to opining about technology, the outspoken Musk has repeated a version of these artificial intelligence premonitions in other settings as well.

"I am really quite close to the cutting edge in AI, and it scares the hell out of me," he told his SXSW audience. "It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential."

Musk, though, is far from alone in his exceedingly skeptical (some might say bleakly alarmist) views. A year prior, the late physicist Stephen Hawking was similarly forthright when he told an audience in Portugal that AI's impact could be cataclysmic unless its rapid development is strictly and ethically controlled.

"Unless we learn how to prepare for, and avoid, the potential risks," he explained, "AI could be the worst event in the history of our civilization."

Considering the number and scope of unfathomably horrible events in world history, that's really saying something.

And in case we haven't driven home the point quite firmly enough, research fellow Stuart Armstrong from the Future of Life Institute has spoken of AI as an extinction risk were it to go rogue. Even nuclear war, he said, is on a different level destruction-wise because it would kill only a relatively small proportion of the planet. Ditto pandemics, even at their most virulent.

"If AI went bad, and 95 percent of humans were killed," he said, "then the remaining five percent would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks."

How, exactly, would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. "The smarter machines become," he wrote, "the more their goals could shift."

"Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called technological singularity or intelligence explosion, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed."

As AI grows more sophisticated and ubiquitous, the voices warning against its current and future pitfalls grow louder. Whether it's the increasing automation of certain jobs, gender and racial bias issues stemming from outdated information sources orautonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And were still in the very early stages.

The tech community has long-debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been proposed as a few of the biggest dangers posed by AI.

Destructive superintelligence, aka artificial general intelligence that's created by humans and escapes our control to wreak havoc, is in a category of its own. It's also something that might or might not come to fruition (theories vary), so at this point it's less risk than hypothetical threat and ever-looming source of existential dread.

Job automation is generally viewed as the most immediate concern. It's no longer a matter of if AI will replace certain types of jobs, but to what degree. In many industries, particularly but not exclusively those whose workers perform predictable and repetitive tasks, disruption is well underway. According to a 2019 Brookings Institution study, 36 million people work in jobs with high exposure to automation, meaning that before long at least 70 percent of their tasks, ranging from retail sales and market analysis to hospitality and warehouse labor, will be done using AI. An even newer Brookings report concludes that white collar jobs may actually be most at risk. And per a 2018 report from McKinsey & Company, the African American workforce will be hardest hit.

"The reason we have a low unemployment rate, which doesn't actually capture people that aren't looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy," renowned futurist Martin Ford told Built In. "I don't think that's going to continue."

As AI robots become smarter and more dextrous, he added, the same tasks will require fewer humans. And while its true that AI will create jobs, an unspecified number of which remain undefined, many will be inaccessible to less educationally advanced members of the displaced workforce.

"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" Ford said. "Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents, really strong interpersonal skills or creativity, that you might not have? Because those are the things that, at least so far, computers are not very good at."

John C. Havens, author of Heartificial Intelligence: Embracing Humanity and Maximizing Machines, calls bull on the theory that AI will create as many or more jobs than it replaces.

About four years ago, Havens said, he interviewed the head of a law firm about machine learning. The man wanted to hire more people, but he was also obliged to achieve a certain level of returns for his shareholders. A $200,000 piece of software, he discovered, could take the place of ten people drawing salaries of $100,000 each. That meant he'd save $800,000. The software would also increase productivity by 70 percent and eradicate roughly 95 percent of errors. "From a purely shareholder-centric, single bottom-line perspective," Havens said, "there is no legal reason that he shouldn't fire all the humans." Would he feel bad about it? Of course. But that's beside the point.

Even professions that require graduate degrees and additional post-college training aren't immune to AI displacement. In fact, technology strategist Chris Messina said, some of them may well be decimated. AI already is having a significant impact on medicine. Law and accounting are next, Messina said, the former being poised for a massive shakeup.

"Think about the complexity of contracts, and really diving in and understanding what it takes to create a perfect deal structure," he said. "It's a lot of attorneys reading through a lot of information, hundreds or thousands of pages of data and documents. It's really easy to miss things. So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys."

Accountants should also prepare for a big shift, Messina warned. Once AI is able to quickly comb through reams of data to make automatic decisions based on computational interpretations, human auditors may well be unnecessary.

While job loss is currently the most pressing issue related to AI disruption, it's merely one among many potential risks. In a February 2018 paper titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, 26 researchers from 14 institutions (academic, civil and industry) enumerated a host of other dangers that could cause serious harm or, at minimum, sow minor chaos in less than five years.

Malicious use of AI, they wrote in their 100-page report, could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example, he said, is China's Orwellian use of facial recognition technology in offices, schools and other venues. But that's just one country. A whole ecosphere of companies specialize in similar tech and sell it around the world.

What we can so far only guess at is whether that tech will ever become normalized. As with the internet, where we blithely sacrifice our digital data at the altar of convenience, will round-the-clock, AI-analyzed monitoring someday seem like a fair trade-off for increased safety and security despite its nefarious exploitation by bad actors?

"Authoritarian regimes use or are going to use it," Ford said. "The question is, how much does it invade Western countries, democracies, and what constraints do we put on it?"

AI will also give rise to hyper-real-seeming social media personalities that are very difficult to differentiate from real ones, Ford said. Deployed cheaply and at scale on Twitter, Facebook or Instagram, they could conceivably influence an election.

The same goes for so-called audio and video deepfakes created by manipulating voices and likenesses. The latter is already making waves. But the former, Ford thinks, will prove immensely troublesome. Using machine learning, a subset of AI that's involved in natural language processing, an audio clip of any given politician could be manipulated to make it seem as if that person spouted racist or sexist views when in fact they uttered nothing of the sort. If the clip's quality is high enough to fool the general public and avoid detection, Ford added, it could completely derail a political campaign.

And all it takes is one success.

From that point on, he noted, no one knows what's real and what's not. "So it really leads to a situation where you literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence ... That's going to be a huge issue."

Lawmakers, though frequently less than tech-savvy, are acutely aware and pressing for solutions.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it's a certain kind of work, the predictable, repetitive kind that's prone to AI takeover, research has shown that those who find themselves out in the cold are much less apt to get or seek retraining compared to those in higher-level positions who have more money. (Then again, not everyone believes that.)

Various forms of AI bias are detrimental, too. Speaking recently to the New York Times, Princeton computer science professor Olga Russakovsky said it goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can amplify the former), AI is developed by humans and humans are inherently biased.

"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

In the same article, Google researcher Timnit Gebru said the root of bias is social rather than technological, and called scientists like herself "some of the most dangerous people in the world, because we have this illusion of objectivity." The scientific field, she noted, has to be situated in trying to understand the social dynamics of the world, because most of the radical change happens at the social level.

And technologists aren't alone in sounding the alarm about AI's potential socio-economic pitfalls. Along with journalists and political figures, Pope Francis is also speaking up, and he's not just whistling Sanctus. At a late-September Vatican meeting titled The Common Good in the Digital Age, Francis warned that AI has the ability to circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, "to the point of endangering the very institutions that guarantee peaceful civil coexistence."

"If mankind's so-called technological progress were to become an enemy of the common good," he added, "this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest."

A big part of the problem, Messina said, is the private sector's pursuit of profit above all else. "Because that's what they're supposed to do," he said. "And so they're not thinking of, 'What's the best thing here? What's going to have the best possible outcome?'"

"The mentality is, 'If we can do it, we should try it; let's see what happens,'" he added. "And if we can make money off it, we'll do a whole bunch of it. But that's not unique to technology. That's been happening forever."

Not everyone agrees with Musk that AI is more dangerous than nukes, including Ford. But what if AI decides to launch nukes, or, say, biological weapons, sans human intervention? Or what if an enemy manipulates data to return AI-guided missiles whence they came? Both are possibilities. And both would be disastrous. The more than 30,000 AI/robotics researchers and others who signed an open letter on the subject in 2015 certainly think so.

The key question for humanity today is whether to start a global AI arms race or to prevent it from starting, they wrote. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

(The U.S. Military's proposed budget for 2020 is $718 billion. Of that amount, nearly $1 billion would support AI and machine learning for things like logistics, intelligence analysis and, yes, weaponry.)

Earlier this year, a story in Vox detailed a frightening scenario involving the development of a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world's computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

That's jarring, sure. But rest easy. In 2012 the Obama Administration's Department of Defense issued a directive regarding Autonomy in Weapon Systems that included this line: "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

And in early November of this year, a Pentagon group called the Defense Innovation Board published ethical guidelines regarding the design and deployment of AI-enabled weapons. According to the Washington Post, however, the board's recommendations are in no way legally binding. It now falls to the Pentagon to determine how and whether to proceed with them.

Well, that's a relief. Or not.

Have you ever considered that algorithms could bring down our entire financial system? That's right, Wall Street. You might want to take notice. Algorithmic trading could be responsible for our next major financial crisis in the markets.

What is algorithmic trading? This type of trading occurs when a computer, unencumbered by the instincts or emotions that can cloud a human's judgment, executes trades based on pre-programmed instructions. These computers can make extremely high-volume, high-frequency and high-value trades that can lead to big losses and extreme market volatility. Algorithmic high-frequency trading (HFT) is proving to be a huge risk factor in our markets. HFT is essentially when a computer places thousands of trades at blistering speeds with the goal of selling a few seconds later for small profits. Thousands of these trades every second can equal a pretty hefty chunk of change. The issue with HFT is that it doesn't take into account how interconnected the markets are, or the fact that human emotion and logic still play a massive role in our markets.
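As a toy illustration of the pre-programmed, emotion-free trading described above, here is a minimal sketch of a rule-based trading decision in Python. The price series, thresholds and moving-average rule are all invented for illustration; real HFT systems react to live order-book data in microseconds, not to a handful of prices.

```python
# Minimal sketch of a rule-based trading decision: buy when the price
# dips below a short moving average, sell when it rises above it.
# All numbers here are hypothetical.

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def decide(prices, window=3, band=0.005):
    """Return 'buy', 'sell', or 'hold' based on a fixed, pre-programmed rule."""
    if len(prices) < window:
        return "hold"
    avg = moving_average(prices, window)
    last = prices[-1]
    if last < avg * (1 - band):   # price dipped below the average: buy
        return "buy"
    if last > avg * (1 + band):   # price rose above the average: sell
        return "sell"
    return "hold"

ticks = [100.0, 100.1, 100.2, 99.0]   # sudden dip on the last tick
print(decide(ticks))  # the rule fires 'buy' on the dip
```

A machine applies this kind of rule thousands of times per second with no regard for why the price moved, which is exactly how mechanical selling can feed on itself.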

A sell-off of millions of shares in the airline market could potentially scare humans into selling off their shares in the hotel industry, which in turn could snowball people into selling off their shares in other travel-related companies, which could then affect logistics companies, food supply companies, etc.

Take the Flash Crash of May 2010 as an example. Towards the end of the trading day, the Dow Jones plunged 1,000 points (more than $1 trillion in value) before rebounding towards normal levels just 36 minutes later. What caused this crash? A London-based trader named Navinder Singh Sarao triggered the crash, and HFT computers then exacerbated it. Apparently Sarao used a spoofing algorithm that placed an order for thousands of stock index futures contracts betting that the market would fall. Instead of going through with the bet, Sarao planned to cancel the order at the last second and buy the lower-priced stocks that were being sold off due to his original bet. Other humans and HFT computers saw this $200 million bet and took it as a sign that the market was going to tank. In turn, HFT computers began one of the biggest stock sell-offs in history, causing a brief loss of more than $1 trillion globally.
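The spoofing pattern described above, large orders placed with the intent to cancel, is something market surveillance can look for. Below is a toy heuristic that flags a trader whose order volume is overwhelmingly cancelled rather than filled; the threshold and data format are invented for illustration and are not those of any real exchange.

```python
# Toy surveillance heuristic: flag a trader whose large orders are
# overwhelmingly cancelled before execution, the signature of spoofing,
# where orders exist only to move the market. Thresholds are illustrative.

def spoofing_flag(orders, cancel_ratio_threshold=0.95, min_orders=10):
    """orders: list of (size, was_cancelled) tuples for one trader."""
    if len(orders) < min_orders:
        return False  # not enough activity to judge
    cancelled = sum(size for size, was_cancelled in orders if was_cancelled)
    total = sum(size for size, _ in orders)
    return cancelled / total >= cancel_ratio_threshold

# A trader who cancels nearly all volume looks suspicious:
spoofer = [(1000, True)] * 19 + [(1000, False)]
print(spoofing_flag(spoofer))  # True: 95% of its volume was cancelled
```

Real-world detection is far harder, since legitimate market-making also involves heavy cancellation, but the cancel-to-fill ratio is the core idea.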

Financial HFT algorithms aren't always correct, either. We view computers as the end-all-be-all when it comes to being correct, but AI is still really just as smart as the humans who programmed it. In 2012, Knight Capital Group experienced a glitch that put them on the verge of bankruptcy. Knight's computers mistakenly streamed thousands of orders per second into the NYSE market, causing mass chaos for the company. The HFT algorithms executed an astounding 4 million trades of 397 million shares in only 45 minutes. The volatility created by this computer error led to Knight losing $460 million overnight and having to be acquired by another firm. Errant algorithms obviously have massive implications for shareholders and the markets themselves, and nobody learned this lesson harder than Knight.

Many believe the only way to prevent or at least temper the most malicious AI from wreaking havoc is some sort of regulation.

"I am not normally an advocate of regulation and oversight, I think one should generally err on the side of minimizing those things, but this is a case where you have a very serious danger to the public," Musk said at SXSW.

It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important.

Ford agrees, with a caveat. Regulation of AI implementation is fine, he said, but not of the research itself.

"You regulate the way AI is used," he said, "but you don't hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous."

Because any country that lags in AI development is at a distinct disadvantage militarily, socially and economically. The solution, Ford continued, is selective application:

"We decide where we want AI and where we don't; where it's acceptable and where it's not. And different countries are going to make different choices. So China might have it everywhere, but that doesn't mean we can afford to fall behind them in the state-of-the-art."

Speaking about autonomous weapons at Princeton University in October, American General John R. Allen emphasized the need for a robust international conversation that can embrace what this technology is. If necessary, he went on, there should also be a conversation about how best to control it, be that a treaty that fully bans AI weapons or one that permits only certain applications of the technology.

For Havens, safer AI starts and ends with humans. His chief focus, upon which he expounds in his 2016 book, is this: How will machines know what we value if we don't know ourselves? In creating AI tools, he said, it's vitally important to honor end-user values with a human-centric focus rather than fixating on short-term gains.

"Technology has been capable of helping us with tasks since humanity began," Havens wrote in Heartificial Intelligence. "But as a race we've never faced the strong possibility that machines may become smarter than we are or be imbued with consciousness. This technological pinnacle is an important distinction to recognize, both to elevate the quest to honor humanity and to best define how AI can evolve it. That's why we need to be aware of which tasks we want to train machines to do in an informed manner. This involves individual as well as societal choice."

AI researchers Fei-Fei Li and John Etchemendy, of Stanford University's Institute for Human-Centered Artificial Intelligence, feel likewise. In a recent blog post, they proposed involving numerous people in an array of fields to make sure AI fulfills its huge potential and strengthens society instead of weakening it:

"Our future depends on the ability of social- and computer scientists to work side-by-side with people from multiple backgrounds, a significant shift from today's computer science-centric model," they wrote. "The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS). This collaboration should run throughout an application's lifecycle, from the earliest stages of inception through to market introduction and as its usage scales."

Messina is somewhat idealistic about what should happen to help avoid AI chaos, though he's skeptical that it will actually come to pass. Government regulation, he said, isn't a given, especially in light of failures on that front in the social media sphere, whose technological complexities pale in comparison to those of AI. It will take a very strong effort on the part of major tech companies to slow progress in the name of greater sustainability and fewer unintended consequences, especially massively damaging ones.

At the moment, he said, "I don't think the onus is there for that to happen."

As Messina sees things, it's going to take some sort of catalyst to arrive at that point. More specifically, a catastrophic catalyst like war or economic collapse. Though whether such an event will prove big enough to actually effect meaningful long-term change is probably open for debate.

For his part, Ford remains a long-run optimist despite being very un-bullish on AI.

"I think we can talk about all these risks, and they're very real, but AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face, including climate change."

When it comes to the near term, however, his doubts are more pronounced.

"We really need to be smarter," he said. "Over the next decade or two, I do worry about these challenges and our ability to adapt to them."

Read the original:
7 Risks Of Artificial Intelligence You Should Know | Built In

Artificial intelligence: Cheat sheet – TechRepublic

Learn artificial intelligence basics, business use cases, and more in this beginner's guide to using AI in the enterprise.

Artificial intelligence (AI) is the next big thing in business computing. Its uses come in many forms, from simple tools that respond to customer chat, to complex machine learning systems that predict the trajectory of an entire organization. Popularity does not necessarily lead to familiarity, and despite its constant appearance as a state-of-the-art feature, AI is often misunderstood.

In order to help business leaders understand what AI is capable of, how it can be used, and where to begin an AI journey, it's essential to first dispel the myths surrounding this huge leap in computing technology. Learn more in this AI cheat sheet. This article is also available as a download, Cheat sheet: Artificial intelligence (free PDF).

SEE: All of TechRepublic's cheat sheets and smart person's guides

When AI comes to mind, it's easy to get pulled into a world of science-fiction robots like Data from Star Trek: The Next Generation, Skynet from the Terminator series, and Marvin the paranoid android from The Hitchhiker's Guide to the Galaxy.

The reality of AI is nothing like fiction, though. Instead of fully autonomous thinking machines that mimic human intelligence, we live in an age where computers can be taught to perform limited tasks that involve making judgments similar to those made by people, but are far from being able to reason like human beings.

Modern AI can perform image recognition, understand the natural language and writing patterns of humans, make connections between different types of data, identify abnormalities in patterns, strategize, predict, and more.

All artificial intelligence comes down to one core concept: Pattern recognition. At the core of all applications and varieties of AI is the simple ability to identify patterns and make inferences based on those patterns.
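That pattern-recognition core can be shown in miniature. The sketch below, with invented data points, uses a 1-nearest-neighbour rule: label a new example with the label of the most similar example already seen, which is inference from patterns in its simplest form.

```python
# The "identify patterns, make inferences" core of AI in miniature:
# a 1-nearest-neighbour classifier labels a new point by finding the
# most similar example it has already seen. Data points are invented.

import math

def nearest_neighbor(examples, point):
    """examples: list of (features, label); returns the closest example's label."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "car"),
    ((5.1, 4.8), "car"),
]
print(nearest_neighbor(training, (1.1, 1.0)))  # "cat"
print(nearest_neighbor(training, (4.9, 5.1)))  # "car"
```

Production systems replace the two made-up features with millions of learned ones, but the principle, similarity to known patterns, is the same.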

SEE: Artificial intelligence: A business leader's guide (free PDF) (TechRepublic)

AI isn't truly intelligent in the way we define intelligence: It can't think and lacks reasoning skills, it doesn't show preferences or have opinions, and it's not able to do anything outside of the very narrow scope of its training.

That doesn't mean AI isn't useful for businesses and consumers trying to solve real-world problems, it just means that we're nowhere close to machines that can actually make independent decisions or arrive at conclusions without being given the proper data first. Artificial intelligence is still a marvel of technology, but it's still far from replicating human intelligence or truly intelligent behavior.

Additional resources

AI's power lies in its ability to become incredibly skilled at doing the things humans train it to. Microsoft and Alibaba independently built AI machines capable of better reading comprehension than humans, Microsoft has AI that is better at speech recognition than its human builders, and some researchers are predicting that AI will outperform humans in most everything in less than 50 years.

That doesn't mean those AI creations are truly intelligent--only that they're capable of performing human-like tasks with greater efficiency than us error-prone organic beings. If you were to try, say, to give a speech recognition AI an image-recognition task, it would fail completely. All AI systems are built for very specific tasks, and they don't have the capability to do anything else.

Since the COVID-19 pandemic began in early 2020, artificial intelligence and machine learning have seen a surge of activity as businesses rush to fill holes left by employees forced to work remotely, or those who've lost jobs due to the financial strain of the pandemic.

The quick adoption of AI during the pandemic highlights another important thing that AI can do: Replace human workers. According to Gartner, 79% of businesses are currently exploring or piloting AI projects, meaning those projects are in the early post-COVID-19 stages of development. What the pandemic has done for AI is cause a shift in priorities and applications: Instead of focusing on financial analysis and consumer insight, post-pandemic AI projects are focusing on customer experience and cost optimization, Algorithmia found.

Like other AI applications, customer experience and cost optimization are based on pattern recognition. In the case of the former, AI bots can perform many basic customer service tasks, freeing employees up to only address cases that need human intervention. AI like this has been particularly widespread during the pandemic, when workers forced out of call centers put stress on the customer service end of business.
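A customer-service bot of the basic kind described here can be sketched as keyword pattern matching: answer the routine cases and escalate the rest to a human. The intents and replies below are invented for illustration; production bots use trained language models rather than hand-written keyword lists.

```python
# Bare-bones customer-service bot: match the customer's message against
# keyword patterns, answer the routine cases, and escalate anything
# unrecognised to a human agent. Intents and replies are invented.

INTENTS = {
    "hours": (["open", "hours", "close"], "We're open 9am-5pm, Mon-Fri."),
    "returns": (["return", "refund"], "You can return items within 30 days."),
}

def reply(message):
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(k in text for k in keywords):
            return answer
    return "Let me connect you with a human agent."

print(reply("What are your opening hours?"))
print(reply("My package arrived damaged"))  # unmatched: escalates to a human
```

Even this crude matcher shows the division of labor the paragraph describes: the bot absorbs the predictable questions so employees handle only the cases that genuinely need judgment.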

Additional resources

Modern AI systems are capable of amazing things, and it's not hard to imagine what kind of business tasks and problem solving exercises they could be suited to. Think of any routine task, even incredibly complicated ones, and there's a possibility an AI can do it more accurately and quickly than a human--just don't expect it to do science fiction-level reasoning.

In the business world, there are plenty of AI applications, but perhaps none is gaining traction as much as business analytics and its end goal: Prescriptive analytics.

Business analytics is a complicated set of processes that aim to model the present state of a business, predict where it will go if kept on its current trajectory, and model potential futures with a given set of changes. Prior to the AI age, analytics work was slow, cumbersome, and limited in scope.

SEE: Special report: Managing AI and ML in the enterprise (ZDNet) | Download the free PDF version (TechRepublic)

When modeling the past of a business, it's necessary to account for nearly endless variables, sort through tons of data, and include all of it in an analysis that builds a complete picture of the up-to-the-present state of an organization. Think about the business you're in and all the things that need to be considered, and then imagine a human trying to calculate all of it--cumbersome, to say the least.

Predicting the future with an established model of the past can be easy enough, but prescriptive analysis, which aims to find the best possible outcome by tweaking an organization's current course, can be downright impossible without AI help.
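Prescriptive analytics, at its simplest, is a search over candidate adjustments for the best predicted outcome. The sketch below uses a deliberately crude, invented profit model and a brute-force search; real systems use far richer models and smarter optimizers, but the shape of the problem is the same.

```python
# Prescriptive analytics in miniature: given a crude, invented profit
# model, search over candidate price / ad-spend adjustments and
# prescribe the combination with the best predicted outcome.

from itertools import product

def predicted_profit(price, ad_spend):
    units = max(0, 1000 - 8 * price + 2 * ad_spend)  # toy demand model
    return units * (price - 20) - ad_spend * 100     # margin minus ad cost

def prescribe(prices, ad_spends):
    """Return the (price, ad_spend) pair with the highest predicted profit."""
    return max(product(prices, ad_spends),
               key=lambda pa: predicted_profit(*pa))

best = prescribe(prices=[40, 60, 80], ad_spends=[0, 10, 20])
print(best, predicted_profit(*best))  # (80, 20) 22000
```

Swap the toy demand function for a model learned from years of company data and the candidate grid for thousands of possible interventions, and this brute-force loop becomes the kind of analysis that was impractical before AI.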

SEE: Artificial intelligence ethics policy (TechRepublic Premium)

There are many artificial intelligence software platforms and AI machines designed to do all that heavy lifting, and the results are transforming businesses: What was once out of reach for smaller organizations is now feasible, and businesses of all sizes can make the most of each resource by using artificial intelligence to design the perfect future.

Analytics may be the rising star of business AI, but it's hardly the only application of artificial intelligence in the commercial and industrial worlds. Other AI use cases for businesses include the following.

If a problem involves data, there's a good possibility that AI can help. This list is hardly complete, and new innovations in AI and machine learning are being made all the time.

Additional resources

What AI platforms are available?

When adopting an AI strategy, it's important to know what sorts of software are available for business-focused AI. There are a wide variety of platforms available from the usual cloud-hosting suspects like Google, AWS, Microsoft, and IBM, and choosing the right one can mean the difference between success and failure.

AWS Machine Learning offers a wide variety of tools that run in the AWS cloud. AI services, pre-built frameworks, analytics tools, and more are all available, with many designed to take the legwork out of getting started. AWS offers pre-built algorithms, one-click machine learning training, and training tools for developers getting started in, or expanding their knowledge of AI development.

Google Cloud offers similar AI solutions to AWS, as well as having several pre-built total AI solutions that organizations can (ideally) plug into their organizations with minimal effort. Google's AI offerings include the TensorFlow open source machine learning library.

Microsoft's AI platform comes with pre-generated services, ready-to-deploy cloud infrastructure, and a variety of additional AI tools that can be plugged in to existing models. Its AI Lab also offers a wide range of AI apps that developers can tinker with and learn from what others have done. Microsoft also offers an AI school with educational tracks specifically for business applications.

Watson is IBM's version of cloud-hosted machine learning and business AI, but it goes a bit further with more AI options. IBM offers on-site servers custom built for AI tasks for businesses that don't want to rely on cloud hosting, and it also has IBM AI OpenScale, an AI platform that can be integrated into other cloud hosting services, which could help to avoid vendor lock-in.

Before choosing an AI platform, it's important to determine what sorts of skills you have available within your organization, and what skills you'll want to focus on when hiring new AI team members. The platforms can require specialization in different sorts of development and data science skills, so be sure to plan accordingly.

Additional resources

With business AI taking so many forms, it can be tough to determine what skills an organization needs to implement it.

As previously reported by TechRepublic, finding employees with the right set of AI skills is the problem most commonly cited by organizations looking to get started with artificial intelligence.

Skills needed for an AI project differ based on business needs and the platform being used, though most of the biggest platforms (like those listed above) support most, if not all, of the most commonly used programming languages and skills needed for AI.

SEE: Don't miss our latest coverage about AI (TechRepublic on Flipboard)

TechRepublic covered in March 2018 the 10 most in-demand AI skills, which is an excellent summary of the types of training an organization should look at when building or expanding a business AI team.

Many business AI platforms offer training courses in the specifics of running their architecture and the programming languages needed to develop more AI tools. Businesses that are serious about AI should plan to either hire new employees or give existing ones the time and resources necessary to train in the skills needed to make AI projects succeed.

Additional resources

Getting started with business AI isn't as easy as simply spending money on an AI platform provider and spinning up some pre-built models and algorithms. There's a lot that goes into successfully adding AI to an organization.

At the heart of it all is good project planning. Adding artificial intelligence to a business, no matter how it will be used, is just like any business transformation initiative. Here is an outline of just one way to approach getting started with business AI.

Determine your AI objective. Figure out how AI can be used in your organization and to what end. By focusing on a narrower implementation with a specific goal, you can better allocate resources.

Identify what needs to happen to get there. Once you know where you want to be, you can figure out where you are and how to make the journey. This could include starting to sort existing data, gathering new data, hiring talent, and other pre-project steps.

Build a team. With an end goal in sight and a plan to get there, it's time to assemble the best team to make it happen. This can include current employees, but don't be afraid to go outside the organization to find the most qualified people. Also, be sure to allow existing staff to train so they have the opportunity to contribute to the project.

Choose an AI platform. Some AI platforms may be better suited to particular projects, but by and large they all offer similar products in order to compete with each other. Let your team give recommendations on which AI platform to choose; they're the experts who will be in the trenches.

Begin implementation. With a goal, team, and platform, you're ready to start working in earnest. This won't be quick: AI machines need to be trained, testing on subsets of data has to be performed, and lots of tweaks will need to be made before a business AI is ready to hit the real world.

Read the original here:
Artificial intelligence: Cheat sheet - TechRepublic

Global Artificial Intelligence in Big Data Analytics and IoT Report 2021: Data Capture, Information and Decision Support Services Markets 2021-2026 -…

DUBLIN, August 06, 2021--(BUSINESS WIRE)--The "Artificial Intelligence in Big Data Analytics and IoT: Market for Data Capture, Information and Decision Support Services 2021 - 2026" report has been added to ResearchAndMarkets.com's offering.

This report evaluates various AI technologies and their use relative to analytics solutions within the rapidly growing enterprise and industrial data arena. The report assesses emerging business models, leading companies, and solutions.

The report also analyzes how different forms of AI may be best used for problem-solving. The report also evaluates the market for AI in IoT networks and systems. The report provides forecasting for unit growth and revenue for both analytics and IoT from 2021 to 2026.

The Internet of Things (IoT) spans consumer, enterprise, industrial, and government market segments, each with distinct needs in terms of infrastructure, devices, systems, and processes. One thing they all have in common is that each produces massive amounts of data, most of it unstructured, requiring big data technologies for management.

Artificial Intelligence (AI) algorithms enhance the ability of big data analytics and IoT platforms to provide value to each of these market segments. The author sees three different types of IoT data: (1) raw (untouched and unstructured) data, (2) meta (data about data), and (3) transformed (value-added) data. AI will be useful in managing each of these data types in terms of identification, categorization, and decision making.
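
As an illustrative sketch (not drawn from the report), the three data types can be thought of as stages of a simple enrichment pipeline, where raw readings acquire metadata and are then transformed into value-added results; all names here are hypothetical:

```python
# Illustrative sketch: modeling the report's three IoT data types
# (raw, meta, transformed) as stages of a simple enrichment pipeline.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class IoTRecord:
    raw: list                                        # untouched, unstructured readings
    meta: dict = field(default_factory=dict)         # data about the data
    transformed: dict = field(default_factory=dict)  # value-added results

def enrich(record: IoTRecord, sensor_id: str) -> IoTRecord:
    # Meta: describe the raw payload without altering it.
    record.meta = {"sensor": sensor_id, "count": len(record.raw)}
    # Transformed: derive value-added information for decision making.
    record.transformed = {"avg": mean(record.raw), "peak": max(record.raw)}
    return record

r = enrich(IoTRecord(raw=[21.0, 22.5, 23.1]), sensor_id="temp-01")
```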

AI coupled with advanced big data analytics provides the ability to make raw data meaningful and useful as information for decision-making purposes. The use of AI for decision making in IoT and data analytics will be crucial for efficient and effective decision making, especially in the area of streaming data and real-time analytics associated with edge computing networks.

Real-time data will be a key value proposition for all use cases, segments, and solutions. The ability to capture streaming data, determine valuable attributes, and make decisions in real-time will add an entirely new dimension to service logic. In many cases, the data itself, and actionable information will be the service.
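
One simple pattern for making decisions on streaming data as it arrives is a sliding-window threshold check; the following is a hypothetical sketch of that idea, not something taken from the report:

```python
# Hypothetical sketch of edge-style streaming analytics: keep a small
# sliding window of recent readings and flag values that deviate sharply
# from the window average, so decisions happen as data arrives.
from collections import deque

def stream_decisions(readings, window=5, factor=1.5):
    recent = deque(maxlen=window)
    alerts = []
    for value in readings:
        if len(recent) == window and value > factor * (sum(recent) / window):
            alerts.append(value)  # actionable event, decided in real time
        recent.append(value)
    return alerts

# A steady signal with one spike: only the spike triggers a decision.
print(stream_decisions([10, 11, 10, 12, 10, 30, 11, 10]))  # → [30]
```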

Report Benefits:

Forecasts for AI in big data analytics 2021 to 2026

Identify the highest potential AI technology area opportunities

Understand AI strategies and initiatives of leading companies

Learn the optimal use of AI for smart predictive analytics in IoT data

Understand the AI in Big Data, Analytics, and IoT ecosystem and value chain

Identify opportunities for AI in Analytics for IoT and other unstructured data

Select Report Findings:

Global market for AI in big data and IoT as a whole will reach $27.3B by 2026

Embedded AI in support of IoT-connected things will reach $6.3B globally by 2026

AI makes IoT data 27% more efficient and analytics 48% more effective for industry apps

Overall market for AI in big data and IoT will be led by Asia Pac followed by North America

AI in industrial machines will reach $727M globally by 2026 with collaborative robot growth at 42.5% CAGR

AI in autonomous weapon systems will reach $203M globally by 2026 with AI in military robotics growing at 40.3% CAGR

Machine learning will become a key AI technology to realize the full potential of big data and IoT, particularly in edge computing platforms

Top three segments will be: (1) Data Mining and Automation, (2) Automated Planning, Monitoring, and Scheduling, and (3) Data Storage and Customer Intelligence

Key Topics Covered:

1.0 Executive Summary

2.0 Introduction

3.0 Overview

Artificial Intelligence and Machine Learning

AI Types

AI & ML Language

Artificial Intelligence Technology

AI and ML Technology Goal

AI Approaches

AI Tools

AI Outcomes

Neural Network and Artificial Intelligence

Deep Learning and Artificial Intelligence

Predictive Analytics and Artificial Intelligence

Internet of Things and Big Data Analytics

IoT and Artificial Intelligence

Consumer IoT, Big Data Analytics, and Artificial Intelligence

Industrial IoT, Big Data Analytics, and Machine Learning

Artificial Intelligence and Cognitive Computing

Transhumanism or H+ and Artificial Intelligence

Rise of Analysis of Things (AoT)

Supervised vs. Unsupervised Learning

AI as a New Form of UI

4.0 AI Technology in Big Data and IoT

Machine Learning Everywhere

Machine Learning APIs and Big Data Development

Phases of Machine Learning APIs

Machine Learning API Challenges

Top Machine Learning APIs

IBM Watson API

Microsoft Azure Machine Learning API

Google Prediction API

Amazon Machine Learning API

BigML

AT&T Speech API

Wit.ai

AlchemyAPI

Diffbot

PredictionIO

4.0 Machine Learning API in the General Application Environment

Enterprise Benefits of Machine Learning

Machine Learning in IoT Data

Ultra Scale Analytics and Artificial Intelligence

Rise of Algorithmic Business

Cloud Hosted Machine Intelligence

Contradiction of Machine Learning

Value Chain Analysis

5.0 AI Technology Application and Use Case

Intelligence Performance Monitoring

Infrastructure Monitoring

Generating Accurate Models

Recommendation Engine

Blockchain and Crypto Technologies

Enterprise Application

Contextual Awareness

Customer Feedback

Self-Driving Car

Fraud Detection System

Personalized Medicine and Healthcare Service

Predictive Data Modelling

Smart Machines

Cybersecurity Solutions

Autonomous Agents

Intelligent Assistant

Intelligent Decision Support System

Risk Management

Data Mining and Management

Intelligent Robotics

Financial Technology

Machine Intelligence

6.0 AI Technology Impact on Vertical Market

Enterprise Productivity Gain

Digital Twinning and Physical Asset Security

IT Process Efficiency Increase

AI to Replace Human Form Work

The rest is here:
Global Artificial Intelligence in Big Data Analytics and IoT Report 2021: Data Capture, Information and Decision Support Services Markets 2021-2026 -...

Clarkson Electrical and Computer Engineering Team Publish Paper on Artificial intelligence for Resource-Constrained Devices – Clarkson University News

Figure: An overview of the deep learning model deployment process on resource-constrained devices, which are widely used in modern autonomous robotics applications.

Due to the success of research on deep learning methods, the last ten years have seen an explosion in the development of artificial intelligence (AI) techniques for a wide variety of applications. Deep learning broadly refers to a class of algorithms that mimic the structure of the human brain and can be used to build systems that learn from previous data. Techniques based on deep learning have allowed ever larger and more sophisticated machine learning models to be built and deployed, allowing a very rich set of complex problems to be solved. However, deep learning is computationally expensive, which severely limits its use on resource-constrained devices like single-board computers. Clarkson University Professors of Electrical and Computer Engineering Faraz Hussain and James Carroll, along with Ph.D. student M. G. Sarwar Murshed, have been working on designing novel techniques for deploying deep-learning-based intelligent systems in resource-constrained settings by optimizing models that can be used on edge devices.

Based on research supported by Badger Inc., they recently published a paper entitled "Resource-aware On-device Deep Learning for Supermarket Hazard Detection" in the 19th IEEE International Conference on Machine Learning and Applications (ICMLA '20), which demonstrated a method for deploying deep learning models on small devices such as the Coral Dev Board, the Jetson Nano, and the Raspberry Pi. The paper describes a new dataset of images of supermarket floor hazards and a new deep learning model named EdgeLite that automatically identifies such hazards, specifically intended for use in extremely resource-constrained settings. EdgeLite processes all images locally, allowing it to monitor supermarket floors in real time. By processing all data locally on a resource-constrained device, EdgeLite helps preserve the privacy of the data.
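
The paper itself is not the source of the following; it is a generic sketch of 8-bit post-training quantization, one standard technique for shrinking deep learning models so they fit on resource-constrained devices like those named above. EdgeLite's actual optimizations may differ:

```python
# Generic sketch of 8-bit affine post-training quantization, a common
# technique for fitting deep learning models onto resource-constrained
# devices. Illustrative only; this is not EdgeLite's method.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # map the float range onto 0..255
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]      # approximate reconstruction

w = [-0.42, 0.0, 0.17, 0.91]
q, scale, lo = quantize(w)
restored = dequantize(q, scale, lo)
# Each restored weight is within half a quantization step of the original,
# while storage drops from 32-bit floats to 8-bit integers.
```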

A comparison of EdgeLite with six state-of-the-art deep learning models (MobileNetV1, MobileNetV2, InceptionNet V1, InceptionNet V2, ResNet V1, and GoogLeNet) for supermarket hazard detection, when deployed on the Coral Dev Board, the Raspberry Pi, and the Nvidia Jetson TX2, showed it to have the highest accuracy with comparable resource requirements in terms of memory, inference time, and energy.

Further, they have successfully demonstrated how to deploy EdgeLite on autonomous robots. This was done using the Robot Operating System (ROS), a widely-used middleware platform for building autonomous robot applications. Using EdgeLite, a robot can identify hazardous floors by analyzing the image data without the help of additional hardware such as Lidar or other sensors, which can help a robot navigate through the supermarket aisles and report potential hazards, thus significantly improving safety.

Continued here:
Clarkson Electrical and Computer Engineering Team Publish Paper on Artificial intelligence for Resource-Constrained Devices - Clarkson University News

Biggest companies trending on artificial intelligence in Q2 2021 – Verdict

GlobalData research has found the top companies trending in artificial intelligence based on their performance and engagement online.

Using research from GlobalDatas Influencer platform, Verdict has named five of the top companies trending on artificial intelligence in Q2 2021.

Alphabet Inc is a holding company under which a number of companies operate with Google being the largest among them. Google offers multiple services to its users such as YouTube, Chrome, Maps, AdSense, and Android. The company also owns other subsidiary companies such as venture capital investment arm GV, Verily Life Sciences, a research organisation, and Calico Life Sciences, a research and development company.

Alphabet is headquartered in Mountain View, California, US. Google's plans to double the research staff for its artificial intelligence (AI) ethics team, a research paper released by Google Research to understand the spatio-temporal frames in videos, and the detailed mapping of brain connections by Google and Harvard formed some of the major discussions that took place around Alphabet on Twitter in Q2 2021.

Mario Pawlowski, the CEO of iTrucker, an online marketplace platform for the trucking industry, shared an article on Google's plans to double its AI ethics research staff, following controversies surrounding the group's research and a rise in staff attrition. The resulting hiring push is expected to grow the responsible AI team to 200 researchers. Alphabet has also pledged to boost the operating budget of the team, which is working on preventing discrimination and other issues with AI.

Amazon is a technology company dealing in e-commerce through its portal Amazon.com. It also offers AI services through its Amazon Web Services (AWS) division, alongside other offerings such as video and music streaming through Prime Video and Prime Music respectively. Among the AI services the company offers are Amazon Lex, which enables the development of conversational interfaces; Amazon Polly, a cloud-based text-to-speech service; and Amazon Rekognition, a computer vision platform.

Amazon.com is headquartered in Seattle, Washington, US. Amazon's cashier-less stores, the automated packaging adopted by the company, and the launch of its machine learning summer school were some of the key discussions that took place on Twitter over Amazon in Q2 2021.

Evan Kirstel, a B2B tech influencer, shared a video showing the operations of the cashier-less Amazon Go and Amazon Fresh stores that use AI technology. Customers install an app and scan it at the entrance; every item they pick up in the Amazon Go store is then charged to their account after they leave. Amazon has opened 30 completely unmanned Amazon Go stores in New York, San Francisco, Chicago, and Seattle.
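
As a toy illustration of the "just walk out" flow described above (a hypothetical sketch, not Amazon's actual system), the logic amounts to tracking what each shopper picks up and charging their account on exit:

```python
# Toy illustration of a cashier-less store flow: the store tracks what
# each shopper picks up and charges their account when they walk out.
# Hypothetical sketch only; not Amazon's implementation.

class CashierlessStore:
    def __init__(self, prices):
        self.prices = prices          # item -> price
        self.carts = {}               # shopper -> picked-up items

    def scan_in(self, shopper):
        self.carts[shopper] = []      # app scanned at the entrance

    def pick_up(self, shopper, item):
        self.carts[shopper].append(item)   # detected by in-store sensors

    def put_back(self, shopper, item):
        self.carts[shopper].remove(item)

    def walk_out(self, shopper):
        # Charged automatically on leaving; no checkout line.
        return sum(self.prices[i] for i in self.carts.pop(shopper))

store = CashierlessStore({"milk": 2.50, "bread": 1.80})
store.scan_in("alice")
store.pick_up("alice", "milk")
store.pick_up("alice", "bread")
store.put_back("alice", "bread")
total = store.walk_out("alice")   # → 2.5
```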

Microsoft is a technology company that develops computer software and devices and provides cloud-based solutions and emerging technologies such as AI, the Internet of Things (IoT), and mixed reality. The company offers AI services through Microsoft AI, which includes an AI platform that provides a framework for developing AI-based solutions and connects companies with partners to integrate AI into their organisations.

Microsoft is headquartered in Redmond, Washington, US. Microsoft winning the US Army's contract to supply augmented reality (AR) headsets, the company's investments in OpenAI, and the AI debugging and visualisation tool TensorWatch being made open source were some of the discussions that took place on Twitter over Microsoft in Q2 2021.

Dr. Ganapathi Pulipaka, a chief data scientist and SAP technology architect at management consulting and technology services company Accenture, shared an article on Microsoft winning a contract worth up to $21.9bn over ten years from the US Army to supply its HoloLens AR headsets. The company will provide over 120,000 headsets to the US Army under the contract. HoloLens provides mixed reality using AI, enabling its users to view holograms overlaid on the actual environment and to communicate with other team members using simple hand and voice gestures.

Intel Corporation is a technology company that is involved in manufacturing, developing, and supplying microprocessors, motherboards, integrated circuits, flash memory, and network interface controllers. Intel also offers AI and deep learning solutions to develop and deploy AI applications apart from processors equipped with AI software.

Intel is headquartered in Santa Clara, California, US. Intel and L&T's AI-based smart parking solution, the realistic GTA V graphics delivered by the company using machine learning, and the partnership between Intel and John Deere to develop an AI-based programme to detect manufacturing defects were the popular discussions surrounding Intel on Twitter in Q2 2021.

Nige Willson, founder of awaken AI, an AI advisory and consultancy company, shared an article on an AI-based outdoor smart parking solution developed by Intel and L&T Technology Services, an engineering services company. The solution utilises the OpenVINO toolkit, running on Intel Xeon Scalable processors and Intel Movidius vision processing units, to provide a smart parking experience in public areas, airports, stadiums, and offices.

Nvidia Corp is a multinational technology company serving the gaming, mobile computing, and automotive markets. It designs graphics processing units (GPUs) and systems-on-a-chip (SoCs). In addition, it provides deep learning and AI solutions such as purpose-built AI supercomputers, an open AI car computing platform, and embedded AI and deep learning for intelligent devices.

Nvidia Corp is headquartered in Santa Clara, California, US. The launch of Morpheus, an AI-powered app framework for cybersecurity, the partnership between Plotly and Nvidia to merge Dash, an open-source framework, with RAPIDS, a suite of software libraries, and the power of automation were the major discussions that took place on Twitter over Nvidia Corp in Q2 2021.

Faisal Khan, a tech blogger, crypto evangelist, and forex trader, shared an article about Nvidia's launch of the AI-powered app framework Morpheus. The cloud-native app framework utilises AI and machine learning to identify and neutralise cyber threats and attacks. Morpheus can identify malware and prevent data leaks and phishing attempts.

See the original post:
Biggest companies trending on artificial intelligence in Q2 2021 - Verdict

OHSU is part of national institute to advance artificial intelligence in aging – OHSU News

Oregon Health & Science University is part of a new National Science Foundation-funded institute to develop artificial intelligence systems to help people live independently as they age. (OHSU)

Oregon Health & Science University is one of five universities nationwide to form a new National Science Foundation-funded institute to design and build intelligent systems to help people age in place.

The five-year, $20 million grant will support the creation of an AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups, or AI-Caring. The institute will develop artificial intelligence systems that work for aging adults, including those diagnosed with mild cognitive impairment, and their caregivers.

Most older adults prefer to remain in their own homes. But safety concerns, medication schedules, and isolation can all make it difficult for them to do so.

The work builds on a model OHSU established more than a decade ago through its Oregon Center for Aging and Technology, or ORCATECH. In this research, participants agree to permit the ORCATECH system to collect unique life data in their homes, using an array of sensors to assess changes in gait, sleep and overall activity. It also includes a MedTracker electronic pill box as well as a scale to measure weight, body fat and pulse.
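
A hypothetical sketch of the kind of analysis such in-home data enables: comparing a recent window of daily gait-speed readings against an earlier baseline to flag a sustained decline. The function names and thresholds below are invented, not ORCATECH's:

```python
# Hypothetical sketch of trend detection on in-home sensor data: flag a
# sustained decline by comparing the recent average of daily gait-speed
# readings (m/s) against the individual's own earlier baseline.
# Thresholds are invented; this is not ORCATECH's algorithm.
from statistics import mean

def gait_decline(speeds, baseline_days=7, recent_days=7, drop=0.10):
    if len(speeds) < baseline_days + recent_days:
        return False                       # not enough data yet
    baseline = mean(speeds[:baseline_days])
    recent = mean(speeds[-recent_days:])
    # Flag when recent walking speed falls more than `drop` (10%)
    # below the baseline.
    return recent < (1 - drop) * baseline

steady = [1.0] * 14                 # stable gait: no flag
slowing = [1.0] * 7 + [0.85] * 7    # 15% slowdown: flagged
```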

"Our project is focused on assisting people to age independently, and in particular people who might develop cognitive impairment later in life," said OHSU site leader Jeffrey Kaye, M.D., director of the OHSU Layton Aging & Alzheimer's Disease Center. "ORCATECH has unique datasets that will allow the new institute to develop and create advanced artificial intelligence algorithms to help people age in place."

OHSU has developed terabytes of privacy-protected data that will be useful for the new institute.

"We're very honored and pleased to be partners in this national effort," Kaye said. "We look forward to collaborating with other investigators, who will help advance our home assessment platform and the artificial intelligence. The goal is to make better diagnoses and ultimately mitigate disabilities in aging."

Given the staggering costs of long-term care services for people who can no longer live independently, estimated to top $1 trillion by 2050 to care for those with Alzheimer's disease, Kaye said the new institute is an important step toward developing solutions.

"Our goal is to create systems that help people take care of people," said Beth Mynatt, director of the Institute for People and Technology at Georgia Tech, the lead institution for the new project. "Care can be a complicated task, requiring coordination and decision-making across family members managing day-to-day demands."

Aside from OHSU and Georgia Tech, the institute will include faculty from Carnegie Mellon University, Oregon State University and the University of Massachusetts Lowell. Amazon and Google are industry sponsors.

Read more here:
OHSU is part of national institute to advance artificial intelligence in aging - OHSU News