6 Predictions for the Future of Artificial Intelligence in 2020

The business world's enthusiasm for artificial intelligence has been building toward a fever pitch in the past few years, but those feelings could get a bit more complicated in 2020.

Even as investment, research publications, and job demand in the field continued to grow through 2019, technologists are starting to come to terms with limits on what AI can realistically achieve. Meanwhile, a growing movement is grappling with the technology's ethics and social implications, and widespread business adoption remains stubbornly low.

As a result, companies and organizations are increasingly pushing tools that commoditize existing predictive and image recognition machine learning, making the tech easier for non-coders to understand and use. Emerging breakthroughs, like the ability to create synthetic data and open-source language processors that require less training than ever, are aiding these efforts.

At the same time, uses of AI for nefarious ends like deepfakes and the mass production of spam are still in their early stages, and troubling reports indicate such dystopia may become more real in 2020.

Here are six predictions for the tech in this new year:

A high-profile research organization called OpenAI grabbed headlines in early 2019 when it proclaimed that its latest news-copy-generating machine learning software, GPT-2, was too dangerous to release publicly in full. Researchers worried that the passably realistic-sounding text GPT-2 generated would be used to mass-produce fake news.

GPT-2 is the most sophisticated example of a new approach to language generation. It starts with a base model trained on a massive dataset; in GPT-2's case, text from more than 8 million web pages, used to learn the general mechanics of how language works. That foundational system can then be fine-tuned on a relatively small, more specific dataset to mimic a certain style for uses like predictive text, chatbots, or even creative-writing aids.
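
To make that pretrain-then-fine-tune pattern concrete, here is a minimal sketch assuming the open-source Hugging Face transformers library and its public "gpt2" checkpoint (the article does not name any tooling, so both are illustrative choices); the tiny in-memory corpus is a hypothetical stand-in for a domain-specific dataset:

```python
# Sketch: start from a GPT-2 model pretrained on a massive web corpus,
# then fine-tune it on a small, style-specific dataset.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # the pretrained "base program"

# Hypothetical domain corpus: a handful of examples in the target style.
corpus = [
    "Breaking: markets rallied today after the central bank's announcement.",
    "In local news, the city council approved a new transit plan.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a few passes over the small dataset
    for text in corpus:
        batch = tokenizer(text, return_tensors="pt")
        # Language-modeling loss: predict each next token in the sample.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# The fine-tuned model now completes prompts in the target style.
model.eval()
prompt = tokenizer("Breaking:", return_tensors="pt")
out = model.generate(**prompt, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The heavy lifting, learning the general mechanics of language, is already baked into the downloaded weights; the loop only nudges the model toward the target style, which is why the second dataset can be comparatively small.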

OpenAI ended up publishing the full version of the model in November. The episode called attention to the exciting, if sometimes unsettling, potential of a growing subfield of AI called natural language processing: the ability to parse and produce natural-sounding human language.

The breakthrough in resources and accessibility is analogous to a milestone in the subfield of computer vision around 2012, one widely credited with spawning the surge in image and facial recognition AI of the last few years. Some researchers believe natural language tech is poised for a similar boom in the next year or so. "It's now starting to emerge," Tsung-Hsien Wen, chief technology officer at a chatbot startup called PolyAI, said of this possibility.

Ask any data scientist or company toiling over a nascent AI strategy what their biggest headache is, and the answer will likely involve data. Machine learning systems perform only as well as the data on which they're trained, and they require it at massive scale.

One reprieve from this insatiable need may come from an unexpected place: an emerging class of machine learning model currently best known for its role in deepfakes and AI-generated art. Patent applications indicate that brands explored all kinds of uses for this tech, known as a generative adversarial network (GAN), in 2019. But one of its unsung, yet potentially most impactful, talents is its ability to pad out a dataset with mass-produced fake data that is similar to, but slightly varied from, the original material.

"What happens here is that you try to complement a set of data with another kind of data that may not be exactly what you've observed, that could be made up, but that are trustworthy enough to be used in a machine learning environment," said Gartner analyst Erick Brethenoux.
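
As an illustration of that augmentation idea, below is a minimal GAN sketch, assuming PyTorch as the framework (the article names none); the toy two-dimensional dataset and network sizes are invented for the example. A generator learns to mimic a small "real" dataset while a discriminator tries to tell the two apart, and the trained generator's outputs are appended to the original data as synthetic rows:

```python
# Sketch: train a tiny GAN on a small dataset, then use the generator
# to pad the dataset with synthetic samples that resemble the original.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy "real" dataset: 512 two-dimensional points around (2.0, -1.0).
real_data = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: learn to score real rows high, generated rows low.
    fake = G(torch.randn(64, 8)).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to produce rows the discriminator scores as real.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Pad the original dataset with "similar but slightly varied" synthetic rows.
synthetic = G(torch.randn(256, 8)).detach()
augmented = torch.cat([real_data, synthetic])
print(augmented.shape)  # torch.Size([768, 2])
```

In practice the synthetic rows would be validated against the real distribution before being used for training, which is the "trustworthy enough" test Brethenoux alludes to.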
