Category Archives: Artificial Intelligence

Artificial intelligence is helping us talk to animals (yes, really) – Wired.co.uk

Each time any of us uses a tool such as Gmail, where there's a powerful agent to help correct our spelling and suggest sentence endings, there's an AI machine in the background, steadily getting better and better at understanding language. Sentence structures are parsed, word choices understood, idioms recognised.

That exact capability could, in 2020, grant us the ability to speak with other large animals. Really. Maybe even faster than brain-computer interfaces will take the stage.

Our AI-enhanced abilities to decode languages have reached a point where they could start to parse languages not spoken by anyone alive. Recently, researchers from MIT and Google applied these abilities to the ancient scripts Linear B and Ugaritic (a precursor of Hebrew) with reasonable success (no luck so far with the older, as-yet undeciphered Linear A).

First, word-to-word relations for a specific language are mapped, using vast databases of text. The system searches texts to see how often each word appears next to every other word. This pattern of appearances is a unique signature that defines the word in a multidimensional parameter space. Researchers estimate that all languages can be described using about 600 independent dimensions of relationships, where each word-word relationship can be seen as a vector in this space. This vector acts as a powerful constraint on how the word can appear in any translation the machine comes up with.

These vectors obey some simple rules. For example: king − man + woman = queen. Any sentence can be described as a set of vectors that in turn form a trajectory through the word space.
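
As a quick illustration, the sketch below reproduces this vector arithmetic with an off-the-shelf embedding model. It is a minimal sketch assuming gensim and one of its standard pretrained GloVe downloads; nothing here comes from the research described in the article.

```python
# Hedged sketch: word-vector analogy with a pretrained GloVe model.
import gensim.downloader as api

# One of gensim's standard downloadable models (~128 MB), used here
# purely for illustration.
vectors = api.load("glove-wiki-gigaword-100")

# Each word is a point in a ~100-dimensional space; relationships become
# vector offsets, so "king - man + woman" lands near "queen".
result = vectors.most_similar(positive=["king", "woman"],
                              negative=["man"], topn=1)
print(result)  # [('queen', ...)]
```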

These relationships persist even when a language has multiple words for related concepts: the famed near-100 words Inuits have for snow will all sit in similar regions of the space, because each time someone talks about snow, it will be in a similar linguistic context.

Take a leap. Imagine that whale songs are communicating in a word-like structure. Then, what if the relationships that whales have for their ideas have dimensional relationships similar to those we see in human languages?

That means we should be able to map key elements of whale songs to dimensional spaces, and thus to comprehend what whales are talking about and perhaps to talk to and hear back from them. Remember: some whales have brain volumes three times larger than adult humans, larger cortical areas, and lower but comparable neuron counts. African elephants have three times as many neurons as humans, but in very different distributions than are seen in our own brains. It seems reasonable to assume that the other large mammals on earth, at the very least, have thinking and communicating and learning attributes we can connect with.

What are the key elements of whale songs and of elephant sounds? Phonemes? Blocks of repeated sounds? Tones? Nobody knows yet, but at least the journey has begun. Projects such as the Earth Species Project aim to put the tools of our time, particularly artificial intelligence and all that we have learned in using computers to understand our own languages, to the awesome task of hearing what animals have to say to each other, and to us.

There is something deeply comforting in the thought that AI language tools could do something so beautiful, going beyond completing our emails and putting ads in front of us to knitting together all thinking species. That, we perhaps can all agree, is a better and perhaps nearer-term ideal to reach than brain-computer communications. The beauty of communicating with other species will then be joined to the market appeal of talking to our pet dogs. (Cats may remain beyond reach.)

Mary Lou Jepsen is the founder and CEO of Openwater. John Ryan, her husband, is a former partner at Monitor Group




Artificial Intelligence Identifies Previously Unknown Features Associated with Cancer Recurrence – Imaging Technology News

December 27, 2019: Artificial intelligence (AI) technology developed by the RIKEN Center for Advanced Intelligence Project (AIP) in Japan has successfully found features in pathology images from human cancer patients, without annotation, that could be understood by human doctors. Further, the AI identified features relevant to cancer prognosis that were not previously noted by pathologists, leading to higher accuracy in predicting prostate cancer recurrence compared to pathologist-based diagnosis. Combining the predictions made by the AI with predictions by human pathologists led to an even greater accuracy.

According to Yoichiro Yamamoto, M.D., Ph.D., the first author of the study published in Nature Communications, "This technology could contribute to personalized medicine by making highly accurate prediction of cancer recurrence possible by acquiring new knowledge from images. It could also contribute to understanding how AI can be used safely in medicine by helping to resolve the issue of AI being seen as a 'black box.'"

The research group led by Yamamoto and Go Kimura, in collaboration with a number of university hospitals in Japan, adopted an approach called "unsupervised learning." As long as humans teach the AI, it cannot acquire knowledge beyond what is currently known. Rather than being "taught" medical knowledge, the AI was asked to learn using unsupervised deep neural networks, known as autoencoders, without being given any medical knowledge. The researchers developed a method for translating the features found by the AI, which are initially only numbers, into high-resolution images that can be understood by humans.
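
To make the unsupervised setup concrete, here is a minimal sketch of a patch autoencoder in PyTorch. The layer sizes, patch dimensions, and training step are illustrative assumptions, not the RIKEN group's actual architecture; the point is that the latent features are learned purely by reconstructing unlabeled patches.

```python
# Minimal autoencoder sketch (assumed architecture, not the paper's).
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, patch_pixels=64 * 64 * 3, latent_dim=100):
        super().__init__()
        # Encoder compresses an image patch into a small feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(patch_pixels, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder reconstructs the patch from that vector. Minimizing
        # reconstruction error forces the latent features to capture the
        # structure of the tissue, with no diagnostic labels involved.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, patch_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

patches = torch.rand(32, 64 * 64 * 3)  # stand-in for real pathology patches
opt.zero_grad()
recon, features = model(patches)
loss = nn.functional.mse_loss(recon, patches)
loss.backward()
opt.step()
# `features` is the learned representation later correlated with recurrence.
```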

To perform this feat, the group acquired 13,188 whole-mount pathology slide images of the prostate from Nippon Medical School Hospital (NMSH). The amount of data was enormous, equivalent to approximately 86 billion image patches (sub-images divided for deep neural networks), and the computation was performed on AIP's powerful RAIDEN supercomputer.

The AI learned from 11 million image patches of pathology images without diagnostic annotation. The features it found included cancer diagnostic criteria that have been used worldwide, based on the Gleason score, but also features involving the stroma (the connective tissue supporting an organ) in non-cancer areas that experts were not aware of. In order to evaluate these AI-found features, the research group verified the performance of recurrence prediction using the remaining cases from NMSH (internal validation). The group found that the features discovered by the AI were more accurate (AUC=0.820) than predictions based on the human-established cancer criteria developed by pathologists, the Gleason score (AUC=0.744). Furthermore, combining the AI-found features with the human-established criteria predicted recurrence more accurately than either method alone (AUC=0.842). The group confirmed the results using another dataset of 2,276 whole-mount pathology images (10 billion image patches) from St. Marianna University Hospital and Aichi Medical University Hospital (external validation).
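
As a rough illustration of this kind of evaluation, the sketch below fits predictors on simulated stand-in data and compares AUCs with scikit-learn. The features, data, and the way the two feature sets are combined are all assumptions for illustration; the study's actual pipeline is described in the Nature Communications paper.

```python
# Hedged sketch: comparing AUCs of two feature sets and their combination.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
ai_features = rng.normal(size=(n, 10))      # stand-in for autoencoder features
gleason = rng.integers(6, 11, size=(n, 1))  # stand-in for Gleason scores

# Simulate recurrence driven by both signal sources, so each predictor
# carries some, but not all, of the information:
risk = ai_features[:, 0] + 0.8 * (gleason[:, 0] - 8)
recurred = (risk + rng.normal(scale=1.0, size=n) > 0).astype(int)

def auc_of(features):
    clf = LogisticRegression(max_iter=1000).fit(features, recurred)
    return roc_auc_score(recurred, clf.predict_proba(features)[:, 1])

print("AI features alone:", round(auc_of(ai_features), 3))
print("Gleason alone:    ", round(auc_of(gleason), 3))
# Combining both feature sets, as in the study, raises the AUC further:
print("Combined:         ", round(auc_of(np.hstack([ai_features, gleason])), 3))
```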

"I was very happy," said Yamamoto, "to discover that the AI was able to identify cancer on its own from unannotated pathology images. I was extremely surprised to see that AI found features that can be used to predict recurrence that pathologists had not identified."

He continued, "We have shown that AI can automatically acquire human-understandable knowledge from diagnostic annotation-free histopathology images. This 'newborn' knowledge could be useful for patients by allowing highly-accurate predictions of cancer recurrence. What is very nice is that we found that combining the AI's predictions with those of a pathologist increased the accuracy even further, showing that AI can be used hand-in-hand with doctors to improve medical care. In addition, the AI can be used as a tool to discover characteristics of diseases that have not been noted so far, and since it does not require human knowledge, it could be used in other fields outside medicine."

For more information: www.riken.jp/en/research/labs/aip/


Quantum leap: Why we first need to focus on the ethical challenges of artificial intelligence – Economic Times

By Vivek Wadhwa

AI has the potential to be as transformative to the world as electricity, by helping us understand the patterns of information around us. But it is not close to living up to the hype. The super-intelligent machines and runaway AI that we fear are far from reality; what we have today is a rudimentary technology that requires lots of training. What's more, the phrase "artificial intelligence" might be a misnomer, because human intelligence and spirit amount to much more than what bits and bytes can encapsulate.

I encourage readers to go back to the ancient wisdoms of their faith to understand the role of the soul and the deeper self. This is what shapes our consciousness and makes us human, what we are always striving to evolve and perfect. Can this be uploaded to the cloud or duplicated with computer algorithms? I don't think so.

What about the predictions that AI will enable machines to have human-like feelings and emotions? This, too, is hype. Love, hate and compassion aren't things that can be codified. That is not to say that a machine interaction can't seem human; we humans are gullible, after all. According to Amazon, more than 1 million people asked their Alexa-powered devices to marry them in 2017 alone. I doubt those marriages, should Alexa agree, would last very long!

Today's AI systems do their best to replicate the functioning of the human brain's neural networks, but their emulations are very limited. They use a technique called deep learning: after you tell a machine exactly what you want it to learn and provide it with clearly labelled examples, it analyses the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
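
A toy example of that labelled-examples workflow, sketched with scikit-learn's small neural-network classifier; the data and labels are invented for illustration.

```python
# Hedged sketch of supervised learning from clearly labelled examples.
from sklearn.neural_network import MLPClassifier

# Labelled examples: [height_cm, weight_kg] -> 0 = cat, 1 = dog (toy data)
X = [[25, 4], [30, 5], [60, 25], [70, 30]]
y = [0, 0, 1, 1]

clf = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X, y)

# The model only interpolates the patterns it was shown; give it more
# examples and it becomes more useful, exactly as the passage says.
print(clf.predict([[28, 4.5], [65, 28]]))  # expect [0, 1]
```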

Herein lies a problem, though: an AI system is only as good as the data it receives. It is able to interpret those data only within the narrow confines of the supplied context. It doesn't understand what it has analysed, so it is unable to apply its analysis to other scenarios. And it can't distinguish causation from correlation.

AI shines in performing tasks that match patterns in order to obtain objective outcomes. Examples of what it does well include playing chess, driving a car on a street and identifying a cancer lesion in a mammogram. These systems can be incredibly helpful extensions of how humans work, and with more data, the systems will keep improving. Although an AI machine may best a human radiologist in spotting cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists. And it won't be able to empathise with a patient in the way that a doctor does. This is where AI presents its greatest risk, and what we really need to worry about: the use of AI in tasks that may have objective outcomes but incorporate what we would normally call judgement. Some such tasks exercise much influence over people's lives. Granting a loan, admitting a student to a university, or deciding whether children should be separated from their birth parents due to suspicions of abuse all fall into this category. Such judgements are highly susceptible to human biases, but they are biases that only humans themselves have the ability to detect.

And AI throws up many ethical dilemmas around how we use technology. It is being used to create killing machines for the battlefield, with drones that can recognise faces and attack people. China is using AI for mass surveillance, and wielding its analytical capabilities to assign each citizen a social credit score based on their behaviour. In America, AI is mostly being built by white people and Asians, so it amplifies their inbuilt biases and misreads African Americans. It can lead to outcomes that prefer males over females for jobs and give men higher loan amounts than women. One of the biggest problems we are facing with Facebook and YouTube is that you are shown more and more of the same thing based on your past views, which creates filter bubbles and a hotbed of misinformation. That's all thanks to AI.

Rather than worrying about super-intelligence, we need to focus on the ethical issues around how we should be using this technology. Should it be used to recognise the faces of students who are protesting against the Citizenship (Amendment) Act? Should India install cameras and systems like China has? These are the types of questions the country needs to be asking.

The writer is a distinguished fellow and professor at Carnegie Mellon University's College of Engineering, Silicon Valley.

This story is part of the 'Tech that can change your life in the next decade' package.


In 2020, let's stop AI ethics-washing and actually do something – MIT Technology Review

Last year, just as I was beginning to cover artificial intelligence, the AI world was getting a major wake-up call. There were some incredible advancements in AI research in 2018, from reinforcement learning to generative adversarial networks (GANs) to better natural-language understanding. But the year also saw several high-profile illustrations of the harm these systems can cause when they are deployed too hastily.

A Tesla crashed on Autopilot, killing the driver, and a self-driving Uber crashed, killing a pedestrian. Commercial face recognition systems performed terribly in audits on dark-skinned people, but tech giants continued to peddle them anyway, to customers including law enforcement. At the beginning of this year, reflecting on these events, I wrote a resolution for the AI community: Stop treating AI like magic, and take responsibility for creating, applying, and regulating it ethically.

In some ways, my wish did come true. In 2019, there was more talk of AI ethics than ever before. Dozens of organizations produced AI ethics guidelines; companies rushed to establish responsible AI teams and parade them in front of the media. It's hard to attend an AI-related conference anymore without part of the programming being dedicated to an ethics-related message: How do we protect people's privacy when AI needs so much data? How do we empower marginalized communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?


But talk is just that; it's not enough. For all the lip service paid to these issues, many organizations' AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We're falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.

Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advancements made in GANs in 2018 have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people's belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, but organizations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain. The AI industry is creating an entirely new class of hidden laborers (content moderators, data labelers, transcribers) who toil away in often brutal conditions.

But not all is dark and gloomy: 2019 was the year of the greatest grassroots pushback against harmful AI from community groups, policymakers, and tech employees themselves. Several cities, including San Francisco and Oakland, California, and Somerville, Massachusetts, banned public use of face recognition, and proposed federal legislation could soon ban it from US public housing as well. Employees of tech giants like Microsoft, Google, and Salesforce also grew increasingly vocal against their companies' use of AI for tracking migrants and for drone surveillance.

Within the AI community, researchers also doubled down on mitigating AI bias and reexamined the incentives that lead to the field's runaway energy consumption. Companies invested more resources in protecting user privacy and combating deepfakes and disinformation. Experts and policymakers worked in tandem to propose thoughtful new legislation meant to rein in unintended consequences without dampening innovation. At the largest annual gathering in the field this year, I was both touched and surprised by how many of the keynotes, workshops, and posters focused on real-world problems, both those created by AI and those it could help solve.

So here is my hope for 2020: that industry and academia sustain this momentum and make concrete bottom-up and top-down changes that realign AI development. While we still have time, we shouldn't lose sight of the dream animating the field. Decades ago, humans began the quest to build intelligent machines so they could one day help us solve some of our toughest challenges.

AI, in other words, is meant to help humanity prosper. Let's not forget.



The Power Of Purpose: How We Counter Hate Used Artificial Intelligence To Battle Hate Speech Online – Forbes

We Counter Hate

One of the most fascinating examples of social innovation I've been tracking recently was the We Counter Hate platform, by Seattle-based agency Wunderman Thompson Seattle (formerly POSSIBLE), which sought to reduce hate speech on Twitter by turning retweets of hateful messages into donations for a good cause.

Here's how it worked: Using machine learning, it first identified hateful speech on the platform. A human moderator then selected the most offensive and most dangerous tweets and attached an undeletable reply, which informed recipients that if they retweeted the message, a donation would be committed to an anti-hate group. In a beautiful twist, this non-profit was Life After Hate, a group that helps members of extremist groups leave and transition to mainstream life.

Unfortunately (and ironically), on the very day I reached out to the team, Twitter decided to allow users to hide replies in their feeds in an effort to empower people faced with bullying and harassment. This eliminated the reply function that was the main mechanism giving #WeCounterHate its power, and that had enabled it to remove more than 20M potential hate speech impressions.

Undeterred, I caught up with some members of the core team, Shawn Herron, Jason Carmel and Matt Gilmore, to find out more about their journey.

(From left to right) Shawn Herron, Experience Technology Director @ Wunderman Thompson; Matt Gilmore, Creative Director @ Wunderman Thompson; Jason Carmel, Chief Data Officer @ Wunderman Thompson

Afdhel Aziz: Gentlemen, welcome. How did the idea for WeCounterHate come about?

Shawn Herron: It started when we caught wind of what the citizens of the town of Wunsiedel, Germany were doing to combat the extremists who descended on their town every year to hold a rally and march through the streets. The townspeople had devised a peaceful way to upend the extremists' efforts by turning their hateful march into an involuntary walk-a-thon that benefitted EXIT Deutschland, an organization that helps people escape extremist groups. For every meter the neo-Nazis marched, 10 euros would be donated to EXIT Deutschland. The question became: how can we scale something like that so anyone, anywhere, could have the ability to fight against hate in a meaningful way?

Jason Carmel: We knew that, to create scale, it had to be digital in nature and Twitter seemed like the perfect problem in need of a solution. We figured if we could reduce hate on a platform of that magnitude, even a small percentage, it could have a big impact. We started by developing an innovative machine-learning and natural-language processing technology that could identify and classify hate speech.
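
As a rough sketch of what such a classifier might look like, here is a minimal TF-IDF text-classification pipeline in scikit-learn. WeCounterHate's actual model and training data are not public, so everything below, from the toy tweets to the moderation step, is an assumption for illustration.

```python
# Hedged sketch: flagging candidate tweets for human moderator review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled tweets: 1 = flag for moderator review, 0 = ignore.
tweets = [
    "we should get rid of those people",   # stand-in for hateful text
    "those people deserve nothing",
    "lovely weather in Seattle today",
    "great game last night",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(tweets, labels)

# In the platform described above, flagged tweets went to a human
# moderator for confirmation before the counter-reply was attached.
print(model.predict_proba(["get rid of them all"])[:, 1])
```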

Matt Gilmore: But we still needed the mechanic, a catch-22, that would present those looking to spread hate on the platform with a no-win decision. That's when we stumbled onto the fact that Twitter didn't allow people to delete comments on their tweets. The only way to remove a comment was to delete the post entirely. That mechanic gave us a way to put a permanent marker, in the form of an image and message, on tweets containing hate speech. It's that permanent marker that let those looking to retweet, and spread hate, know that doing so would benefit an organization they're opposed to, Life After Hate. No matter what they chose to do, love wins.

Aziz: Fascinating. So, what led you to the partnership with Life After Hate and how did that work?

Carmel: Staffed and founded by former hate group members and violent extremists, Life After Hate is a non-profit that helps people in extremist groups break from that hate-filled lifestyle. They offer a welcoming way out that's free of judgement. We collaborated with them in training the AI that's used to identify hate speech in near real time on Twitter. With the benefit of their knowledge, our AI can even find hidden forms of hate speech (coded language, secret emoji combinations) in a vast sea of tweets. Their expertise was crucial to align the language we used when countering hate, making it more compassionate and matter-of-fact, rather than confrontational.

Herron: Additionally, their partnership just made perfect sense on a conceptual level as the beneficiary of the effort. If you're one of those people looking to spread hate on Twitter, you're much less likely to hit retweet knowing that you'll be benefiting an organization you're opposed to.

Aziz: Was it hard to wade through that much hate speech? What surprised you?

Herron: Being exposed to all the hate-filled tweets was easily the most difficult part of the whole thing. The human brain is not wired to read and see the kinds of messages we encountered for long periods of time. At the end of the countering process, after the AI identified hate, we always relied on a human moderator to validate it before countering/tagging it. We broke up the shifts between many volunteers, but it was always quite difficult when it was your shift.

Carmel: We learned that identifying hate speech was much easier than categorizing it. Our initial understanding of hate speech, especially before Life After Hate helped us, was really just the movie version of hate speech, and it missed a lot of hidden context. We were also surprised at how much the language would evolve relative to current events. It was definitely something we had to stay on top of.

We were surprised by how broad a spectrum of people the hate was coming from. We went in thinking we'd just encounter a bunch of thugs, but many of these people held themselves out as academics, comedians, or historians. The brands of hate some of them shared were nuanced and, in an insidious way, very compelling.

We were caught off guard by the amount of time and effort those who disliked our platform would take to slam or discredit it. A lot of these people are quite savvy and would go to great lengths to undermine our efforts. Outside of the things we dealt with on Twitter, one YouTube hate-fluencer made a video, close to an hour long, that wove all sorts of intricate theories and conspiracies about our platform.

Gilmore: We were also surprised by how wrong our instincts were. When we first started, the things we were seeing made us angry and frustrated. We wanted to come after these hateful people in an aggressive way. We wanted to fight back. Life After Hate was essential in helping course-correct our tone and message. They helped us understand (and we'd like more people to know) the power of empathy combined with education, and its ability to remove walls rather than build them between people. It can be difficult to take this approach, but it ultimately gets everyone to a better place.

Aziz: I love that idea, empathy with education. What were the results of the work you've done so far? How did you measure success?

Carmel: The WeCounterHate platform radically outperformed expectations in identifying hate speech (91% success) relative to a human moderator, as we continued to improve the model over the course of the project.

When @WeCounterHate replied to a tweet containing hate, it reduced the spread of that hate by an average of 54%. Furthermore, 19% of the "hatefluencers" deleted their original tweet outright once it had been countered.

By our estimates, the Hate Tweets we countered were shared roughly 20 million fewer times compared to similar Hate Tweets by the same authors that weren't countered.

Gilmore: It was a pretty mind-bending exercise for people working in an ad agency, who have spent our entire careers trying to gain exposure for the work we do on behalf of clients, to suddenly be trying to reduce impressions. We even began referring to WCH as the world's first reverse-media plan, designed to reduce impressions by stopping retweets.

Aziz: So now that the project has ended, how do you hope to take this idea forward in an open source way?

Herron: Our hope was to counter hate speech online, while collecting insightful data about how hate speech propagates online. Going forward, hopefully this data will allow experts in the field to address the hate speech problem at a more systemic level. Our goal is to publicly open source the archived data that has been gathered, hopefully next quarter (Q1 2020).

I love this idea on so many different levels. The ingenuity of finding a way to counteract hate speech without resorting to censorship. The partnership with Life After Hate to improve the sophistication of the detection. And the potential for this same model to be applied to so many different problems in the world (anyone want to build a version for climate change deniers?). It proves that the creativity of the advertising world can truly be turned into a force for good, and for that I salute the team for showing us this powerful act of moral imagination.


The skills needed to land the hottest tech job of 2020 – Business Insider Nordic

Artificial intelligence is one of the hottest topics in corporate America. So it's no surprise that companies are rushing to find the talent to support the push to adopt the advanced tech.

Demand for AI specialists grew 74% over the last five years, and the AI specialist is expected to be one of the most highly sought-after roles in 2020, according to a new study from LinkedIn. Among the necessary skills for the position are machine learning and natural language processing.

But it's not just AI experts that are in high demand. Cloud engineers, developers, cybersecurity experts, and data scientists also made the list. Alongside the individuals needed to support the technology, companies are also seeking leaders, like a chief transformation officer and chief culture officer, to oversee the adoption. Even non-tech positions like managing the customer experience, a key focus for many digital overhauls, are hot positions for 2020.

Those projections indicate just how aggressively organizations are trying to adopt more sophisticated technology, but also the major problem they face in navigating the skills gap and the tight labor market.

A struggle, however, will be finding the talent to fill the vacancies. One way companies are tackling that challenge is by upskilling their current employees.

Jeff McMillan, the chief data and analytics officer for Morgan Stanley's wealth management division, runs an internal AI boot camp that covers the basics of the technology. And Microsoft and others are working with online educational platforms like OpenClassrooms to craft comprehensive curricula that give existing workers the chance to train for new jobs within the organization.

With tech-heavy skills in such short supply, some experts even suggest that corporations should appoint a "chief reskilling" officer to manage the push to reskill employees. "What this new role will be doing is future thinking, future strategy, future alignment with talent and people," Jason Wingard, the dean of the School of Professional Studies at Columbia University, previously told Business Insider.

While investments in larger, enterprise-wide AI projects could slip in 2020, the push to adopt the tech will remain fervent, creating a lucrative job market for those who have the skills to support the shift.


In the 2020s, human-level A.I. will arrive, and finally ace the Turing test – Inverse

The past decade has seen the rise of remarkably human personal assistants, increasing automation in transportation and industrial environments, and even the alleged passing of Alan Turing's famous robot consciousness test. Such innovations have taken artificial intelligence out of labs and into our hands.

A.I. programs have become painters, drivers, doctors' assistants, and even friends. But with these new benefits have also come increasing dangers. This ending decade saw the first, and likely not the last, death caused by a self-driving car.

This is #20 on Inverse's 20 predictions for the 2020s.

And as we head toward another decade of machine learning and robotics research, questions surrounding the moral programming of A.I. and the limits of their autonomy will no longer be just thought experiments but time-sensitive problems.

One such area to keep an eye on going into the new decade will be partially defined by this question: what kind of legal status will A.I. be granted as their capabilities and intelligence continue to scale closer to those of humans? This is a conversation the archipelago nation of Malta started in 2018, when its leaders proposed that it should prepare to grant or deny citizenship to A.I.s just as they would humans.

The logic behind this is that A.I.s of the future could have just as much agency and potential to cause disruption as any other non-robotic being. Francois Piccione, policy advisor for the Maltese government, told Inverse in 2019 that not taking such measures would be irresponsible.

"Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity," said Piccione. "To realize that such a revolution is taking place and not do one's best to prepare for it would be irresponsible."

While the 2020s might not see fully fledged citizenship for A.I.s, Inverse predicts that there will be increasing legal scrutiny in the coming years over who is legally responsible for the actions of A.I., whether it be their owners or the companies designing them. Instead of citizenship or visas for A.I., this could lead to further restrictions on the humans who travel with them and on the ways in which A.I. can be used in different settings.

Another critical point of increasing scrutiny in the coming years will be how to ensure A.I. programmers continue to think critically about the algorithms they design.

This past decade saw racism and death result from poorly designed algorithms and even poorer introspection. Inverse predicts that as A.I. continues to scale, labs will increasingly call upon outside experts, such as ethicists and moral psychologists, to make sure these human-like machines are not doomed to repeat our same dehumanizing mistakes.

As 2019 draws to a close, Inverse is looking to the future. These are our 20 predictions for science and technology for the 2020s. Some are terrifying, some are fascinating, and others we can barely wait for. This has been #20. Read a related story here.


Samsung to announce its Neon artificial intelligence project at CES 2020 – Firstpost

tech2 News Staff, Dec 26, 2019 17:21:10 IST

Samsung has been teasing Neon for quite a while on social media. It appears to be an artificial intelligence (AI) project by its research arm and the company will be announcing more details about it during CES 2020 in January.

Samsung Neon AI project. Image: Neon

Neon hasn't really revealed any details. It's being developed under Samsung Technology & Advanced Research Labs (STAR Labs). STAR Labs could be a reference to the Scientific and Technological Advanced Research Laboratories (S.T.A.R. Labs) from DC Comics, but we can't confirm that. The research division is led by Pranav Mistry, who earlier worked on the Samsung Galaxy Gear and is now the President and CEO of STAR Labs.

The company has set up a website with a landing page that doesn't really mention any details. It only has a message saying, "Have you ever met an Artificial?" It has been continuously posting images on Twitter and Instagram, including a couple of videos. These images contain the same message in different languages as well, indicating that the AI has multilingual functionality. Mistry has also been teasing Neon on his own Twitter account.

This won't be Samsung's first venture into AI, since it already has the Bixby digital assistant. However, Bixby never really took off. CES 2020 begins on 7 January, and we'll get to know more about Neon during the expo.



Artificial intelligence jobs on the rise, along with everything else AI – ZDNet

AI jobs are on the upswing, as are the capabilities of AI systems. The speed of deployments has also increased exponentially. It's now possible to train an image-processing algorithm in about a minute -- something that took hours just a couple of years ago.

These are among the key metrics of AI tracked in the latest release of the AI Index, an annual data update from Stanford University's Human-Centered Artificial Intelligence Institute, published in partnership with McKinsey Global Institute. The index tracks AI growth across a range of metrics, from papers published to patents granted to employment numbers.

Here are some key measures extracted from the 290-page index:

AI conference attendance: One important metric is conference attendance, for starters. That's way up. Attendance at AI conferences continues to increase significantly. In 2019, the largest, NeurIPS, expects 13,500 attendees, up 41% over 2018 and over 800% relative to 2012. Even conferences such as AAAI and CVPR are seeing annual attendance growth around 30%.

AI jobs: Another key metric is the number of AI-related jobs opening up. This is also on the upswing, the index shows. Looking at Indeed postings, the share of AI jobs in the US has increased five-fold since 2010, with the fraction rising from 0.26% of total jobs posted to 1.32% in October 2019. While this is still a small fraction of total jobs, it's worth mentioning that these are only technology-related positions working directly in AI development; there is likely an increasingly large share of jobs being enhanced or re-ordered by AI.

Among AI technology positions, the leading category is job postings mentioning "machine learning" (58% of AI jobs), followed by artificial intelligence (24%), deep learning (9%), and natural language processing (8%). Deep learning is the fastest-growing job category, growing 12-fold between 2015 and 2018; artificial intelligence grew five-fold, machine learning four-fold, and natural language processing two-fold.

Compute capacity: Moore's Law has gone into hyperdrive, the AI Index shows, with substantial progress in ramping up the computing capacity required to run AI. Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months -- a mind-boggling net increase of 300,000x. By contrast, the typical two-year doubling period that characterized Moore's Law previously would only yield a 7x increase, the index's authors point out.
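
A back-of-the-envelope check of that contrast; the measurement window is an assumption, taken to be roughly the 2012-to-late-2017 span of the OpenAI analysis the index draws on.

```python
# Compounding a doubling time over an assumed ~66-month window (2012
# AlexNet to late 2017).
months = 66

# Moore's-law pace, doubling every 24 months, over the same window:
print(f"{2 ** (months / 24):.1f}x")   # ~6.7x, close to the 7x the index cites

# A 3.4-month doubling compounds explosively. Note the measured 300,000x
# is the ratio of endpoint compute values, while 3.4 months is a fitted
# trend, so naive compounding lands in the same ballpark rather than exactly:
print(f"{2 ** (months / 3.4):,.0f}x")  # roughly 700,000x
```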

Training time: The amount of time it takes to train AI algorithms has dropped dramatically -- training a large image classification system on a cloud infrastructure now takes almost 1/180th of the time it did just two years ago. Two years ago, it took three hours to train such a system; by July 2019, that time had shrunk to 88 seconds.

Commercial machine translation: One indicator of where AI hits the ground running is machine translation -- for example, English to Chinese. The number of commercially available systems with pre-trained models and public APIs has grown rapidly, the index notes, from eight in 2017 to over 24 in 2019. Increasingly, machine-translation systems provide a full range of customization options: pre-trained generic models, automatic domain adaptation to build models and better engines with their own data, and custom terminology support.

Computer vision: Another benchmark is accuracy of image recognition. The index tracked reporting through ImageNet, a public dataset of more than 14 million images created to address the issue of scarcity of training data in the field of computer vision. In the latest reporting, the accuracy of image recognition by systems has reached about 85%, up from about 62% in 2013.

Natural language processing: AI systems keep getting smarter, to the point that they are surpassing low-level human responsiveness in natural language processing. As a result, there are also stronger standards for benchmarking AI implementations. GLUE, the General Language Understanding Evaluation benchmark, was only released in May 2018, intended to measure AI performance on text-processing capabilities. The threshold of non-expert human performance was crossed by submitted systems in June 2019, the index notes. In fact, the performance of AI systems has been so dramatic that industry leaders had to release a higher-level benchmark, SuperGLUE, "so they could test performance after some systems surpassed human performance on GLUE."


Why Cognitive Technology May Be A Better Term Than Artificial Intelligence – Forbes

One of the challenges for those tracking the artificial intelligence industry is that, surprisingly, there's no accepted, standard definition of what artificial intelligence really is. AI luminaries all have slightly different definitions of what AI is. Rodney Brooks says that artificial intelligence doesn't mean one thing; it's a collection of practices and pieces that people put together. Of course, that's not particularly settling for companies that need to understand the breadth of what AI technologies are and how to apply them to their specific needs.


In general, most people would agree that the fundamental goals of AI are to enable machines to have the cognition, perception, and decision-making capabilities that previously only humans or other intelligent creatures have had. Max Tegmark simply defines AI as intelligence that is not biological. Simple enough, but we don't fully understand what biological intelligence itself means, and so trying to build it artificially is a challenge.

At the most abstract level, AI is machine behavior and functions that mimic the intelligence and behavior of humans. Specifically, this usually refers to what we have come to think of as learning, problem solving, understanding and interacting with the real-world environment, and conversation and linguistic communication. However, the specifics matter, especially when we're trying to apply that intelligence to solve the very specific problems businesses, organizations, and individuals have.

Saying AI but meaning something else

There are certainly a subset of those pursuing AI technologies with a goal of solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can. AGI is certainly the goal for many in the AI research being done in academic and lab settings as it gets to the heart of answering the basic question of whether intelligence is something only biological entities can have. But the majority of those who are talking about AI in the market today are not talking about AGI or solving these fundamental questions of intelligence. Rather, they are looking at applying very specific subsets of AI to narrow problem areas. This is the classic Broad / Narrow (Strong / Weak) AI discussion.

Since no one has successfully built an AGI solution, it follows that all current AI solutions are narrow. While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority of narrow AI solutions are not trying to achieve anything greater than the specific problem the technology is being applied to. What we mean to say here is that we're not doing narrow AI for the sake of solving a general AI problem, but rather narrow AI for the sake of narrow AI. It's not going to get any broader for those particular organizations. In fact, it should be said that many enterprises don't really care much about AGI, and the goal of AI for those organizations is not AGI.

If that's the case, then it seems that the industry's perception of what AI is and where it is heading differs from what many in research or academia think. What interests enterprises most about AI is not that it solves questions of general intelligence, but rather that there are specific things that humans have been doing in the organization that they would now like machines to do. The range of those tasks differs depending on the organization and the sort of problems they are trying to solve. If this is the case, then why bother with an ill-defined term whose original definition and goals are diverging rapidly from what is actually being put into practice?

What are cognitive technologies?

Perhaps a better term for narrow AI applied for the sole sake of those narrow applications is cognitive technology. Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition. Generally, you can group these aspects of cognition into three P categories, borrowed from the autonomous vehicles industry: perception (sensing and interpreting the environment), prediction (anticipating how it will change), and planning (deciding what to do in response).

From this perspective, it's clear that cognitive technologies are indeed a subset of artificial intelligence technologies, the main difference being that AI can be applied both toward the goals of AGI and toward narrowly focused AI applications. On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.

Surviving the next AI winter

The mood in the AI industry is noticeably shifting. Marketing hype, venture capital dollars, and government interest are all helping to push demand for AI skills and technology to its limits. We are still very far away from the end vision of AGI. Companies are quickly realizing the limits of AI technology, and we risk industry backlash as enterprises push back on what is being overpromised and under-delivered, just as we experienced in the first AI Winter. The big concern is that interest will cool too much and AI investment and research will again slow, leading to another AI Winter. However, perhaps the issue never was with the term artificial intelligence. AI has always been a lofty goal upon which to set the sights of academic research and interest, much like building settlements on Mars or interstellar travel. However, just as the Space Race resulted in technologies with broad adoption today, so too will the AI quest result in cognitive technologies with broad adoption, even if we never achieve the goals of AGI.
