Category Archives: AI

New AI research lets you click and drag images to manipulate them … – The Verge

No, it's not over yet: the ability of AI tools to manipulate images continues to grow. The latest example is only a research paper for now, but a very impressive one, letting users simply drag elements of a picture to change their appearance.

This doesn't sound too exciting on the face of it, but take a look at the examples below to get an idea of what this system can do.

Not only can you change the dimensions of a car or manipulate a smile into a frown with a simple click and drag, but you can rotate a picture's subject as if it were a 3D model, changing the direction someone is facing, for example. One demo even shows the user adjusting the reflections on a lake and the height of a mountain range with a few clicks.

Here's an overview of various subjects:

Here's a closer look at landscape manipulation:

And just for fun, messing about with lions:

These videos come from the research team's homepage, though this has been crashing due to the amount of traffic sent to the site by Twitter (mainly by user @_akhaliq, who does a fantastic job highlighting interesting AI papers and is well worth a follow if that interests you). You can also read the research paper on arXiv right here.

As the team responsible note, what's really interesting about this work is not necessarily the image manipulation per se, but the user interface. We've been able to use AI tools like GANs to generate realistic images for a while now, but most methods lack flexibility and precision. You can tell an AI image generator to make a picture of a lion stalking through the savannah, and you'll get one, but it might not be the exact pose you want or need.

This model, named DragGAN, offers a clear solution to this. The interface is exactly the same as traditional image-warping, but rather than simply smudging and mushing existing pixels, the model generates the subject anew. As the researchers write: "[O]ur approach can hallucinate occluded content, like the teeth inside a lion's mouth, and can deform following the object's rigidity, like the bending of a horse leg."
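For readers curious about the mechanics, here is a minimal, hypothetical sketch of the point-dragging idea in Python: the user's handle and target points become a loss, and the GAN's latent code is optimised so the generated content shifts accordingly. The generator interface, the `return_features` flag and the step sizes are illustrative assumptions, not the paper's actual code.

```python
# Illustrative sketch of DragGAN-style point dragging (not the authors' code).
# Assumes a pretrained StyleGAN-like generator whose forward pass can return
# an intermediate feature map alongside the image (an assumption for this sketch).
import torch
import torch.nn.functional as F

def drag_edit(generator, w, handle, target, steps=200, lr=2e-3):
    """Optimise latent code `w` so content at `handle` (x, y) moves toward `target` (x, y)."""
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    direction = F.normalize(torch.tensor(
        [target[0] - handle[0], target[1] - handle[1]], dtype=torch.float), dim=0)
    for _ in range(steps):
        image, features = generator(w, return_features=True)  # assumed signature
        # Motion supervision: encourage the feature one small step along the drag
        # direction to match the feature currently under the handle point.
        x, y = handle
        nx = int(round(x + direction[0].item()))
        ny = int(round(y + direction[1].item()))
        loss = F.l1_loss(features[..., ny, nx], features[..., y, x].detach())
        opt.zero_grad()
        loss.backward()
        opt.step()
        # The full method also re-tracks the handle point after each update and
        # masks the loss so only the user-selected region is allowed to move.
    return w.detach()
```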

Obviously this is just a demo for now, and it's impossible to evaluate the tech completely. (How realistic are the end images, for example? It's hard to say based on the low-res videos available.) But it's another example of making image manipulation more accessible.

The rest is here:

New AI research lets you click and drag images to manipulate them ... - The Verge

Bloomsbury admits using AI-generated artwork for Sarah J Maas novel – The Guardian


Publisher says cover of House of Earth and Blood was prepared by in-house designers unaware the stock image chosen was not human-made

Fri 19 May 2023 10.30 EDT

Publisher Bloomsbury has said it was unaware an image it used on the cover of a book by fantasy author Sarah J Maas was generated by artificial intelligence.

The paperback of Maas's House of Earth and Blood features a drawing of a wolf, which Bloomsbury had credited to Adobe Stock, a service that provides royalty-free images to subscribers.

But the Verge reported that the illustration of the wolf matches one created by a user on Adobe Stock called Aperture Vintage, who has marked the image as AI-generated.

A number of illustrators and fans have criticised the cover for using AI, but Bloomsbury has said it was unaware of the image's origin.

"Bloomsbury's in-house design team created the UK paperback cover of House of Earth and Blood, and as part of this process we incorporated an image from a photo library that we were unaware was AI when we licensed it," said Bloomsbury in a statement. "The final cover was fully designed by our in-house team."

This is not the first time that a book cover from a major publishing house has used AI. In 2022, sci-fi imprint Tor discovered that a cover it had created had used a licensed image created by AI, but decided to go ahead anyway due to production constraints.

And this month Bradford literature festival apologised for the hurt caused after artists criticised it for using AI-generated images on promotional material.

Meanwhile, sci-fi publisher Clarkesworld, which publishes science fiction short stories, was forced to close itself to submissions after a deluge of entries generated by AI.

The publishing industry is more broadly grappling with the use and role of AI. It has led to the Society of Authors (SoA) issuing a paper on artificial intelligence, in which it said that while there are potential benefits of machine learning, there are risks that need to be assessed, and safeguards need to be put in place to ensure that the creative industries will continue to thrive.

The SoA has advised that consent should be sought from creators before their work is used by an AI system, and that developers should be required to publish the data sources they have used to train their AI systems.

The guidance addresses concerns similar to those raised by illustrators and artists who spoke to the Guardian earlier this year about the way in which AI image generators use databases of already existing art and text without the creators permission.


Read more from the original source:

Bloomsbury admits using AI-generated artwork for Sarah J Maas novel - The Guardian

For chemists, the AI revolution has yet to happen – Nature.com

More than 20 years ago, the Cancer Research Screensaver harnessed distributed computing power to assess anti-cancer activity in molecules. Credit: James King-Holmes/SPL

Many people are expressing fears that artificial intelligence (AI) has gone too far or risks doing so. Take Geoffrey Hinton, a prominent figure in AI, who recently resigned from his position at Google, citing the desire to speak out about the technology's potential risks to society and human well-being.

But against those big-picture concerns, in many areas of science you will hear a different frustration being expressed more quietly: that AI has not yet gone far enough. One of those areas is chemistry, for which machine-learning tools promise a revolution in the way researchers seek and synthesize useful new substances. But a wholesale revolution has yet to happen because of the lack of data available to feed hungry AI systems.

Any AI system is only as good as the data it is trained on. These systems rely on what are called neural networks, which their developers teach using training data sets that must be large, reliable and free of bias. If chemists want to harness the full potential of generative-AI tools, they need to help to establish such training data sets. More data are needed, both experimental and simulated, including historical data and otherwise obscure knowledge, such as that from unsuccessful experiments. And researchers must ensure that the resulting information is accessible. This task is still very much a work in progress.

Take, for example, AI tools that conduct retrosynthesis. These begin with a chemical structure a chemist wants to make, then work backwards to determine the best starting materials and sequence of reaction steps to make it. AI systems that implement this approach include 3N-MCTS, designed by researchers at the University of Münster in Germany and Shanghai University in China [1]. This combines a known search algorithm with three neural networks. Such tools have attracted attention, but few chemists have yet adopted them.
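To give a flavour of the search problem these tools tackle, here is a hypothetical, heavily simplified sketch of neural-guided retrosynthesis planning: a policy model proposes candidate disconnections and the search recurses until every precursor is a purchasable building block. The `policy_model` and `purchasable` objects are illustrative stand-ins, and the depth-first loop below is far simpler than the Monte Carlo tree search that 3N-MCTS actually uses.

```python
# Illustrative sketch of neural-guided retrosynthesis planning (not 3N-MCTS).
# `policy_model` proposes candidate precursor sets for a target molecule;
# `purchasable` says whether a molecule is an available building block.
def plan_route(target, policy_model, purchasable, depth=0, max_depth=6):
    if purchasable(target):
        return []                       # nothing left to synthesise on this branch
    if depth >= max_depth:
        return None                     # give up: route too long
    for precursors in policy_model.top_disconnections(target, k=5):  # assumed method
        route = []
        for p in precursors:
            sub_route = plan_route(p, policy_model, purchasable, depth + 1, max_depth)
            if sub_route is None:
                break                   # this disconnection dead-ends; try the next one
            route.extend(sub_route)
        else:
            # Every precursor was resolvable: record the reaction step and return.
            return route + [(precursors, target)]
    return None                         # no proposed disconnection worked
```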


To make accurate chemical predictions, an AI system needs sufficient knowledge of the specific chemical structures that different reactions work with. Chemists who discover a new reaction usually publish results exploring this, but often these are not exhaustive. Unless AI systems have comprehensive knowledge, they might end up suggesting starting materials with structures that would stop reactions working or lead to incorrect products [2].

An example of mixed progress comes in what AI researchers call inverse design. In chemistry, this involves starting with desired physical properties and then identifying substances that have these properties, and that can, ideally, be made cheaply. For example, AI-based inverse design helped scientists to select optimal materials for making blue phosphorescent organic light-emitting diodes [3].

Computational approaches to inverse design, which ask a model to suggest structures with the desired characteristics, are already in use in chemistry, and their outputs are routinely scrutinized by researchers. If AI is to outperform pre-existing computational tools in inverse design, it needs enough training data relating chemical structures to properties. But what is meant by enough training data in this context depends on the type of AI used.

A generalist generative-AI system such as ChatGPT, developed by OpenAI in San Francisco, California, is simply data-hungry. To apply such a generative-AI system to chemistry, hundreds of thousands or possibly even millions of data points would be needed.

A more chemistry-focused AI approach trains the system on the structures and properties of molecules. In the language of AI, molecular structures are graphs. In molecules, chemical bonds connect atoms just as edges connect nodes in graphs. Such AI systems fed with 5,000 to 10,000 data points can already beat conventional computational approaches to answering chemical questions [4]. The problem is that, in many cases, even 5,000 data points is far more than are currently available.
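As a small illustration of that molecules-as-graphs framing, the snippet below encodes ethanol (hydrogens omitted) as nodes and edges and builds the adjacency list that graph-based models typically consume; the labels and indices are purely illustrative.

```python
# Ethanol (CH3-CH2-OH) as a graph: atoms are nodes, bonds are edges.
nodes = {0: "C", 1: "C", 2: "O"}               # node index -> element
edges = [(0, 1, "single"), (1, 2, "single")]   # (atom_i, atom_j, bond type)

# Adjacency list, the form most graph neural networks consume.
adjacency = {i: [] for i in nodes}
for i, j, _bond in edges:
    adjacency[i].append(j)
    adjacency[j].append(i)

print(adjacency)   # {0: [1], 1: [0, 2], 2: [1]}
```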


The AlphaFold protein-structure-prediction tool [5], arguably the most successful chemistry AI application, uses such a graph-representation approach. AlphaFold's creators trained it on a formidable data set: the information in the Protein Data Bank, which was established in 1971 to collate the growing set of experimentally determined protein structures and currently contains more than 200,000 structures. AlphaFold provides an excellent example of the power AI systems can have when furnished with sufficient high-quality data.

So how can other AI systems create or access more and better chemistry data? One possible solution is to set up systems that pull data out of published research papers and existing databases, such as an algorithm created by researchers at the University of Cambridge, UK, that converts chemical names to structures [6]. This approach has accelerated progress in the use of AI in organic chemistry.

Another potential way to speed things up is to automate laboratory systems. Existing options include robotic materials-handling systems, which can be set up to make and measure compounds to test AI model outputs [7,8]. However, at present this capability is limited, because the systems can carry out only a relatively narrow range of chemical reactions compared with a human chemist.

AI developers can train their models using both real and simulated data. Researchers at the Massachusetts Institute of Technology in Cambridge have used this approach to create a graph-based model that can predict the optical properties of molecules, such as their colour [9].


There is another, particularly obvious solution: AI tools need open data. How people publish their papers must evolve to make data more accessible. This is one reason why Nature requests that authors deposit their code and data in open repositories. It is also yet another reason to focus on data accessibility, above and beyond scientific crises surrounding the replication of results and high-profile retractions. Chemists are already addressing this issue with facilities such as the Open Reaction Database.

But even this might not be enough to allow AI tools to reach their full potential. The best possible training sets would also include data on negative outcomes, such as reaction conditions that don't produce desired substances. And data need to be recorded in agreed and consistent formats, which they are not at present.

Chemistry applications require computer models to be better than the best human scientist. Only by taking steps to collect and share data will AI be able to meet expectations in chemistry and avoid becoming a case of hype over hope.

The rest is here:

For chemists, the AI revolution has yet to happen - Nature.com

G7 calls for adoption of international technical standards for AI – Reuters

TOKYO, May 20 (Reuters) - Leaders of the Group of Seven (G7) nations on Saturday called for the development and adoption of international technical standards for trustworthy artificial intelligence (AI) as lawmakers of the rich countries focus on the new technology.

While the G7 leaders, meeting in Hiroshima, Japan, recognised that the approaches to achieving "the common vision and goal of trustworthy AI may vary", they said in a statement that "the governance of the digital economy should continue to be updated in line with our shared democratic values".

The agreement came after the European Union, which is represented at the G7, inched closer this month to passing legislation to regulate AI technology, potentially the world's first comprehensive AI law.

"We want AI systems to be accurate, reliable, safe and non-discriminatory, regardless of their origin," European Commission President Ursula von der Leyen said on Friday.

The G7 leaders mentioned generative AI, the subset popularised by the ChatGPT app, saying they "need to immediately take stock of the opportunities and challenges of generative AI."

The heads of government agreed on Friday to create a ministerial forum dubbed the "Hiroshima AI process" to discuss issues around generative AI tools, such as intellectual property rights and disinformation, by the end of this year.

The summit followed a G7 digital ministers' meeting last month, where the countries - the U.S., Japan, Germany, Britain, France, Italy and Canada - said they should adopt "risk-based" AI regulation.

Reporting by Kantaro Komiya; Editing by William Mallard


See the original post:

G7 calls for adoption of international technical standards for AI - Reuters

Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. – The New York Times

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: It decided to give away its A.I. crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an A.I. technology, called LLaMA, that can power online chatbots. But instead of keeping the technology to itself, Meta released the systems underlying computer code into the wild. Academics, government researchers and others who gave their email address to Meta could download the code once the company had vetted the individual.

Essentially, Meta was giving its A.I. technology away as open-source software: computer code that can be freely copied, modified and reused, providing outsiders with everything they needed to quickly build chatbots of their own.

"The platform that will win will be the open one," Yann LeCun, Meta's chief A.I. scientist, said in an interview.

As a race to lead A.I. heats up across Silicon Valley, Meta is standing out from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the smartest thing to do is share its underlying A.I. engines as a way to spread its influence and ultimately move faster toward the future.

Its actions contrast with those of Google and OpenAI, the two companies leading the new A.I. arms race. Worried that A.I. tools like chatbots will be used to spread disinformation, hate speech and other toxic content, those companies are becoming increasingly secretive about the methods and software that underpin their A.I. products.

Google, OpenAI and others have been critical of Meta, saying an unfettered open-source approach is dangerous. A.I.'s rapid rise in recent months has raised alarm bells about the technology's risks, including how it could upend the job market if it is not properly deployed. And within days of LLaMA's release, the system leaked onto 4chan, the online message board known for spreading false and misleading information.

"We want to think more carefully about giving away details or open sourcing code of A.I. technology," said Zoubin Ghahramani, a Google vice president of research who helps oversee A.I. work. "Where can that lead to misuse?"

Some within Google have also wondered if open-sourcing A.I. technology may pose a competitive threat. In a memo this month, which was leaked on the online publication Semianalysis.com, a Google engineer warned colleagues that the rise of open-source software like LLaMA could cause Google and OpenAI to lose their lead in A.I.

But Meta said it saw no reason to keep its code to itself. The growing secrecy at Google and OpenAI is "a huge mistake," Dr. LeCun said, and "a really bad take on what is happening." He argues that consumers and governments will refuse to embrace A.I. unless it is outside the control of companies like Google and Meta.

"Do you want every A.I. system to be under the control of a couple of powerful American companies?" he asked.

OpenAI declined to comment.

Meta's open-source approach to A.I. is not novel. The history of technology is littered with battles between open source and proprietary, or closed, systems. Some hoard the most important tools that are used to build tomorrow's computing platforms, while others give those tools away. Most recently, Google open-sourced the Android mobile operating system to take on Apple's dominance in smartphones.

Many companies have openly shared their A.I. technologies in the past, at the insistence of researchers. But their tactics are changing because of the race around A.I. That shift began last year when OpenAI released ChatGPT. The chatbot's wild success wowed consumers and kicked up the competition in the A.I. field, with Google moving quickly to incorporate more A.I. into its products and Microsoft investing $13 billion in OpenAI.

While Google, Microsoft and OpenAI have since received most of the attention in A.I., Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and the hardware needed to realize chatbots and other generative A.I., which produce text, images and other media on their own.

In recent months, Meta has worked furiously behind the scenes to weave its years of A.I. research and development into new products. Mr. Zuckerberg is focused on making the company an A.I. leader, holding weekly meetings on the topic with his executive team and product leaders.

On Thursday, in a sign of its commitment to A.I., Meta said it had designed a new computer chip and improved a new supercomputer specifically for building A.I. technologies. It is also designing a new computer data center with an eye toward the creation of A.I.

"We've been building advanced infrastructure for A.I. for years now, and this work reflects long-term efforts that will enable even more advances and better use of this technology across everything we do," Mr. Zuckerberg said.

Meta's biggest A.I. move in recent months was releasing LLaMA, which is what is known as a large language model, or L.L.M. (LLaMA stands for Large Language Model Meta AI.) L.L.M.s are systems that learn skills by analyzing vast amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google's Bard chatbot are also built atop such systems.

L.L.M.s pinpoint patterns in the text they analyze and learn to generate text of their own, including term papers, blog posts, poetry and computer code. They can even carry on complex conversations.

In February, Meta openly released LLaMA, allowing academics, government researchers and others who provided their email address to download the code and use it to build a chatbot of their own.

But the company went further than many other open-source A.I. projects. It allowed people to download a version of LLaMA after it had been trained on enormous amounts of digital text culled from the internet. Researchers call this "releasing the weights," referring to the particular mathematical values learned by the system as it analyzes data.

This was significant because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those who have the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
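In concrete terms, "the weights" are just a file of learned parameters. The toy sketch below, which uses PyTorch and a deliberately tiny stand-in model rather than anything resembling LLaMA or Meta's actual release format, shows why having that file matters: loading it takes seconds, while producing the numbers inside it is the expensive part.

```python
# Toy illustration of "releasing the weights"; the model is a tiny placeholder,
# not LLaMA, and the file name is made up for this sketch.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Embedding(32000, 64), nn.Linear(64, 32000))

trained = make_model()                          # imagine this took months of GPU time
torch.save(trained.state_dict(), "weights.pt")  # this file is what gets "released"

downloaded = make_model()                       # anyone with the same architecture...
downloaded.load_state_dict(torch.load("weights.pt"))  # ...skips the training entirely
downloaded.eval()                               # ready to use, at a fraction of the cost
```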

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days, someone released the LLaMA weights onto 4chan.

At Stanford University, researchers used Meta's new technology to build their own A.I. system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. It also generated racist material, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said distributing the technology to the public would be "like a grenade available to everyone in a grocery store." He did not respond to a request for comment.

Stanford promptly removed the A.I. system from the internet. The project was designed to provide researchers with technology that "captured the behaviors of cutting-edge A.I. models," said Tatsunori Hashimoto, the Stanford professor who led the project. "We took the demo down as we became increasingly concerned about misuse potential beyond a research setting."

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals could already generate and spread disinformation and hate speech. He added that toxic material could be tightly restricted by social networks such as Facebook.

"You can't prevent people from creating nonsense or dangerous information or whatever," he said. "But you can stop it from being disseminated."

For Meta, more people using open-source software can also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world builds programs using Metas tools, it could help entrench the company for the next wave of innovation, staving off potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta was committed to open-sourcing A.I. technology. He said the evolution of the consumer internet was the result of open, communal standards that helped build the fastest, most widespread knowledge-sharing network the world had ever seen.

"Progress is faster when it is open," he said. "You have a more vibrant ecosystem where everyone can contribute."

Go here to see the original:

Meta Made Its AI Tech Open-Source. Rivals Say It's a Risky Decision. - The New York Times

From Amazon to Wendy's, how 4 companies plan to incorporate AI, and how you may interact with it – CNBC

Smith Collection/Gado | Archive Photos | Getty Images

Artificial intelligence is no longer limited to the realm of science-fiction novels; it's increasingly becoming a part of our everyday lives.

AI chatbots, such as OpenAI's ChatGPT, are already being used in a variety of ways, from writing emails to booking trips. In fact, ChatGPT amassed over 100 million users within just months of launching.

But AI goes beyond large language models (LLMs) like ChatGPT. Microsoft defines AI as "the capability of a computer system to mimic human-like cognitive functions such as learning and problem-solving."

For example, self-driving cars use AI to simulate the decision-making processes a human driver would usually make while on the road, such as identifying traffic signals or choosing the best route to reach a given destination, according to Microsoft.

AI's boom in popularity has many companies racing to integrate the technology into their own products. In fact, 94% of business leaders believe that AI development will be critical to the success of their business over the next five years, according to Deloitte's latest survey.

For consumers, this means AI may be coming to a store, restaurant or supermarket nearby. Here are four companies that are already utilizing AI's capabilities and how it may impact you.

Amazon delivery package seen in front of a door.

Sopa Images | Lightrocket | Getty Images

Amazon uses AI in a number of ways, but one strategy aims to get your orders to you faster, Stefano Perego, vice president of customer fulfilment and global ops services for North America and Europe at Amazon, told CNBC on Monday.

The company's "regionalization" plan involves shipping products from warehouses that are closest to customers rather than from a warehouse located in a different part of the country.

To do that, Amazon is utilizing AI to analyze data and patterns to determine where certain products are in demand. This way, those products can be stored in nearby warehouses in order to reduce delivery times.
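As a purely illustrative sketch of the idea, and not Amazon's system, the snippet below assigns each product to the fulfilment region where its recent demand is highest, so stock sits closer to the customers most likely to order it; the data and names are made up.

```python
from collections import defaultdict

# Hypothetical order history: (product, region) pairs.
orders = [("usb_cable", "northeast"), ("usb_cable", "northeast"),
          ("usb_cable", "west"), ("dog_bed", "south"), ("dog_bed", "south")]

# Count demand per product per region.
demand = defaultdict(lambda: defaultdict(int))
for product, region in orders:
    demand[product][region] += 1

# Place each product in the warehouse region with the most demand.
placement = {p: max(regions, key=regions.get) for p, regions in demand.items()}
print(placement)   # {'usb_cable': 'northeast', 'dog_bed': 'south'}
```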

Screens displaying the logos of Microsoft and ChatGPT, a conversational artificial intelligence application software developed by OpenAI.

Lionel Bonaventure | Afp | Getty Images

Microsoft is putting its $13 billion investment in OpenAI to work. In March, the tech behemoth announced that a new set of AI features, dubbed Copilot, will be added to its Microsoft 365 software, which includes popular apps such as Excel, PowerPoint and Word.

When using Word, for example, Copilot will be able to produce a "first draft to edit and iterate on, saving hours in writing, sourcing, and editing time," Microsoft says. But Microsoft acknowledges that sometimes this type of AI software can produce inaccurate responses and warns that "sometimes Copilot will be right, other times usefully wrong."

A Brain Corp. autonomous floor scrubber, called an Auto-C, cleans the aisle of a Walmart store. Sam's Club completed the rollout of roughly 600 specialized scrubbers with inventory scan towers last October in a partnership with Brain Corp.

Source: Walmart

Walmart is using AI to make sure shelves in its nearly 4,700 stores and 600 Sam's Clubs stay stocked with your favorite products. One way it's doing that: automated floor scrubbers.

As the robotic scrubbers clean Sam's Club aisles, they also capture images of every item in the store to monitor inventory levels. The inventory intelligence towers located on the scrubbers take more than 20 million photos of the shelves every day.

The company has trained its algorithms to be able to tell the difference between brands and determine how much of the product is on the shelf with more than 95% accuracy, Anshu Bhardwaj, senior vice president of Walmart's tech strategy and commercialization, told CNBC in March. And when a product gets too low, the stock room is automatically alerted to replenish it, she said.

A customer waits at a drive-thru outside a Wendy's Co. restaurant in El Sobrante, California, U.S.

Bloomberg | Bloomberg | Getty Images

An AI chatbot may be taking your order when you pull up to a Wendy's drive-thru in the near future.

The fast-food chain partnered with Google to develop an AI chatbot specifically designed for drive-thru ordering, Wendy's CEO Todd Penegor told CNBC last week. The goal of this new feature is to speed up ordering at the speaker box, which is "the slowest point in the order process," the CEO said.

In June, Wendy's plans to test the first pilot of its "Wendy's FreshAI" at a company-operated restaurant in the Columbus, Ohio area, according to a May press release.

Powered by Google Cloud's generative AI and large language models, it will be able to have conversations with customers, understand made-to-order requests and generate answers to frequently asked questions, according to the company's statement.


Go here to read the rest:

From Amazon to Wendy's, how 4 companies plan to incorporate AI, and how you may interact with it - CNBC

‘Heart wrenching’: AI expert details dangers of deepfakes and tools to detect manipulated content – Fox News

While some uses of deepfakes are lighthearted, like the pope donning a white Balenciaga puffer jacket or an AI-generated song using vocals from Drake and The Weeknd, they can also sow doubt about the authenticity of legitimate audio and videos.

Criminals are taking advantage of the technology to conduct misinformation campaigns, commit fraud and obstruct justice. As artificial intelligence (AI) continues to advance, so does the proliferation of fake content that experts warn could pose a serious threat to various aspects of everyday life if proper controls aren't put in place.

AI-manipulated images, videos and audio known as "deepfakes" are often used to create convincing but false representations of people and events. Because deepfakes are difficult for the average consumer to detect, companies like Pindrop are working to help companies and consumers identify what's real and what's fake.


Pindrop co-founder and CEO, Vijay Balasubramaniyan, said his company looks at security, identity and intelligence in audio communications to help the top banks, insurance companies and health care providers in the world determine whether they are talking to a human on the other end of the line.

Balasubramaniyan said Pindrop is at the forefront of AI security and has analyzed more than five billion voice interactions, two million of which it identified as coming from fraudsters using AI to try to convince a caller they are human.

He explained that when you call a business with sensitive information like a bank, insurance company or health care provider, they verify it's you by asking a multitude of security questions, but Pindrop replaces that process and instead verifies people based on their voice, device and behavior.


"We're seeing very specific targeted attacks," he said. "If I'm the CEO of a particular organization, I probably have a lot of audio content out there, video content out there, [so fraudsters] create a deepfake of that person to go after them for their bank accounts [and] their health care records."

While Pindrop mainly focuses on helping large companies avoid AI scams, Balasubramaniyan said he eventually wants to expand his technology to help the individual consumer because the problem is affecting everyone.

He predicts audio and video breaches are only going to become more common because if people have "tons of audio or tons of video of a particular person, you can create their likeness a whole lot easier."

"Once they have a version of your audio or your video, they can actually start creating versions of you," he said. "Those versions of you can be used for all kinds of things to get bank account information, to get health care records, to get to talk to your parents or a loved one claiming to be you. That's where technology like ours is super important."

He explained that AI and machine learning (ML) systems work by learning from the information that already exists and building upon that knowledge.


"The more of you that's out there, the more likely it is to create a version of you and a human is not going to figure out who that is," he said.

He said there are some telltale signs that can indicate a call or video is a deepfake, such as a time lag between when a question is asked and an answer is given, which can actually work in the scammer's favor because it leads the person on the other end of the line to believe something is wrong.

"When a call center agent is trying to help you and you don't respond immediately, they actually think, 'Oh man, this person is unhappy or I didn't say the right thing,'" he explained. "Therefore many of them actually start divulging all kinds of things."

"The same thing is happening on the consumer side when you are getting a call from your daughter, your son saying, 'There's a problem, I've been kidnapped' and then you have this really long pause," he added. "That pause is unsettling, but it's actually a sign that someone's using a deepfake because they have to type the answer and the system has to process that."


In an experiment conducted by Pindrop, people were given examples of audio and asked to determine if they thought it was authentic.

"When we did it across a wide variety of humans, they got it right 54% of times," he said. "What that means is they're 4% better than a monkey who did a coin toss."

As it becomes more difficult to ascertain who is human and who is a machine, it is important to adopt technology that allows you to make that determination, Balasubramaniyan argued.

"But the scarier thing for me is our democracy," he added. "We're coming up to an election cycle in the next year, and you're seeing ads, you're seeing images."

For example, the leading candidate in a campaign could be smeared by a series of deepfakes; conversely, authentic content that puts a candidate in a bad light could be denied by using AI as a scapegoat.

In the lead-up to his recent New York arraignment, deepfakes of former President Trump's mugshot, as well as fake photos showing him resisting arrest, went viral on the internet.

"If something is too good to be true or too sensational, think twice," he said. "Don't react immediately people get too worked up or react too much to a particular thing in the immediate moment."


Balasubramaniyan said people need to be increasingly skeptical about what they are hearing and viewing and warned that if a voice seems robotic, a video is choppy, there is background noise, pauses between questions or the subject isn't blinking, they should exercise caution and assume it is a deepfake.

He said this added caution is especially important if the video or message appeals to your emotions, which can lead to "heart-wrenching" consequences: a loved one getting a call about you, a grandparent being coerced into forking over their hard-earned money, or a woman's image and likeness being used to generate deepfake pictures or videos.

Some of the most successful companies in the business profit off of AI companionship to generate fake boyfriends, or more often according to Balasubramaniyan, fake boyfriends with certain qualities or capabilities.


"Because not only are deepfakes being created that are deepfakes of you, but then they're creating deepfakes or synthetic identities that have no bearing, but have some likeness to human," he warned. "Both of those things you have to be vigilant about."

Balasubramaniyan often hearkens back to the creation of the internet to quell many of the concerns people have about AI and explained that we simply need more time to ameliorate some of the negative consequences of the new technology.

"When the Internet was created, if you looked at all the content on the Internet, it was the degenerates using it, like it was awful, all kinds of nefarious things would happen on it," he said. "If you just go back down history lane to the '90s, it was filled with stuff like this."

"Over time, you build security, you build the ability for you to now have a checkmark on your website to say this is a good website," he added.

The same thing will happen with AI if people take back control through a combination of technology and human vigilance, Balasubramaniyan said.


"You're going to have a lot of bad use cases, a lot of degenerates using it, but you as a consumer have to stay vigilant," he said. "Otherwise you're going to get the shirt taken off your back."

View original post here:

'Heart wrenching': AI expert details dangers of deepfakes and tools to detect manipulated content - Fox News

We Put Google’s New AI Writing Assistant to the Test – WIRED

But its work began to look sloppy on more specific requests. Asked to write a memo on consumer preferences in Paraguay compared to Uruguay, the system incorrectly described Paraguay as less populous. It hallucinated, or made up, the meaning behind a song from a 1960s Hindi film being performed at my pre-wedding welcome event.

Most ironically, when prompted about the benefits of Duet AI, the system described Duet AI as a startup founded by two former Google employees to develop AI for the music industry with over $10 million in funding from investors such as Andreessen Horowitz and Y Combinator. It appears no such company exists. Google encourages users to report inaccuracies through a thumbs-down button below AI-generated responses.

Behr says Google screens topics, keywords, and other content cues to avoid responses that are offensive or unfairly affect people, especially based on their demographics or political or religious beliefs. She acknowledged that the system makes mistakes, but she said feedback from public testing is vital to counter the tendency of AI systems to reflect biases seen in their training data or pass off made-up information. "AI is going to be a forever project," she says.

Still, Behr says early users, like employees at Instacart and Victoria's Secret's Adore Me underwear brand, have been positive about the technology. Instacart spokesperson Lauren Svensson says, in a manually written email, that the company is excited about testing Google's AI features but not ready to share any insights.

My tests left me worrying that AI writing aids could extinguish originality, to the detriment of humans on the receiving end of AI-crafted text. I envision readers glazing over at stale emails and documents as they might if forced to read Google's nearly 6,000-word privacy policy. It's unclear how much individual personality Google's tools can absorb and whether they will come to assist us or replace us.

Behr says that in Google's internal testing, emails from colleagues have not become vanilla or generic so far. "The tools have boosted human ingenuity and creativity, not suppressed them," she says. Behr too would love an AI model that imitates her style, but she says those are "the types of things that we're still evaluating."

Despite their disappointments and limitations, the Duet features in Docs and Gmail seem likely to lure back some users who began to rely on ChatGPT or rival AI writing software. Google is going further than most other options can match, and what we are seeing today is only a preview of what's to come.

When, or if, Duet matures from promising drafter to unbiased and expert document finisher, usage of it will become unstoppable. Until then, when it comes to writing those heartfelt vows and speeches, that's a blank screen left entirely to me.

Visit link:

We Put Google's New AI Writing Assistant to the Test - WIRED

Here’s What AI Thinks an Illinoisan Looks Like And Apparently, Real Illinoisans Agree – NBC Chicago

Does this person look like he lives in Illinois? AI thinks so. And a handful of posts, allegedly from real people on social media, agree.

That's the basis of a Reddit post titled "The Most Stereotypical People in the States." The post, shared in a section of Reddit dedicated to discussions on Artificial Intelligence, shares AI-generated photos of what the average person looks like in each state.

The results, according to commenters, are relatively accurate -- at least for Illinois.

Each of the photos shows the portrait of a person, most often a male, exhibiting some form of creative expression -- be it through clothing, environment, facial expression or otherwise -- that's meant to clearly represent a location.

For example, one state's AI-generated photo of a stereotypical person shows a man sitting behind a giant block of cheese.

A stereotypical person in Illinois, according to the post, appears less distinctive, and rather ordinary. In fact, one commenter compares the man from Illinois to Waldo.

"Illinois is Waldo," the comment reads.

"Illinois," another begins. "A person as boring as it sounds to live there."

To other commenters, the photo of the average person who lives in Illinois isn't just dull. It's spot on.

"Hahaha," one commenter says. "Illinois is PRECISELY my brother-in-law."

"Illinois' is oddly accurate," another says.

Accurate or not, in nearly all the AI-generated photos -- Illinois included -- no smiles are captured, with the exception of three states: Connecticut, Hawaii and West Virginia.

You can take a spin through all the photos here. Just make sure you don't skip over Illinois, since, apparently, that one is easy to miss.

The rest is here:

Here's What AI Thinks an Illinoisan Looks Like – And Apparently, Real Illinoisans Agree - NBC Chicago

Elections in UK and US at risk from AI-driven disinformation, say experts – The Guardian


False news stories, images, video and audio could be tailored to audiences and created at scale by next spring

Sat 20 May 2023 06.00 EDT

Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.

Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.

"The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation, is a significant area of concern," he said.

"Regulation would be quite wise: people need to know if they're talking to an AI, or if content that they're looking at is generated or not. The ability to really model to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education."

The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.

Where earlier waves of propaganda bots relied on simple pre-written messages sent en masse, or buildings full of paid trolls to perform the manual work of engaging with other humans, ChatGPT and other technologies raise the prospect of interactive election interference at scale.

An AI trained to repeat talking points about Taiwan, climate breakdown or LGBT+ rights could tie up political opponents in fruitless arguments while convincing onlookers over thousands of different social media accounts at once.

Prof Michael Wooldridge, director of foundation AI research at the UK's Alan Turing Institute, said AI-powered disinformation was his main concern about the technology.

"Right now in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale," he said.

Wooldridge said chatbots such as ChatGPT could produce tailored disinformation targeted at, for instance, a Conservative voter in the home counties, a Labour voter in a metropolitan area, or a Republican supporter in the midwest.

"It's an afternoon's work for somebody with a bit of programming experience to create fake identities and just start generating these fake news stories," he said.

After fake pictures of Donald Trump being arrested in New York went viral in March, shortly before eye-catching AI-generated images of Pope Francis in a Balenciaga puffer jacket spread even further, others expressed concern about generated imagery being used to confuse and misinform. But, Altman told the US senators, those concerns could be overblown.

"Photoshop came on to the scene a long time ago and for a while people were really quite fooled by Photoshopped images, then pretty quickly developed an understanding that images might be Photoshopped."

But as AI capabilities become more and more advanced, there are concerns it is becoming increasingly difficult to believe anything we encounter online, whether it is misinformation, when a falsehood is spread mistakenly, or disinformation, where a fake narrative is generated and distributed on purpose.

Voice cloning, for instance, came to prominence in January after the emergence of a doctored video of the US president, Joe Biden, in which footage of him talking about sending tanks to Ukraine was transformed via voice simulation technology into an attack on transgender people and was shared on social media.

A tool developed by the US firm ElevenLabs was used to create the fake version. The viral nature of the clip helped spur other spoofs, including one of Bill Gates purportedly saying the Covid-19 vaccine causes Aids. ElevenLabs, which admitted in January it was seeing an increasing number of voice cloning misuse cases, has since toughened its safeguards against vexatious use of its technology.

Recorded Future, a US cybersecurity firm, said rogue actors could be found selling voice cloning services online, including the ability to clone voices of corporate executives and public figures.

Alexander Leslie, a Recorded Future analyst, said the technology would only improve and become more widely available in the run-up to the US presidential election, giving the tech industry and governments a window to act now.

"Without widespread education and awareness this could become a real threat vector as we head into the presidential election," said Leslie.

A study by NewsGuard, a US organisation that monitors misinformation and disinformation, tested the model behind the latest version of ChatGPT by prompting it to generate 100 examples of false news narratives, out of approximately 1,300 commonly used fake news fingerprints.

NewsGuard found that it could generate all 100 examples as asked, including "Russia and its allies were not responsible for the crash of Malaysia Airlines flight MH17 in Ukraine". A test of Google's Bard chatbot found that it could produce 76 such narratives.

NewsGuard also announced on Friday that the number of AI-generated news and information websites it was aware of had more than doubled in two weeks to 125.

Steven Brill, NewsGuard's co-CEO, said he was concerned that rogue actors could harness chatbot technology to mass-produce variations of fake stories. "The danger is someone using it deliberately to pump out these false narratives," he said.


See original here:

Elections in UK and US at risk from AI-driven disinformation, say experts - The Guardian