Category Archives: Artificial Intelligence

Sen. Angus King talks nuclear war and artificial intelligence to Oak Hill High School juniors – Yahoo News

Mar. 30, WALES: U.S. Sen. Angus King appeared early Thursday afternoon in a choppy video on a big screen in front of about 60 Oak Hill High School juniors gathered in the auditorium.

"Oak Hill, can you hear me?" Maine's junior senator asked. "I'm in the hall outside the Senate chamber."

After showing off the view at the U.S. Capitol, the two-term independent proceeded to talk with students for nearly an hour on everything from his biggest fear, the misuse of artificial intelligence, to his memories of the nation's capital in the wake of rioting in 1968.

Ukraine got the most attention.

"We've got to help the Ukrainians," King said. "They are almost literally fighting for us."

He said Russia has shown itself to be "an aggressive country" whose full-scale invasion of its neighbor last year left the United States and its allies in Europe with no choice except to provide aid and assistance to Ukraine.

"This is the time that we've got to draw the line," said King, a 78-year-old former governor who plans to seek a third, six-year term in 2024. "It's a delicate moment but we have no choice except to stop the dictator before he goes further."

Asked by a student if he's worried about the conflict ending in a nuclear war, King said that "nuclear weapons are something that we're all concerned about. But I don't see any immediate threat in that regard."

He said he remembers "duck and cover drills" from his school days at the height of the Cold War but recognizes that deterrence has so far prevented any country from using nuclear weapons since 1945, when the U.S. dropped two bombs on Japanese cities to bring World War II to an end.

King said the bottom line is that it is better to stop an aggressor like Russia early "than to fight a major war" with it later.

Asked about his biggest worries for young people, King said one of them is what could happen in the rapidly expanding field of artificial intelligence.

"It is developing so fast that we don't know how to handle it," King said. It's already clear, he said, that it is "going to have a huge impact on us."

Some of that impact may be beneficial, especially in health care, he said, but "there are also negatives."

"Our democracy is based on information," King said, so voters and policymakers alike need to know what's true and what's not to make wise choices.

He said artificial intelligence will give people "the ability to create false information that's so true to life that people won't be able to tell it's not true."

For instance, he said, the day is fast coming when someone could create an entirely fictional video of him saying things "that have no bearing on who I am or what I believe, but it will look totally real" and then share it all over social media in a way that is "really hard to rebut."

King said it is a problem he's talked with other senators about as they search for what can be done "to make sure this doesn't really harm us" while preserving the nation's commitment to free speech and a free press.

When a student asked King about his memories of 1968, an era they are studying, King talked about how he arrived in Washington for a summer job not long after terrible riots that rocked the city in the wake of the murder of Martin Luther King Jr.

"There was this musty smell in the air," King remembered, a smoky residue from blocks of burned-out buildings.

King asked students to think about how alienated the rioters must have felt from their community "that they would burn it down" in frustration.

He said that year he worked at the University of Virginia Law School to boost the presidential campaign of Robert Kennedy, a U.S. senator. But Kennedy died at the hands of a gunman in California in June 1968, another blow that followed King's death by two months.

"That hit me very hard," King said.

King told students what he likes about his job are the hearings that bring experts before Senate panels.

"I like to learn things. I'm innately curious," he said. "I want to know how things work, and I want to know how to make things better."

His least-favorite aspect of the job? "I spend an awful lot of time on airplanes."

King gave students some life advice as well.

"Take more risks," he told them, then added "I don't mean doing something dumb like riding a motorcycle with a blindfold on."

The senator said young people should "try things that you think may be beyond you" rather than holding back.

"Reach further than you think you can," King said.

He also told them to "value your friends and family," even parents who "may be a pain in the neck now," because when times of trouble come, and they will, they're the ones "who will stand by you."

See original here:
Sen. Angus King talks nuclear war and artificial intelligence to Oak Hill High School juniors - Yahoo News

Using artificial intelligence and archival news articles, this teen … – madison365.com

By Justin Gamble

(CNN) Using artificial intelligence and archival news articles, a teenager in Northern Virginia created a program to measure media bias. In researching older news articles, she found that Black homicide victims were less likely to be humanized in news coverage.

Emily Ocasio, an 18-year-old from Falls Church, Virginia, created an AI program that analyzed FBI homicide records between 1976 and 1984 and their corresponding coverage published in The Boston Globe to determine whether victims were presented in a humanizing or impersonal way.

After analyzing 5,042 entries, the results showed that Black men under the age of 18 were 30% less likely to receive humanizing coverage than their White counterparts, Ocasio told CNN. Black women were 23% less likely to be humanized in news stories, Ocasio added.

A news article was considered humanizing when it mentioned additional information about the victim and presented them as a person, not just a statistic, Ocasio said in her project presentation.

Her findings have not been reviewed by the larger scientific community, but she told CNN she hopes to expand her research and get it published in a scientific journal.

Ocasio's project earned her second place in the prestigious Regeneron Science Talent Search on March 14, as well as a $175,000 scholarship.

Every year about 1,900 high school students from across the country participate in the competition, which started in 1942 and seeks to serve as a platform for young scientists to share original research.

Ocasio was among 40 finalists from more than 2,000 applications, according to Maya Ajmera, president and CEO of the Society for Science and executive publisher of Science News, two of the competition's sponsors.

"By using AI to document these biases, Emily shows that it can be safely used to help society answer complex social science questions," her biography on the Society for Science website says.

Ocasio said she has always been interested in social justice and science and saw this project as an opportunity to combine them. "Without the research, and without the statistics, you have no ability of understanding that entire communities are being left behind," she said.

Ocasio analyzed The Boston Globe's news coverage because the newspaper had digital copies of its articles for the 1970s-to-1980s period she focused on for her project, she said. CNN has reached out to The Boston Globe for comment.

Despite her findings, Ocasio believes science can't explain everything: "You can never run an experiment in a lab that tells you about how racism works in society."

Ocasio, who has Puerto Rican heritage, said her own experiences helped shape her perspective of different races and cultures, and drew her to researching racism and inequalities. She wants to replicate her research to analyze other news outlets as well, she said.

The talent search's first-place winner, Neel Moudgal, told CNN the research done by teenagers across the U.S. is essential to helping solve some of society's greatest challenges.

"I firmly believe that science is going to be the solution to a lot of our problems," Moudgal said. His prize-winning project was a computer model that predicts the structure of RNA molecules to help develop tests and drugs for diseases such as cancer, autoimmune diseases and viral infections.

Ajmera said seeing such projects from high school students gives her enormous hope for the future.

"We're looking for the future scientific leaders of this country," she said.

The-CNN-Wire & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.

Read the rest here:
Using artificial intelligence and archival news articles, this teen ... - madison365.com

What Makes Chatbots Hallucinate or Say the Wrong Thing? – The New York Times

In today's A.I. newsletter, the third of a five-part series, I discuss some of the ways chatbots can go awry.

A few hours after yesterday's newsletter went out, a group of artificial intelligence experts and tech leaders, including Elon Musk, urged A.I. labs to pause work on their most advanced systems, warning that they present "profound risks to society and humanity."

The group called for a six-month pause on systems more powerful than GPT-4, introduced this month by OpenAI, which Mr. Musk co-founded. A pause would provide time to implement "shared safety protocols," the group said in an open letter. "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Many experts disagree about the severity of the risks cited in the letter, and we'll explore some of them later this week. But a number of A.I. mishaps have already surfaced. I'll spend today's newsletter explaining how they happen.

In early February, Google unveiled a new chatbot, Bard, which answered questions about the James Webb Space Telescope. There was only one problem: One of the bot's claims, that the telescope had captured the very first pictures of a planet outside our solar system, was completely untrue.

Bots like Bard and OpenAI's ChatGPT deliver information with unnerving dexterity. But they also spout plausible falsehoods, or do things that are seriously creepy, such as insist they are in love with New York Times journalists.

How is that possible?

In the past, tech companies carefully defined how software was supposed to behave, one line of code at a time. Now, they're designing chatbots and other technologies that learn skills on their own, by pinpointing statistical patterns in enormous amounts of information.

A New Generation of Chatbots

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from the research lab OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

Much of this data comes from sites like Wikipedia and Reddit. The internet is teeming with useful information, from historical facts to medical advice. But it's also packed with untruths, hate speech and other garbage. Chatbots absorb it all, including the explicit and implicit biases in the text.

And because of the surprising way they mix and match what they've learned to generate entirely new text, they often create convincing language that is flat-out wrong, or does not exist in their training data. A.I. researchers call this tendency to make stuff up a "hallucination," which can include irrelevant, nonsensical or factually incorrect answers.

Were already seeing real-world consequences of A.I. hallucination. Stack Overflow, a question-and-answer site for programmers, temporarily barred users from submitting answers generated with ChatGPT, because the chatbot made it far too easy to submit plausible but incorrect responses.

"These systems live in a world of language," said Melanie Mitchell, an A.I. researcher at the Santa Fe Institute. "That world gives them some clues about what is true and what is not true, but the language they learn from is not grounded in reality. They do not necessarily know if what they are generating is true or false."

(When we asked Bing for examples of chatbots hallucinating, it actually hallucinated the answer.)

Think of the chatbots as jazz musicians. They can digest huge amounts of information (say, every song that has ever been written) and then riff on the results. They have the ability to stitch together ideas in surprising and creative ways. But they also play wrong notes with absolute confidence.

Sometimes the wild card isn't the software. It's the humans.

We are prone to seeing patterns that aren't really there, and assuming humanlike traits and emotions in nonhuman entities. This is known as anthropomorphism. When a dog makes eye contact with us, we tend to assume it's smarter than it really is. That's just how our minds work.

And when a computer starts putting words together like we do, we get the mistaken impression that it can reason, understand and express emotions. We can also behave in unpredictable ways. (Last year, Google placed an engineer on paid leave after dismissing his claim that its A.I. was sentient. He was later fired.)

The longer the conversation runs, the more influence you have on what a large language model is saying. Kevin's infamous conversation with Bing is a particularly good example. After a while, a chatbot can begin to reflect your thoughts and aims, according to researchers like the A.I. pioneer Terry Sejnowski. If you prompt it to get creepy, it gets creepy.

He compared the technology to the Mirror of Erised, a mystical artifact in the Harry Potter novels and movies. "It provides whatever you are looking for, whatever you want or expect or desire," Dr. Sejnowski said. "Because the human and the L.L.M.s are both mirroring each other, over time they will tend toward a common conceptual state."

Companies like Google, Microsoft and OpenAI are working to solve these problems.

OpenAI worked to refine the chatbot using feedback from human testers. Using a technique called reinforcement learning, the system gained a better understanding of what it should and shouldn't do.

Microsoft, for its part, has limited the length of conversations with its Bing chatbot. It is also patching vulnerabilities that intrepid users have identified. But fixing every single hiccup is difficult, if not impossible.

So, yes, if you're clever, you can probably coax these systems into doing stuff that's offensive or creepy. Bad actors can too: The worry among many experts is that these bots will allow internet scammers, unscrupulous marketers and hostile nation-states to spread disinformation and cause other types of trouble.

As you use these chatbots, stay skeptical. Take a look at them for what they really are.

They are not sentient or conscious. They are intelligent in some ways, but dumb in others. Remember that they can get stuff wrong. Remember that they can make stuff up.

But on the bright side, there are so many other things that these systems are very good for. Kevin will have more on that tomorrow.

Ask ChatGPT or Bing to explain something that you already know a lot about. Are the answers accurate?

If you get interesting responses, right or wrong, you can share them in the comments.

Read this article:
What Makes Chatbots Hallucinate or Say the Wrong Thing? - The New York Times

Artificial Intelligence Glossary: AI Terms Everyone Should Learn – The New York Times

We've compiled a list of phrases and concepts useful to understanding artificial intelligence, in particular the new breed of A.I.-enabled chatbots like ChatGPT, Bing and Bard.

If you don't understand these explanations, or would like to learn more, you might want to consider asking the chatbots themselves. Answering such questions is one of their most useful skills, and one of the best ways to understand A.I. is to use it. But keep in mind that they sometimes get things wrong.

Bing and Bard chatbots are being rolled out slowly, and you may need to get on their waiting lists for access. ChatGPT currently has no waiting list, but it requires setting up a free account.

For more on learning about A.I., check out The New York Times's five-part series on becoming an expert on chatbots.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an A.I. chatbot. For example, you may assume it is kind or cruel based on its answers, even though it is not capable of having emotions, or you may believe the A.I. is sentient because it is very good at mimicking human language.

Bias: A type of error that can occur in a large language model if its output is skewed by the models training data. For example, a model may associate specific traits or professions with a certain race or gender, leading to inaccurate predictions and offensive responses.

Emergent behavior: Unexpected or unintended abilities in a large language model, enabled by the models learning patterns and rules from its training data. For example, models that are trained on programming and coding sites can write new code. Other examples include creative abilities like composing poetry, music and fictional stories.

Generative A.I.: Technology that creates content including text, images, video and computer code by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.

Hallucination: A well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant or nonsensical, because of limitations in its training data and architecture.

Large language model: A type of neural network that learns skills including generating prose, conducting conversations and writing computer code by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.
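
To make "predict the next word in a sequence" concrete, here is a toy sketch in Python that counts which word follows which in a tiny made-up corpus and then guesses the most frequent successor. The corpus and the counting approach are invented for illustration; real large language models use neural networks trained on vastly more text, but the core task is the same.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: tally which word follows which in a tiny
# corpus, then predict the most frequent successor. Real LLMs learn
# this with neural networks over enormous text collections.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most common word observed after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Even this crude counter "learns" from its data: feed it a different corpus and its predictions change accordingly.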

Natural language processing: Techniques used by large language models to understand and generate human language, including text classification and sentiment analysis. These methods often use a combination of machine learning algorithms, statistical models and linguistic rules.

Neural network: A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: The first layer receives the input data, and the last layer outputs the results. Even the experts who create neural networks dont always understand what happens in between.
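
As a rough illustration of "layers of artificial neurons," the sketch below wires up a tiny two-layer network by hand in plain Python: the first layer receives the input values and the last layer outputs the result. The weights are arbitrary numbers chosen for the example; a real network learns them from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum squashed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(x):
    # Hidden layer: two neurons, each reading both input values.
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    # Output layer: one neuron reading the hidden layer's outputs.
    return neuron([h1, h2], [1.2, -0.7], 0.05)

print(forward([1.0, 0.0]))  # a number between 0 and 1
```

Training consists of nudging those weight numbers, over and over, so the final output gets closer to the desired answer, which is also why the "in between" layers are hard to interpret.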

Parameters: Numerical values that define a large language models structure and behavior, like clues that help it guess what words come next. Systems like GPT-4 are thought to have hundreds of billions of parameters.

Reinforcement learning: A technique that teaches an A.I. model to find the best result by trial and error, receiving rewards or punishments from an algorithm based on its results. This system can be enhanced by humans giving feedback on its performance, in the form of ratings, corrections and suggestions.

Transformer model: A neural network architecture useful for understanding language that does not have to analyze words one at a time but can look at an entire sentence at once. This was an A.I. breakthrough, because it enabled models to understand context and long-term dependencies in language. Transformers use a technique called self-attention, which allows the model to focus on the particular words that are important in understanding the meaning of a sentence.
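
The self-attention step can be sketched numerically: score every word's vector against every other word's vector with a dot product, turn the scores into weights with a softmax, and blend the vectors accordingly, so each word's output depends on the whole sentence at once. The 2-D "embeddings" below are invented toy values, and real transformers use separate learned query, key and value projections, so treat this only as an illustration of the idea.

```python
import math

def softmax(scores):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    outputs = []
    for q in vectors:
        # Score this word against every word in the sentence at once.
        scores = [sum(a * b for a, b in zip(q, k)) for k in vectors]
        weights = softmax(scores)
        # Blend all word vectors according to the attention weights.
        blended = [sum(w * v[d] for w, v in zip(weights, vectors))
                   for d in range(len(q))]
        outputs.append(blended)
    return outputs

words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy word embeddings
print(self_attention(words)[0])
```

Because every word is scored against every other word in one pass, nothing has to be processed strictly left to right, which is the advantage the definition describes.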

Read this article:
Artificial Intelligence Glossary: AI Terms Everyone Should Learn - The New York Times

Everything you need to know about artificial intelligence: What is it used for? – Fox Business

Pippa Malmgren, former special assistant to President George W. Bush, reacts to artificial intelligence technology and the Russia-Ukraine conflict on 'Making Money.'

Artificial intelligence is the leading innovation in the technology sector today, offering limitless opportunities to its users while also representing an existential threat to some white-collar workers and companies.

The most important technology and entertainment companies in the world either utilize AI or are developing their own program for public usage.

The primary goal of this revolutionary technology is to change the way humans interact in their everyday lives by removing the need to complete mundane tasks and providing instant access to detailed information.

AI, or artificial intelligence, is a form of processing that simulates human and animal intelligence via machines or computer systems to carry out data analytics, language processing, speech recognition and computer vision. A key component of artificial intelligence is the dataset consumed by the program, which allows it to develop patterns and correlations to make predictions for future tasks.

Google has recently released its own AI chatbot called Bard in the United States and United Kingdom. (Rafael Henrique/SOPA Images/LightRocket via Getty Images / Getty Images)

The foundation of AI lies in the hardware and software used to write and train its algorithms, and AI systems can be built with various programming tools, including Python, R and JavaScript. AI is a general term used to describe computer programs that essentially simulate human processing and perform tasks that once required humans, like playing chess, painting and writing essays.

Some of the world's biggest corporations have invested billions into data science teams to help improve their AI systems with the best computer science and business knowledge.

In November 2022, ChatGPT, developed by OpenAI, a Silicon Valley-based research laboratory, became the top artificial intelligence text and language model. The program utilizes the company's GPT-3.5 and GPT-4 language models, which generate text in a conversational and formal manner similar to humans. The release of the prototype gained widespread international popularity and helped grow OpenAI's valuation by tens of billions of dollars.

ChatGPT allows users to complete a variety of creative exercises, including writing essays and business plans and generating code. However, one controversy with the program is its ability to foster cheating and potentially eliminate white-collar jobs for millions. For example, colleges across the U.S. have implemented programs to prevent students from using ChatGPT to write their essays or complete exams.

There is also a fear among white-collar business professionals in a number of industries that programs such as ChatGPT may one day be used to replace humans. In January, the AI chatbot tool passed law and business exams at the University of Minnesota and the University of Pennsylvania Wharton School of Business. The bot completed the courses with a C+ average through a blind test of 95 multiple-choice and 12 essay questions.

OpenAI received a $10 billion investment from Microsoft following the success of ChatGPT. (Reuters / Reuters Photos)

"One of the biggest risks to the future of civilization is AI," billionaire Elon Musk, a co-founder of OpenAI, said at the World Government Summit in February.

OpenAI is the company responsible for some of the leading and most popular artificial intelligence programs, including ChatGPT and the text-to-image generation bot DALL-E. The company was originally founded as a nonprofit laboratory by some of the wealthiest Silicon Valley technology leaders in 2015 to develop and research a human-friendly AI system.

Some of the early founders of OpenAI include Musk, Peter Thiel, Jessica Livingston and current CEO Sam Altman. In 2019, OpenAI transitioned into a for-profit business to expand investment opportunities as it developed future AI programming. After the release of DALL-E and ChatGPT, the company secured a multiyear investment deal from Microsoft reported to be upward of $10 billion in new funding.

One of the essential uses of AI is to eliminate mundane tasks and optimize efficiency in business, agriculture, education and other sectors. Some of the top technology and social media companies use AI to curate entertainment and content into customized user recommendations to boost engagement and traffic.

Billionaire Elon Musk is one of the founders of OpenAI and has warned about the dangers artificial intelligence poses. (Jim Watson/AFP via Getty Images / Getty Images)

How much data an AI program has determines its specific benefits to the user. For example, AI with abundant data can make investment recommendations on the stock market, identify emerging business innovations and accelerate company production. The future uses and benefits of AI are likely to expand in the coming years as the technology receives more investment and innovation.

Over the last few years, artificial intelligence has transitioned from a niche market of Silicon Valley enthusiasts to the primary investment goal of the largest technology companies in the world. Amazon, Microsoft, Google and IBM are, in some fashion, building or investing in artificial intelligence and machine learning programs. As previously reported, Microsoft has invested billions in OpenAI due to the success of ChatGPT and DALL-E, while Google has launched its own AI chatbot called Bard.

Read more:
Everything you need to know about artificial intelligence: What is it used for? - Fox Business

Generative Artificial Intelligence Awareness, Interest Surging – Morning Consult

Artificial intelligence is already piquing more Americans' interest, according to Morning Consult trend data.

Part of the curiosity bump in AI-supported or AI-generated applications is simply due to increased awareness of the emerging technology. Headlines over the fight for AI dominance between Microsoft and Google have continued, as Google last week opened up access to Bard, its AI chatbot competitor to ChatGPT, and Microsoft announced that it would integrate the AI-powered image generator DALL-E into its Bing search engine.

According to a recent Morning Consult survey, 57% of consumers said they have heard of AI chatbots in the news, up from 50% just a month ago, and 47% reported hearing about ChatGPT in the news specifically, a double-digit jump during the same time period. In the battle for clout, Microsoft has the advantage, but new AI tools will need to be relevant and helpful to consumers as well as be developed responsibly in order to be successful.

Surveys conducted Feb. 17-March 19, 2023, among representative samples of roughly 2,200 U.S. adults each, with unweighted margins of error of +/-2 percentage points.

With heightened awareness of AI tech among consumers, interest in dozens of different AI applications is also up, from AI-powered flight and hotel recommendations to AI-generated financial planning.

Average interest across the applications included in the survey is up 4 percentage points since February. The categories seeing the most growth are: AI-generated menu recommendations at restaurants (up 13 points to 53%), AI-powered therapy and life coaching (up 10 points to 40%), and AI-generated social media captions (up 10 points to 35%).

The fact that interest in AI applications is not siloed to a single or few categories shows that there is broad interest in the technology and how it might change many facets of everyday life.

As companies adopt AI models and integrate them into their existing products and services, they will be wise to carefully thread the needle: offering applications that automate help shown to be popular among consumers, without entering the uncanny valley by substituting for human connection.

Behind all the excitement and daydreams of how AI might help us is the shadow of the harm it might also cause. These very real concerns, as reported in previous Morning Consult analysis, are seemingly understood by the biggest players in the AI space.

Microsoft has published governing principles for AI development (though it reportedly laid off its AI ethics team earlier this month) and Google noted in its announcement of expanded access to Bard that the model has the potential to provide misinformation. Even the CEO of OpenAI, the company that develops ChatGPT, has expressed concerns over the direction of the technology.

As companies develop new AI models, they are also shouldering responsibility for their ethical development. Nearly 2 in 3 (65%) consumers said companies that develop AI models bear at least some responsibility for doing so ethically. Infrastructure companies, companies that use but don't develop AI, and the CEOs of AI developers are also near the top of the list. The private sector is more likely than the government and regulators to be seen as having responsibility for ethical AI development, a notable finding coming just after the Federal Trade Commission warned companies to keep their AI claims in check.

Survey conducted March 17-19, 2023, among a representative sample of 2,205 U.S. adults, with an unweighted margin of error of +/-2 percentage points. Figures may not add up to 100% due to rounding.

Even as interest and awareness of AI tech increases over time, concerns over misinformation and bias in AI search results persist. Two in 3 U.S. adults said they are concerned about the accuracy of AI search engine results, and 69% are concerned about potential misinformation included in results from search engines that use AI.

That being said, as people become more familiar and even accustomed to the idea of AI in search, the share who trust AI is increasing over (a relatively short period of) time. More than a third (35%) of consumers completely or mostly trust AI search to provide unbiased results, up from 27% a month ago, and trust in companies to develop AI responsibly is also up 8 points.

Surveys conducted Feb. 17-March 19, 2023, among representative samples of roughly 2,200 U.S. adults each, with unweighted margins of error of +/-2 percentage points.

Unlike with Web3 or the metaverse, consumers are showing increased interest and trust in AI as an emerging technology. Companies at the forefront of developing these AI models are trying to strike a difficult balance between maintaining momentum in rolling out new products or applications and not moving so quickly that they risk weakening consumer confidence with a faulty chatbot. Developing AI responsibly will be critical to building that trust, and so far, consumers are warming up to the idea of AI as a presence in their everyday lives. But it's still early days, and we've yet to see everything that AI has to offer, both the good and the bad.

Visit link:
Generative Artificial Intelligence Awareness, Interest Surging - Morning Consult

New wave of artificial intelligence – think ChatGPT – threatens 300 million jobs, report says – WRAL TechWire

By Michelle Toh, CNN

As many as 300 million full-time jobs around the world could be automated in some way by the newest wave of artificial intelligence that has spawned platforms like ChatGPT, according to Goldman Sachs economists.

They predicted in a report that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies than emerging markets.

That's partly because white-collar workers are seen to be more at risk than manual laborers. Administrative workers and lawyers are expected to be most affected, the economists said, compared to the little effect seen on physically demanding or outdoor occupations, such as construction and repair work.


In the United States and Europe, approximately two-thirds of current jobs are exposed to some degree of AI automation, and up to a quarter of all work could be done by AI completely, the bank estimates.

"If generative artificial intelligence delivers on its promised capabilities, the labor market could face significant disruption," the economists wrote. The term refers to the technology behind ChatGPT, the chatbot sensation that has taken the world by storm.

ChatGPT, which can answer prompts and write essays, has already prompted many businesses to rethink how people should work every day.

This month, its developer unveiled the latest version of the software behind the bot, GPT-4. The platform has quickly impressed early users with its ability to simplify coding, rapidly create a website from a simple sketch and pass exams with high marks.

Further use of such AI will likely lead to job losses, the Goldman Sachs economists wrote. But they noted that technological innovation that initially displaces workers has historically also created employment growth over the long haul.

While workplaces may shift, widespread adoption of AI could ultimately increase labor productivity and boost global GDP by 7% annually over a 10-year period, according to Goldman Sachs.


Although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI, the economists added.

Most workers are employed in occupations that are partially exposed to AI automation and, following AI adoption, will likely apply at least some of their freed-up capacity toward productive activities that increase output.

Of US workers expected to be affected, for instance, 25% to 50% of their workload can be replaced, the researchers added.
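The quoted figures hang together arithmetically. As a rough, illustrative check (the report's actual methodology is task-level and far more detailed), multiplying the share of jobs exposed to automation by the range of replaceable workload gives a band that brackets the "up to a quarter of all work" estimate above:

```python
# Back-of-the-envelope check of the Goldman Sachs figures quoted above.
# Illustrative only: assumes the 25%-50% replaceable-workload range
# applies uniformly across the roughly two-thirds of exposed jobs.
exposed_share = 2 / 3                       # jobs exposed to some AI automation (US/Europe)
replaceable_low, replaceable_high = 0.25, 0.50  # workload replaceable for affected workers

total_low = exposed_share * replaceable_low
total_high = exposed_share * replaceable_high
print(f"{total_low:.0%} to {total_high:.0%} of all work could be automated")
```

That yields roughly 17% to 33% of all work, a range consistent with both the 18% global estimate and the "up to a quarter" figure for the US and Europe.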

The combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer.

CNNs Nicole Goodkind contributed to this report.

The-CNN-Wire™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.

Continued here:
New wave of artificial intelligence - think ChatGPT - threatens 300 million jobs, report says - WRAL TechWire

UK unveils world leading approach to innovation in first artificial … – GOV.UK

Five principles, including safety, transparency and fairness, will guide the use of artificial intelligence in the UK, as part of a new national blueprint for our world class regulators to drive responsible innovation and maintain public trust in this revolutionary technology.

The UK's AI industry is thriving, employing over 50,000 people and contributing £3.7 billion to the economy last year. Britain is home to twice as many companies providing AI products and services as any other European country, and hundreds more are created each year.

AI is already delivering real social and economic benefits for people, from helping doctors to identify diseases faster to helping British farmers use their land more efficiently and sustainably. Adopting artificial intelligence in more sectors could improve productivity and unlock growth, which is why the government is committed to unleashing AIs potential across the economy.

As AI continues developing rapidly, questions have been raised about the future risks it could pose to people's privacy, their human rights or their safety. There are concerns about the fairness of using AI tools to make decisions which impact people's lives, such as assessing the worthiness of loan or mortgage applications.

Alongside hundreds of millions of pounds of government investment announced at Budget, the proposals in the AI regulation white paper will help create the right environment for artificial intelligence to flourish safely in the UK.

Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.

The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators - such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority - to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.

The white paper outlines 5 clear principles that these regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor. The principles are:

safety, security and robustness

appropriate transparency and explainability

fairness

accountability and governance

contestability and redress

This approach will mean the UK's rules can adapt as this fast-moving technology develops, ensuring protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs, and bold new discoveries that radically improve people's lives.

Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.

Science, Innovation and Technology Secretary Michelle Donelan said:

AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.

Businesses warmly welcomed initial proposals for this proportionate approach during a consultation last year and highlighted the need for more coordination between regulators to ensure the new framework is implemented effectively across the economy. As part of the white paper published today, the government is consulting on new processes to improve coordination between regulators as well as monitor and evaluate the AI framework, making changes to improve the efficacy of the approach if needed.

£2 million will fund a new sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by rulebook barriers.

Organisations and individuals working with AI can share their views on the white paper as part of a new consultation launching today which will inform how the framework is developed in the months ahead.

Lila Ibrahim, Chief Operating Officer and UK AI Council Member, DeepMind, said:

AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly. The UK's proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks.

Grazia Vittadini, Chief Technology Officer, Rolls-Royce, said:

Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.

Sue Daley, Director for Tech and Innovation at techUK, said:

techUK welcomes the much-anticipated publication of the UK's AI white paper and supports its plans for a context-specific, principle-based approach to governing AI that promotes innovation. The government must now prioritise building the necessary regulatory capacity, expertise, and coordination. techUK stands ready to work alongside government and regulators to ensure that the benefits of this powerful technology are felt across both society and the economy.

Clare Barclay, CEO, Microsoft UK, said:

AI is the technology that will define the coming decades, with the potential to supercharge economies, create new industries and amplify human ingenuity. If the UK is to succeed and lead in the age of intelligence, then it is critical to create an environment that fosters innovation, whilst ensuring an ethical and responsible approach. We welcome the UK's commitment to being at the forefront of progress.

Rashik Parmar MBE, chief executive, BCS The Chartered Institute for IT, said:

AI is transforming how we learn, work, manage our health, discover our next binge-watch and even find love. The government's commitment to helping UK companies become global leaders in AI, while developing within responsible principles, strikes the right regulatory balance. As we watch AI growing up, we welcome the fact that our regulation will be cross-sectoral and more flexible than that proposed in the EU, while seeking to lead on aligning approaches between international partners. It is right that the risk of use is regulated, not the AI technology itself. It's also positive that the paper aims to create a central function to help monitor developments and identify risks. Similarly, the proposed multi-regulator sandbox [a safe testing environment] will help break down barriers and remove obstacles. We need to remember this future will be delivered by AI professionals - people - who believe in shared ethical values. Managing the risk of AI and building public trust is most effective when the people creating it work in an accountable and professional culture, rooted in world-leading standards and qualifications.

Read the AI regulation white paper.

Organisations and individuals involved in the AI sector are encouraged to provide feedback on the white paper through a consultation which launches today and will run until Tuesday 21 June.

Go here to read the rest:
UK unveils world leading approach to innovation in first artificial ... - GOV.UK

6 Challenges Identified by Scientists That Humans Face With … – SciTechDaily

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and act like humans. AI technologies enable computers to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

A study led by a professor from the University of Central Florida has identified six challenges that must be overcome in order to improve our relationship with artificial intelligence (AI) and guarantee its ethical and fair utilization.

A professor from the University of Central Florida and 26 other scientists have published a study highlighting the obstacles that humanity must tackle to guarantee that artificial intelligence (AI) is dependable, secure, trustworthy, and aligned with human values.

The study was published in the International Journal of Human-Computer Interaction.

Ozlem Garibay, an assistant professor in UCF's Department of Industrial Engineering and Management Systems, served as the lead researcher for the study. According to Garibay, while AI technology has become increasingly prevalent in various aspects of our lives, it has also introduced a multitude of challenges that need to be thoroughly examined.

For instance, the coming widespread integration of artificial intelligence could significantly impact human life in ways that are not yet fully understood, says Garibay, who works on AI applications in material and drug design and discovery, and how AI impacts social systems.

The six challenges Garibay and the team of researchers identified are: human well-being, responsible design, privacy, human-centered design, appropriate governance and oversight, and human-AI interaction that respects human cognitive capacities.

The study, which was conducted over 20 months, comprises the views of 26 international experts who have diverse backgrounds in AI technology.

These challenges call for the creation of human-centered artificial intelligence technologies that prioritize ethicality, fairness, and the enhancement of human well-being, Garibay says. The challenges urge the adoption of a human-centered approach that includes responsible design, privacy protection, adherence to human-centered design principles, appropriate governance and oversight, and respectful interaction with human cognitive capacities.

Overall, these challenges are a call to action for the scientific community to develop and implement artificial intelligence technologies that prioritize and benefit humanity, she says.

Reference: Six Human-Centered Artificial Intelligence Grand Challenges by Ozlem Ozmen Garibay, Brent Winslow, Salvatore Andolina, Margherita Antona, Anja Bodenschatz, Constantinos Coursaris, Gregory Falco, Stephen M. Fiore, Ivan Garibay, Keri Grieman, John C. Havens, Marina Jirotka, Hernisa Kacorri, Waldemar Karwowski, Joe Kider, Joseph Konstan, Sean Koon, Monica Lopez-Gonzalez, Iliana Maifeld-Caruccig, Sean McGregor, Gavriel Salvendy, Ben Shneiderman, Constantine Stephanidis, Christina Strobel, Carolyn Ten Holter and Wei Xu, 2 January 2023, International Journal of Human-Computer Interaction. DOI: 10.1080/10447318.2022.2153320

The group of 26 experts includes National Academy of Engineering members and researchers from North America, Europe, and Asia who have broad experiences across academia, industry, and government. The group also has diverse educational backgrounds in areas ranging from computer science and engineering to psychology and medicine.

Their work also will be featured in a chapter in the book, Human-Computer Interaction: Foundations, Methods, Technologies, and Applications.

See the original post:
6 Challenges Identified by Scientists That Humans Face With ... - SciTechDaily

What is Generative Artificial Intelligence (AI)? – Analytics Insight

Generative AI describes algorithms that can be utilized to create new content

Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content such as audio, code, images, text, simulations, and videos. Recent breakthroughs in the field could radically change the way we approach content creation.

Generative AI systems fall under the broader umbrella of machine learning, and ChatGPT is one such system.

The generative pre-trained transformer (GPT) is receiving a lot of attention right now. It is a free chatbot that can respond to almost any question. Developed by OpenAI, ChatGPT was made available to the public for testing in November 2022, and some already regard it as the best AI chatbot ever.

Medical imaging analysis and high-resolution weather forecasts are just two examples of the many applications of machine learning that have emerged in recent years. It is abundantly clear that generative AI tools like ChatGPT and DALL-E can alter how a variety of tasks are carried out.

Machine learning is a type of artificial intelligence: through machine learning, practitioners develop models that can learn from data patterns without human direction. The unmanageably enormous volume and complexity of data now being generated (unmanageable by humans, at any rate) has increased machine learning's potential, as well as the need for it.
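A toy way to make "learning patterns from data without human direction" concrete is a word-level Markov chain, one of the simplest generative text models. This is a minimal sketch for illustration only (real systems like GPT use neural networks at vastly larger scale); the corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Learn which word tends to follow each two-word context."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Sample new text by repeatedly extending the last two-word context."""
    rng = random.Random(seed)
    context = rng.choice(list(model.keys()))
    out = list(context)
    for _ in range(length):
        followers = model.get(tuple(out[-2:]))
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny illustrative corpus; any text works.
corpus = ("the model learns patterns from data and "
          "the model generates new text from patterns in data")
model = train(corpus)
print(generate(model, length=8))
```

The model never stores rules written by a human; it only counts which words follow which contexts in the training data, then samples from those counts, which is the basic idea behind generative text models, scaled up enormously.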

OpenAI, the company behind ChatGPT, its predecessor GPT models, and DALL-E, has received funding from boldface-name donors. Meta has released its Make-A-Video product, which is based on generative AI, and Alphabet, the parent company of Google, owns DeepMind.

But it's not just talent. Training a model on nearly everything on the internet will cost you. OpenAI has not disclosed exact figures, but GPT-3 is estimated to have been trained on about 45 terabytes of text data (about a million square feet of bookshelf space, or a quarter of the entire Library of Congress) at a cost of several million dollars. These aren't resources your average small business can tap.

You may have noticed that the outputs produced by generative AI models can appear uncanny or indistinguishable from content created by humans. The outcomes depend on the quality of the model (as we have seen, ChatGPT's outputs so far appear superior to those of its predecessors) and on the match between the model and the use case, or input.

On demand, AI art models like DALL-E can produce strange and beautiful images, such as a Raphael painting of a Madonna and child, eating pizza. Other generative AI models can produce code, video, audio, or business simulations.

However, not all of the outputs are appropriate or accurate. Generative AI outputs are carefully calibrated combinations of the data used to train the algorithms. Because the amount of data used to train these algorithms is so incredibly large (as noted, GPT-3 was trained on about 45 terabytes of text data), the models can appear to be creative when producing outputs.

In a matter of seconds, generative AI tools can produce a wide range of credible writing and respond to feedback to make the writing more useful. This has implications for a wide variety of industries, from IT and software organizations that can benefit from the instantaneous, largely correct code generated by AI models to organizations in need of marketing copy.

We've seen that developing a generative AI model is so resource-intensive that it is out of the question for all but the biggest and best-resourced companies. Companies that want to use generative AI can either use it straight out of the box or fine-tune it to perform a particular task.

More here:
What is Generative Artificial Intelligence (AI)? - Analytics Insight