Category Archives: Artificial Super Intelligence
Artificial intelligence poses real and present danger, headteachers warn – Yahoo Sport Australia
AI is a rapidly growing area of innovation (PA)
Artificial intelligence poses the greatest danger to education and the Government is responding too slowly to the threat, head teachers have claimed.
AI could bring the biggest benefit since the printing press, but the risks are more severe than any threat that has ever faced schools, according to Epsom College's principal Sir Anthony Seldon.
Leaders from the country's top schools have formed a coalition, led by Sir Anthony, to warn of the "very real and present hazards and dangers" being presented by the technology.
To tackle this, the group has announced the launch of a new body to advise and protect schools from the risks of AI.
They wish for collaboration between schools to ensure that AI serves the best interests of pupils and teachers rather than those of large education technology companies, the Times reported.
The head teachers of dozens of private and state schools support the initiative, including Helen Pike, the master of Magdalen College School in Oxford, and Alex Russell, the chief executive of Bourne Education Trust, which runs nearly 30 state schools.
The potential to aid cheating is a minor concern for head teachers whose fears extend to the impact on children's mental and physical health and the future of the teaching profession.
Professor Stuart Russell, one of the godfathers of AI research, warned last week that ministers were not doing enough to guard against the possibility of a super-intelligent machine wiping out humanity.
Rishi Sunak admitted at the G7 summit this week that "guard-rails" would have to be put around it.
Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Yahoo News
On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.
The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of their own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.
"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."
The panel also included Hovhannes Avoyan, the CEO of Picsart, an image-editing developer powered by AI, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology that allows one person to speak using the voice of another person. The audience of about 150 people was full of AI early adopters: through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.
The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.
Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."
The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.
"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next, Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.
One of the panel's buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he said. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.
A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying 'the algorithm is burying my video,'" Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."
What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?
Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto ventureand, oh yeah, OpenAI – Fortune
OpenAI CEO Sam Altman helped bring ChatGPT to the world, which sparked the current A.I. race involving Microsoft, Google, and others.
But he's busy with other ventures that could be no less disruptive, and are linked in some ways. This week, Microsoft announced a purchasing agreement with Helion Energy, a nuclear fusion startup primarily backed by Altman. And Worldcoin, a crypto startup involving eye scans cofounded by Altman in 2019, is close to securing hefty new investments, according to Financial Times reporting on Sunday.
Before becoming OpenAI's leader, Altman served as president of the startup accelerator Y Combinator, so it's not entirely surprising that he's involved in more than one venture. But the sheer ambition of the projects, both on their own and collectively, merits attention.
Microsoft announced a deal on Wednesday in which Helion will supply it with electricity from nuclear fusion by 2028. That's bold considering nobody is yet producing electricity from fusion, and many experts believe it's decades away.
During a Stripe conference interview last week, Altman said the audience should be excited about the startup's developments and drew a connection between Helion and artificial intelligence.
"If you really want to make the biggest, most capable super intelligent system you can, you need high amounts of energy," he explained. "And if you have an A.I. that can help you move faster and do better material science, you can probably get to fusion a little bit faster too."
He acknowledged the challenging economics of nuclear fusion, but added, "I think we will probably figure it out."
He added, "And probably we will get to a world where in addition to the cost of intelligence falling dramatically, the cost of energy falls dramatically, too. And if both of those things happen at the same time (I would argue that they are currently the two most important inputs in the whole economy) we get to a super different place."
Worldcoin, still in beta but aiming to launch in the first half of this year, is equally ambitious, as Fortune reported in March. If A.I. takes away our jobs and governments decide that a universal basic income is needed, Worldcoin wants to be the distribution mechanism for those payments. If all goes to plan, it'll be bigger than Bitcoin and approved by regulators across the globe.
That might be a long way off, if it ever occurs, but in the meantime the startup might have found a quicker path to monetization with World ID, a kind of badge you receive after being verified by Worldcoin, and a handy way to prove that you're a human rather than an A.I. bot when logging into online platforms. The idea is your World ID would join or replace your usernames and passwords.
The only way to really prove a human is a human, the Worldcoin team decided, was via an iris scan. That led to a small orb-shaped device you look into that converts a biometric scanning code into proof of personhood.
When you're scanned, verified, and onboarded to Worldcoin, you're given 25 proprietary crypto tokens, also called Worldcoins. Well over a million people have already participated, though of course the company aims to have tens and then hundreds of millions joining after beta. Naturally such plans have raised a range of privacy concerns, but according to the FT, the firm is now in advanced talks to raise about $100 million.
We need to prepare for the public safety hazards posed by artificial intelligence – The Conversation
For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber attacks.
However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.
Over the past 20 years, my colleagues and I along with many other researchers have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making.
We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into the risk and emergency management phases: mitigation or prevention, preparedness, response and recovery.
AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.
As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI based technologies. These events can occur in all kinds of industries including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health and mining.
Intentional AI hazards are potential threats that are caused by using AI to harm people and properties. AI can also be used to gain unlawful benefits by compromising security and safety systems.
In my view, this simple intentional and unintentional classification may not be sufficient in the case of AI. Here, we need to add a new class of emerging threats: the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.
Many AI experts have already warned against such potential threats. A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.
Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.
Hazards that have low frequency and low consequence or impact are considered low risk and no additional actions are required to manage them. Hazards that have medium consequence and medium frequency are considered medium risk. These risks need to be closely monitored.
Hazards with high frequency or high consequence, or high in both consequence and frequency, are classified as high risks. These risks need to be reduced by taking additional risk reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
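The classification rules above can be expressed as a small lookup. The sketch below is illustrative only: the three-level scales and the handling of mixed low/medium cases (treated here as medium, to be monitored) are assumptions, not an official emergency-management standard.

```python
# Qualitative risk matrix: classify a hazard by its frequency and
# consequence, each rated "low", "medium" or "high".
LEVELS = {"low": 1, "medium": 2, "high": 3}

def classify_risk(frequency: str, consequence: str) -> str:
    """Return "low", "medium" or "high" for a hazard."""
    f, c = LEVELS[frequency], LEVELS[consequence]
    if f == 3 or c == 3:
        # High frequency OR high consequence: reduce with extra measures.
        return "high"
    if f == 1 and c == 1:
        # Low frequency AND low consequence: no additional action needed.
        return "low"
    # Medium/medium, and (by assumption) mixed low/medium cases: monitor.
    return "medium"
```

For example, a rare but high-consequence AI failure would still land in the high-risk band, which is the argument the article makes for adding AI hazards to these matrices now.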
Up until now, AI hazards and risks have not been added into risk assessment matrices much beyond organizational use of AI applications. The time has come to quickly start bringing potential AI risks into local, national and global risk and emergency management.
AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with AI are starting to emerge.
In 2018, the accounting firm KPMG developed an AI Risk and Controls Matrix. It highlights the risks of using AI by businesses and urges them to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk control measures must be in place before they overwhelm the systems.
Governments have also started developing some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violation of individual rights.
At the government level, the Canadian government issued the Directive on Automated Decision-Making to ensure that federal institutions minimize the risks associated with the AI systems and create appropriate governance mechanisms.
The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced. According to this directive, risk assessments must be conducted by each department to make sure that appropriate safeguards are in place in accordance with the Policy on Government Security.
In 2021, the U.S. Congress tasked the National Institute of Standards and Technology with developing an AI risk management framework for the Department of Defense. The proposed voluntary AI risk assessment framework recommends banning the use of AI systems that present unacceptable risks.
Much of the national-level policy focus on AI has been from national security and global competition perspectives: the national security and economic risks of falling behind in AI technology.
The U.S. National Security Commission on Artificial Intelligence highlighted national security risks associated with AI. These were not the public threats of the technology itself, but the risks of losing out to other countries, including China, in the global competition for AI development.
In its 2017 Global Risk Report, the World Economic Forum highlighted that AI is only one of the emerging technologies that can exacerbate global risk. While assessing the risks posed by AI, the report concluded that, at that time, super-intelligent AI systems remained a theoretical threat.
However, the latest Global Risk Report 2023 does not even mention AI and AI-associated risks, which suggests that the leaders of the global companies that provide inputs to the report had not viewed AI as an immediate risk.
AI development is progressing much faster than government and corporate policies in understanding, foreseeing and managing the risks. The current global conditions, combined with market competition for AI technologies, make it difficult to think of an opportunity for governments to pause and develop risk governance mechanisms.
While we should collectively and proactively work toward such governance mechanisms, we all need to brace for AI's potentially major catastrophic impacts on our systems and societies.
NFL fans outraged after ChatGPT names best football teams since 2000 including a surprise at No 1… – The US Sun
ARTIFICIAL intelligence has infuriated fans across the nation with its top ten best teams since 2000 ranking.
The controversial list has unsurprisingly angered fans on social media, being labeled "the dumbest take on football I've ever seen."
Leading the way in the list created by ChatGPT for NFL on FOX are the 2007 New England Patriots.
A powerhouse team featuring the likes of Tom Brady, Randy Moss, Asante Samuel, Wes Welker, and Vince Wilfork among others, Bill Belichick's team went undefeated until the bitter end.
Eli Manning's New York Giants ultimately got the better of them in Super Bowl XLII, preventing what would have been only the second perfect season in league history.
The Patriots are followed by the 2013 Seattle Seahawks, who were led by then-second-year starting quarterback Russell Wilson.
Pete Carroll's 13-3 Seahawks team went on to hoist the Lombardi Trophy after the joint-third biggest Super Bowl blowout to date (43-8 over Peyton Manning's Denver Broncos).
Sean Peyton's 2009 New Orleans Saints team rounded out the top three.
Led by Drew Brees in his prime, he too beat a Peyton Manning-led team at the Super Bowl as they beat the Indianapolis Colts 31-17.
New England returned in fourth thanks to their 14-2 2016 team, which led Brady to his fifth ring during one of the most infamous comebacks in league history against the Atlanta Falcons at Super Bowl LI.
Ray Lewis and Rod Woodson's legendary 2000 Baltimore Ravens complete the top five, having guided the franchise to a Super Bowl win in just its fifth season since moving from Cleveland.
The second half of the ranking starts with the second non-Super Bowl-winning team, the 2004 Philadelphia Eagles.
They are followed by another team to fall short at the final hurdle despite having a prime Cam Newton leading the way, the 2015 Carolina Panthers.
Loaded with talent, the 2002 Tampa Bay Buccaneers made the list at eight thanks to their 12-4 record and a Super Bowl XXXVII ring.
The 11-5 Pittsburgh Steelers of 2005, featuring the likes of Ben Roethlisberger and Hines Ward follow, with the Patrick Mahomes-led 2019 Kansas City Chiefs closing out the top ten.
In response to the list, one unimpressed fan tweeted: "Woof. Terrible list. The '05 Steelers won in the most unimpressive season of football in recent memory.
"Them and the Seahawks played a dumpster fire Super Bowl. They won even though Roethlisberger's SB stats were:
"9-21, 123 yards, 2 interceptions."
Another said: "Nope. Where are the Peyton Manning led Broncos or Colts? Green Bay has been a perennial playoff/NFC Championship contender for near 20 years.
"Also no Ny Giants that was led by Eli Manning to the Super Bowl 3 different times and winning twice against Brady's Patriots."
As one added: "Can't accept the top team lost the Super Bowl."
While another simply said: "Absolutely not."
These are the top 10 athletes of all time from the state of Iowa, according to ChatGPT – KCCI Des Moines
We asked ChatGPT to name the top 10 athletes of all time from the state of Iowa.
The list the artificial intelligence app came up with had some obvious names on it (Dan Gable) but was missing some considered by most to be among the state's best athletes (Bob Feller, Shawn Johnson).
The results are based on accomplishments and recognition.
Here's what the model came up with. Do you agree with the list?
1. Nile Kinnick - College football player at the University of Iowa and Heisman Trophy winner.
2. Kurt Warner - NFL quarterback and Super Bowl MVP.
3. Dan Gable - Olympic wrestler, coach and motivational speaker.
4. Lolo Jones - Olympic hurdler and bobsledder.
5. Tom Brands - Olympic wrestler and coach.
6. Jason Momoa - Actor and former college football player at the University of Hawaii.
7. Zach Johnson - Professional golfer and Masters Tournament winner.
8. Fred Hoiberg - NBA player and collegiate basketball coach.
9. B.J. Armstrong - NBA player and three-time NBA champion.
10. Chuck Long - College football quarterback at the University of Iowa and college football analyst.
The video above is from a previous report about a Nile Kinnick documentary.