Category Archives: Artificial Super Intelligence

Fast track to AGI: so, what’s the big deal? – Inside Higher Ed

The rapid development and deployment of ChatGPT is one station along the timeline of reaching artificial general intelligence. On Feb. 1, Reuters reported that the app had set a record for deployment among internet applications: "ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study. The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December. 'In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,' UBS analysts wrote in the note."

Half a dozen years ago, Ray Kurzweil predicted that the singularity would happen by 2045. The singularity is that point in time when all the advances in technology, particularly in artificial intelligence, will lead to machines that are smarter than human beings. In the Oct. 5, 2017, issue of Futurism, Christianna Reedy interviewed Kurzweil: "To those who view this cybernetic society as more fantasy than future, Kurzweil points out that there are people with computers in their brains today: Parkinson's patients. That's how cybernetics is just getting its foot in the door, Kurzweil said. And, because it's the nature of technology to improve, Kurzweil predicts that during the 2030s some technology will be invented that can go inside your brain and help your memory."

It seems that we are closer than even an enthusiastic Kurzweil foresaw. Just a week ago, Reuters reported that Elon Musk's Neuralink "received U.S. Food and Drug Administration (FDA) clearance for its first-in-human clinical trial, a critical milestone for the brain-implant startup as it faces U.S. probes over its handling of animal experiments. Musk envisions brain implants could cure a range of conditions including obesity, autism, depression and schizophrenia as well as enabling Web browsing and telepathy."


The exponential development of successive versions of GPT is most impressive, leading one to project that version five may have the wherewithal to support at least some aspects of AGI:

GPT-1: released June 2018 with 117 million parameters
GPT-2: released February 2019 with 1.5 billion parameters
GPT-3: released June 2020 with 175 billion parameters
GPT-4: released March 2023 with a parameter count estimated to be in the trillions
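Taking those figures at face value, a quick calculation shows the scale of each jump. Note that the GPT-4 number is an unconfirmed public estimate (assumed here as roughly one trillion for illustration), so the last ratio is indicative only:

```python
# Published parameter counts per GPT release; the GPT-4 figure is an
# unconfirmed public estimate, not an official number.
param_counts = [
    ("GPT-1 (June 2018)", 117e6),
    ("GPT-2 (Feb 2019)", 1.5e9),
    ("GPT-3 (June 2020)", 175e9),
    ("GPT-4 (Mar 2023)", 1e12),  # assumed ~1 trillion for illustration
]

# Print the growth factor between each successive release.
for (prev_name, prev), (name, curr) in zip(param_counts, param_counts[1:]):
    print(f"{prev_name} -> {name}: ~{curr / prev:.0f}x growth")
```

Run as-is, this reports roughly 13x, 117x and 6x jumps between successive releases.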

Today, we are reading predictions that AGI components will be embedded in ChatGPT version five, which is anticipated to be released in early 2024. Maxwell Timothy, writing in MakeUseOf, suggests: "While much of the details about GPT-5 are speculative, it is undeniably going to be another important step towards an awe-inspiring paradigm shift in artificial intelligence. We might not achieve the much talked about artificial general intelligence, but if it's ever possible to achieve, then GPT-5 will take us one step closer."

Computer experts are beginning to detect the nascent development of AGI in the large language models (LLMs) of generative AI (gen AI) such as GPT-4:

Researchers at Microsoft were shocked to learn that GPT-4 (ChatGPT's most advanced language model to date) can come up with clever solutions to puzzles, like how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable way. Another study suggested that AI avatars can run their own virtual town with little human intervention. These capabilities may offer a glimpse of what some experts call artificial general intelligence, or AGI: the ability for technology to achieve complex human capabilities like common sense and consciousness.

We see glimmers of AGI capabilities in AutoGPT and AgentGPT. These forms of GPT can write and execute their own internally generated prompts in pursuit of a goal stated in an externally supplied prompt. Like an autonomous car, they automatically route and reroute the computer to reach the desired destination or goal.
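As an illustration of that self-prompting pattern, here is a minimal, hedged sketch of an AutoGPT-style loop. The `llm` function is a hypothetical stand-in for a real chat-completion call, not part of any actual AutoGPT API:

```python
# Minimal sketch of an AutoGPT-style loop: the model writes its own next
# prompt (a sub-task) until it judges the externally supplied goal to be met.

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real language-model call."""
    raise NotImplementedError("swap in a real model call here")

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        step = llm(
            f"Goal: {goal}\nSteps completed so far: {history}\n"
            "Reply with the single next action to take, or DONE if the goal is met."
        )
        if step.strip() == "DONE":
            break
        history.append(step)  # a real agent would execute the action here
    return history
```

The routing-and-rerouting analogy lives in the loop: on each iteration the model re-plans from the goal and the record of what has already been done.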

The concerns come with reports that some experimental forms of AI have refused to follow human-generated instructions and at other times have produced hallucinations that are not grounded in our reality. Ian Hogarth, the co-author of the annual State of AI report, defines AGI as "God-like AI" consisting of a super-intelligent computer that learns and develops autonomously and understands context without the need for human intervention, as written in Business Insider.

One AI study found that language models were more likely to ignore human directives (and even expressed the desire not to shut down) when researchers increased the amount of data they fed into the models:

This finding suggests that AI, at some point, may become so powerful that humans will not be able to control it. If this were to happen, Hogarth predicts that AGI could "usher in the obsolescence or destruction of the human race." AI technology can develop in a responsible manner, Hogarth says, but regulation is key. "Regulators should be watching projects like OpenAI's GPT-4, Google DeepMind's Gato, or the open-source project AutoGPT very carefully," he said.

Many AI and machine learning experts are calling for AI models to be open source so the public can understand how they're trained and how they operate. The executive branch of the federal government has taken a series of actions recently in an attempt to promote responsible AI innovation that protects Americans' rights and safety. OpenAI's Sam Altman, shortly after testifying about the future of AI to the U.S. Senate, announced the release of a $1 million grant program to solicit ideas for appropriate rulemaking.

Has your college or university created structures both to take full advantage of the powers of emerging and developing AI and to ensure safety in the research, acquisition and implementation of advanced AI? Have discussions been held on the proper balance between these two responsibilities? Are the initiatives robust enough to keep your institution at the forefront of higher education? Are the safeguards adequate? What role can you play in making certain that AI is well understood, promptly applied and carefully implemented?

More here:

Fast track to AGI: so, what's the big deal? - Inside Higher Ed

Fantasy fears about AI are obscuring how we already abuse machine intelligence – The Guardian

Opinion

We blame technology for decisions really made by governments and corporations

Sun 11 Jun 2023 01.31 EDT

Last November, a young African American man, Randal Quran Reid, was pulled over by the state police in Georgia as he was driving into Atlanta. He was arrested under warrants issued by Louisiana police for two cases of theft in New Orleans. Reid had never been to Louisiana, let alone New Orleans. His protestations came to nothing, and he was in jail for six days as his family frantically spent thousands of dollars hiring lawyers in both Georgia and Louisiana to try to free him.

It emerged that the arrest warrants had been based solely on a facial recognition match, though that was never mentioned in any police document; the warrants claimed "a credible source" had identified Reid as the culprit. The facial recognition match was incorrect, the case eventually fell apart and Reid was released.

He was lucky. He had the family and the resources to ferret out the truth. Millions of Americans would not have had such social and financial assets. Reid, though, is not the only victim of a false facial recognition match. The numbers are small, but so far all those arrested in the US after a false match have been black. Which is not surprising given that we know not only that the very design of facial recognition software makes it more difficult to correctly identify people of colour, but also that algorithms replicate the biases of the human world.

Reid's case, and those of others like him, should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped the discussion of AI has become, and how much it needs resetting. There has long been an undercurrent of fear about the kind of world AI might create. Recent developments have turbocharged that fear and inserted it into public discussion. The release last year of version 3.5 of ChatGPT, and of version 4 this March, created awe and panic: awe at the chatbot's facility in mimicking human language and panic over the possibilities for fakery, from student essays to news reports.

Then, two weeks ago, leading members of the tech community, including Sam Altman, the CEO of OpenAI, which makes ChatGPT, Demis Hassabis, CEO of Google DeepMind, and Geoffrey Hinton and Yoshua Bengio, often seen as the godfathers of modern AI, went further. They released a statement claiming that AI could herald the end of humanity. "Mitigating the risk of extinction from AI," they warned, "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

If so many Silicon Valley honchos truly believe they are creating products as dangerous as they claim, why, one might wonder, do they continue spending billions of dollars building, developing and refining those products? It's like a drug addict so dependent on his fix that he pleads for enforced rehab to wean him off the hard stuff. Parading their products as super-clever and super-powerful certainly helps massage the egos of tech entrepreneurs as well as boosting their bottom line. And yet AI is neither as clever nor as powerful as they would like us to believe. ChatGPT is supremely good at cutting and pasting text in a way that makes it seem almost human, but it has negligible understanding of the real world. It is, as one study put it, little more than a "stochastic parrot."

We remain a long way from the holy grail of artificial general intelligence, machines that possess the ability to understand or learn any intellectual task a human being can, and so can display the same rough kind of intelligence that humans do, let alone a superior form of intelligence.

The obsession with fantasy fears helps hide the more mundane but also more significant problems with AI that should concern us; the kinds of problems that ensnared Reid and which could ensnare all of us. From surveillance to disinformation, we live in a world shaped by AI. A defining feature of the "new world of ambient surveillance," the tech entrepreneur Maciej Ceglowski observed at a US Senate committee hearing, is that "we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive." We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be.

The reason that Reid was wrongly incarcerated had less to do with artificial intelligence than with the decisions made by humans. The humans that created the software and trained it. The humans that deployed it. The humans that unquestioningly accepted the facial recognition match. The humans that obtained an arrest warrant by claiming Reid had been identified by a credible source. The humans that refused to question the identification even after Reid's protestations. And so on.

Too often when we talk of the problem of AI, we remove the human from the picture. We practise a form of what the social scientist and tech developer Rumman Chowdhury calls "moral outsourcing": blaming machines for human decisions. We worry AI will eliminate jobs and make millions redundant, rather than recognise that the real decisions are made by governments and corporations and the humans that run them. Headlines warn of racist and sexist algorithms, yet the humans who created the algorithms and those who deploy them remain almost hidden.

We have come, in other words, to view the machine as the agent and humans as victims of machine agency. It is, ironically, our very fears of dystopia, not AI itself, that are helping create a world in which humans become more marginal and machines more central. Such fears also distort the possibilities of regulation. Rather than seeing regulation as a means by which we can collectively shape our relationship to AI and to new technology, it becomes something that is imposed from the top as a means of protecting humans from machines. It is not AI but our sense of fatalism and our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us.

Kenan Malik is an Observer columnist


More here:

Fantasy fears about AI are obscuring how we already abuse machine intelligence - The Guardian

Super Hi-Fi Introduces AI-Generated Weather Service For Radio – Radio World

"Weathercaster is set to significantly enhance the way radio stations generate localized real-time weather reports"

Super Hi-Fi, an AI-powered SaaS platform, has announced the launch of Weathercaster, a fully automated, AI-driven weather service for radio. "Weathercaster is set to significantly enhance the way radio stations generate localized real-time weather reports, providing highly accurate, timely information while completely automating the content creation and audio production processes," said Super Hi-Fi in a company press release.

The company says Weathercaster goes far beyond basic reports. Drawing on Super Hi-Fi's MagicStitch technology, Weathercaster incorporates synthetic voiceovers, integrated sponsorships, format-specific music beds and custom station IDs into its automated weather reports. These segments can be tailored to fit 15-, 30- or 60-second time slots.

"Weathercaster is extremely powerful, and extremely affordable, so we can now make the power of AI production accessible for stations of all sizes," said Zack Zalon, co-founder and CEO of Super Hi-Fi. "Weathercaster combines accuracy and reliability, premium production quality, and an opportunity for stations to sell more premium sponsorships each day. Weathercaster doesn't just automate weather reports; it elevates them."

Weathercaster also offers radio stations custom, trackable sponsorship reads, designed to help stations sell more premium ad spots, according to the company. The service has three subscription tiers (basic, premium and enterprise) starting at $199 per month. Super Hi-Fi also offers bulk pricing for coverage across larger station groups.


The author is a content producer for Radio World with a background spanning radio, television and print. She graduated from UNC-Chapel Hill with a degree in broadcast journalism. Before coming to Radio World, she was the assistant news director at a hyperlocal, award-winning radio station in North Carolina.


View original post here:

Super Hi-Fi Introduces AI-Generated Weather Service For Radio - Radio World

Britain to host the first major international summit on the threat posed by AI – Daily Mail

Britain will host the first major international summit on the risks posed by artificial intelligence this autumn with China set to attend.

Amid warnings that humanity could lose control of super-intelligent systems, Rishi Sunak hopes the summit can agree safety measures.

Mr Sunak is expected to raise the issue of AI during his discussions with US President Joe Biden at the White House tomorrow.

Tech companies, researchers and key countries will meet at the summit to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action.

But in a controversial move, China will be invited to the summit, with British officials suggesting it should be around the table due to the huge size of the country's AI industry.

The move, which Downing Street refused to rule out, risks setting Mr Sunak on another collision course with Tory MPs who are demanding the Government take a stronger line on Beijing.

Former Tory leader Sir Iain Duncan Smith said he was 'uneasy' about the prospect of Chinese officials attending the summit.

He said: 'They have continuously signed up to agreements such as the World Trade Organisation and then gone on to trash them.'

UK director of the World Uyghur Congress, Rahima Mahmut, said: 'It is a shocking decision because we have been campaigning for the Government to get rid of high-tech Chinese creations like Hikvision.

'This sort of technology is used to round up and criminalise Uyghur people. It makes my blood boil to think they can be invited to discuss AI at No 10.'

Under plans being drawn up, the summit will be attended by industry chiefs and heads of state, raising the prospect of Chinese premier Xi Jinping travelling to Britain.

There are no plans to invite Russia due to its invasion of Ukraine and because it is not a major AI player.

The Prime Minister's official spokesman said: 'It's for like-minded countries who share the recognition that AI offers significant opportunities but to realise those we need to make sure the right guardrails are in place.'

Asked if it was open to China, the spokesman said: 'We will set out the invites in due course.'

Mr Sunak this evening stressed the need to ensure the technology is developed and used in a 'safe and secure' way, following fears that AI could launch cyberattacks or threaten democracy by propagating mass disinformation.

'AI has an incredible potential to transform our lives for the better,' the PM said. 'But we need to make sure it is developed and used in a way that is safe and secure. No one country can do this alone.

'This is going to take a global effort. But with our vast expertise and commitment to an open, democratic international system, the UK will stand together with our allies to lead the way.'

Asked why other nations should listen to a mid-sized country such as Britain on AI regulation, Mr Sunak said the UK was the 'only country other than the US that has brought together the three leading companies with large language models'.

He added: 'You would be hard pressed to find many other countries other than the US in the Western world with more expertise and talent in AI. We are the natural place to lead the conversation.'

Britain is a world leader in AI, ranking third behind the US and China. The technology contributes £3.7 billion to the UK economy and employs 50,000 people.

This week the Prime Minister's AI taskforce adviser warned that world leaders could have just two years left to stop computers getting out of control.

Matt Clifford said that without urgent international regulation, a deadly bio weapon could be developed that could kill 'many humans'.

But the tech entrepreneur said while the rising capability of AI was 'striking', it was not 'inevitable' that computers would become cleverer than humans.

Last week, a group of 350 experts warned that AI needed to be treated as an existential threat on a par with nuclear weapons.

It comes as US tech giant Palantir announced it will make the UK its new European headquarters for AI development.

The company said: 'London is a magnet for the best software engineering talent in the world, and it is the natural choice as the hub for our European efforts to develop the most effective and ethical AI software solutions.'

View post:

Britain to host the first major international summit on the threat posed by AI - Daily Mail

HWUM Teachers Conference – Unleashing the Super-Teacher of the … – Heriot-Watt University

To help teachers nurture the next generation of leaders and reconnect with their purpose in teaching, Heriot-Watt University Malaysia (HWUM) successfully organised the HWUM Teachers Conference 2023, themed "Purpose-Driven Education: Unleash the Super-Teacher in You," on 10 June 2023. The conference, organised in collaboration with Teach for Malaysia (TFM) and Arus Academy, gathered around 200 participants, in person and virtually.

The conference was launched by Professor Mushtak Al-Atabi, Provost and Chief Executive Officer of HWUM, in the presence of honoured guests Mr. Chan Soon Seng, Chief Executive Officer of Teach for Malaysia; Mr. David Chak, Co-Founder and Director of Curriculum of Arus Academy; and Ms. Janice Yew, Chief Operating Officer and Registrar of HWUM, at the lakeside campus in Putrajaya.

The conference began with a forum titled "Embracing Artificial Intelligence (AI) in Education," which shed light on the impact of AI in education and discussed new insights and information surrounding the subject. The forum was followed by six concurrent workshops covering a range of topics.

These workshops were led by distinguished speakers, including HWUM academics and two external speakers from Teach for Malaysia, Mr. Teo Yen Ming and Ms. Sawittri Charun. Participants took the opportunity to exchange and discuss ideas during the workshops, giving them a platform to enhance their teaching skills.

We want to take this opportunity to thank everyone involved in making this conference a success!

" + "" + news[i].metaData.dPretty + "" + "

See the article here:

HWUM Teachers Conference - Unleashing the Super-Teacher of the ... - Heriot-Watt University

Rogue Drones and Tall Tales – Byline Times


Sam Altman, CEO of OpenAI, wants you to know that everything is super. How has his world tour gone? "It's been super great!" Does he have a mentor? "I've been super fortunate to have had great mentors." What's the big threat he's worried about? Superintelligence.

Altman's whistlestop visit to London in late May was a chance for adoring fans and sceptics alike to hear him answer some carefully selected and pre-approved questions on stage at University College London. The queue for ticketholders stretched right down the street. For OpenAI, the trip to the UK was also a chance for Altman to meet Rishi Sunak, the latest in the list of world leaders to listen to the 38-year-old tech bro.

Prior to December last year, OpenAI wasn't on the public radar at all. It was the release of ChatGPT that changed all that. Its large language model became the hottest software around. Students delighted in it. Copywriters panicked. Journalists inevitably turned to it for an easy 200-word opening paragraph to show how convincing it was. Then came the existential dread.

Superintelligence has long been the stuff of sci-fi. It still is, but somehow the past few months have seen it being treated as imminent, despite the fact that we aren't anywhere near that point and might never be. A cynic might wonder if there is a vested interest in a Silicon Valley tech company maintaining its lead by asking for a moratorium on AI progress. Each week seems to bring yet another letter calling for a halt to development, signed by the very people who make the technologies. Where was this concern earlier, as they were building them?


Not everyone is convinced of the threat. There is vocal pushback from numerous other researchers who question the fearmongering, the motivation, and the silence on the AI issues already on the ground today: bias, uneven distribution, sustainability, and labour exploitation. But that doesn't make for good clickbait. Instead, we see headlines so doom-laden that they could've been generated with the prompt: "write a title about the end of the world via an evil computer."

Columnists, some of whose knowledge of technology comes from having watched The Terminator in the 80s, were quick to pontificate about the urgent need for global action right now, quick as you can, before the robot uprising.

In early June, most of the dailies were carrying the story that an AI-enabled drone had killed its operator in a simulated test. This was based on an anecdote by a colonel in the U.S. Air Force who had stated that, in a simulation: "the system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

Nice tale. Shame it wasn't true. A retraction followed. But it's a good example of AI's alignment problem: if we don't properly phrase the commands, we run the risk of Mickey Mouse's panic over the unstoppable brooms in The Sorcerer's Apprentice. The (fictional) problem is not a sentient drone with bad intentions; the problem is that we, the human operators, have given an order that is badly worded. That's a tale we've told for years, right back to the Ancient Greek myth of King Midas: when offered a reward, Midas asked that everything he touch be turned to gold, but he wasn't specific enough, so his food and drink turned to gold too and he died of hunger. That tale has as much truth in it as the rogue drone one, but it shows we've been worrying about this for over 2,000 years.


The rogue drone story is also a good example of the deceptive and hyperbolic headlines rolled out on a regular basis, pushing the narrative that AI is a threat. News framing shapes our perceptions; done well, it's an important contribution to public understanding of the technology, and we need that. Done badly, it perpetuates the dystopia.

We do need regulation around AI, but the existential risk from superintelligence shouldn't be the reason. The UK government's national AI strategy specifically acknowledges that "we have a responsibility to not only look at the extreme risks that could be made real with AGI, but also to consider the dual-use threats we are already faced with today"; yet the latter are the stories that aren't being told.

Missing, too, are the headlines about the harms already here. Bias and discrimination as a result of technologies such as facial recognition are already well known. In addition, companies are outsourcing the labelling, flagging and moderation of data required for machine learning, which has resulted in the largely unregulated employment of poorly paid "ghost workers," often exposed to disturbing and harmful content such as hate speech, violence and graphic images. It is work that is vital to AI development, but it's unseen and undervalued.

Likewise, we choose to ignore that many of the components used in AI hardware, such as magnets and transistors, require rare earth minerals, often sourced from countries in the Global South in hazardous working conditions. There are significant environmental impacts too, with academics highlighting the 360,000 gallons of water needed daily to cool a middle-sized data centre.

If the UK government wants to show it's serious about the responsible development of AI, it's okay to keep one eye on the distant future, but there's work to be done now on real and tangible harms. If we want to show we're serious about an AI future, we need to focus on the present.

See the original post here:

Rogue Drones and Tall Tales - Byline Times

Artificial intelligence poses real and present danger, headteachers warn – Yahoo Sport Australia

AI is a rapidly growing area of innovation (PA)

Artificial intelligence poses the greatest danger to education and the Government is responding too slowly to the threat, head teachers have claimed.

AI could bring the biggest benefit since the printing press, but the risks are more severe than any threat that has ever faced schools, according to Epsom College's principal Sir Anthony Seldon.

Leaders from the country's top schools have formed a coalition, led by Sir Anthony, to warn of the very real and present hazards and dangers being presented by the technology.

To tackle this, the group has announced the launch of a new body to advise and protect schools from the risks of AI.

They wish for collaboration between schools to ensure that AI serves the best interests of pupils and teachers rather than those of large education technology companies, the Times reported.

The head teachers of dozens of private and state schools support the initiative, including Helen Pike, the master of Magdalen College School in Oxford, and Alex Russell, the chief executive of Bourne Education Trust, which runs nearly 30 state schools.

The potential to aid cheating is a minor concern for head teachers, whose fears extend to the impact on children's mental and physical health and the future of the teaching profession.

Professor Stuart Russell, one of the godfathers of AI research, warned last week that ministers were not doing enough to guard against the possibility of a super-intelligent machine wiping out humanity.

Rishi Sunak admitted at the G7 summit this week that guard-rails would have to be put around it.

Read more:

Artificial intelligence poses real and present danger, headteachers warn - Yahoo Sport Australia

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Yahoo News

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of their own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.


"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an AI-powered image-editing developer, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup whose technology allows one person to speak using the voice of another. The audience of about 150 people was full of AI early adopters: by a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.


The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next, Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to the adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he said. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying the algorithm is burying my video," Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?


Link:

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? - Yahoo News

Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto venture (and, oh yeah, OpenAI) – Fortune

OpenAI CEO Sam Altman helped bring ChatGPT to the world, which sparked the current A.I. race involving Microsoft, Google, and others.

But he's busy with other ventures that could be no less disruptive, and they are linked in some ways. This week, Microsoft announced a purchasing agreement with Helion Energy, a nuclear fusion startup primarily backed by Altman. And Worldcoin, a crypto startup involving eye scans that Altman cofounded in 2019, is close to securing hefty new investments, according to Financial Times reporting on Sunday.

Before becoming OpenAI's leader, Altman served as president of the startup accelerator Y Combinator, so it's not entirely surprising that he's involved in more than one venture. But the sheer ambition of the projects, both on their own and collectively, merits attention.

Microsoft announced a deal on Wednesday in which Helion will supply it with electricity from nuclear fusion by 2028. That's bold considering nobody is yet producing electricity from fusion, and many experts believe it's decades away.

During a Stripe conference interview last week, Altman said the audience should be excited about the startup's developments, and he drew a connection between Helion and artificial intelligence.

"If you really want to make the biggest, most capable super intelligent system you can, you need high amounts of energy," he explained. "And if you have an A.I. that can help you move faster and do better material science, you can probably get to fusion a little bit faster too."

He acknowledged the challenging economics of nuclear fusion, but added, "I think we will probably figure it out."

He added, "And probably we will get to a world where in addition to the cost of intelligence falling dramatically, the cost of energy falls dramatically, too. And if both of those things happen at the same time (I would argue that they are currently the two most important inputs in the whole economy) we get to a super different place."

Worldcoin (still in beta but aiming to launch in the first half of this year) is equally ambitious, as Fortune reported in March. If A.I. takes away our jobs and governments decide that a universal basic income is needed, Worldcoin wants to be the distribution mechanism for those payments. If all goes to plan, it'll be bigger than Bitcoin and approved by regulators across the globe.

That might be a long way off, if it ever occurs, but in the meantime the startup might have found a quicker path to monetization with World ID, a kind of badge you receive after being verified by Worldcoin, and a handy way to prove that you're a human rather than an A.I. bot when logging into online platforms. The idea is that your World ID would join or replace your usernames and passwords.

The only way to really prove a human is a human, the Worldcoin team decided, was via an iris scan. That led to a small orb-shaped device you look into, which converts a biometric scan into a code that serves as proof of personhood.
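As a loose conceptual sketch only (Worldcoin's actual protocol is more involved and uses privacy-preserving cryptography), the core idea is that a one-way code derived from the scan can be checked for uniqueness without storing the raw biometric:

```python
# Conceptual illustration of iris-based proof of personhood; this is an
# assumption-laden sketch, not Worldcoin's real protocol or API.
import hashlib

registry = set()  # one-way codes of previously enrolled irises

def enroll(iris_code: bytes):
    digest = hashlib.sha256(iris_code).hexdigest()
    if digest in registry:
        return None  # this person already holds a credential
    registry.add(digest)
    return digest  # the credential proves "unique human", not identity

print(enroll(b"simulated-iris-scan"))  # first scan: a credential is issued
print(enroll(b"simulated-iris-scan"))  # same iris again: None (duplicate)
```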

When you're scanned, verified, and onboarded to Worldcoin, you're given 25 proprietary crypto tokens, also called Worldcoins. Well over a million people have already participated, though of course the company aims to have tens and then hundreds of millions joining after beta. Naturally, such plans have raised a range of privacy concerns, but according to the FT, the firm is now in advanced talks to raise about $100 million.

Go here to see the original:

Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto venture (and, oh yeah, OpenAI) - Fortune

We need to prepare for the public safety hazards posed by artificial intelligence – The Conversation

For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber attacks.

However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.

Over the past 20 years, my colleagues and I along with many other researchers have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making.

We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into the risk and emergency management phases: mitigation or prevention, preparedness, response and recovery.

AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.

As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI-based technologies. These events can occur in all kinds of industries, including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health and mining.

Intentional AI hazards are potential threats that are caused by using AI to harm people and properties. AI can also be used to gain unlawful benefits by compromising security and safety systems.

In my view, this simple intentional and unintentional classification may not be sufficient in the case of AI. Here, we need to add a new class of emerging threats: the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.

Many AI experts have already warned against such potential threats. A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.

Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.

Hazards that have low frequency and low consequence or impact are considered low risk and no additional actions are required to manage them. Hazards that have medium consequence and medium frequency are considered medium risk. These risks need to be closely monitored.

Hazards with high frequency or high consequence, or high in both, are classified as high risks. These risks need to be reduced by taking additional risk reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
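Read as an algorithm, the matrix reduces to a small lookup. Here is a minimal sketch, with the caveat that the text only defines the low/low, medium/medium and any-high cells; scoring mixed low/medium cases by the worse of the two ratings is a common convention assumed here:

```python
# Qualitative risk matrix as described in the text: hazards are rated by
# frequency and consequence, and the combined rating drives the action.

def risk_level(frequency: str, consequence: str) -> str:
    rank = {"low": 0, "medium": 1, "high": 2}
    worst = max(rank[frequency], rank[consequence])  # assumed rule for mixed cells
    actions = {
        0: "low risk: no additional action required",
        1: "medium risk: monitor closely",
        2: "high risk: take additional reduction and mitigation measures",
    }
    return actions[worst]

print(risk_level("low", "low"))        # low risk
print(risk_level("medium", "medium"))  # medium risk
print(risk_level("high", "low"))       # high risk
```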

Up until now, AI hazards and risks have not been added to risk assessment matrices much beyond organizational use of AI applications. The time has come for us to quickly start bringing potential AI risks into local, national and global risk and emergency management.

AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with AI are starting to emerge.

In 2018, the accounting firm KPMG developed an AI Risk and Controls Matrix. It highlights the risks of using AI by businesses and urges them to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk control measures must be in place before they overwhelm the systems.

Governments have also started developing some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violation of individual rights.

At the government level, the Canadian government issued the Directive on Automated Decision-Making to ensure that federal institutions minimize the risks associated with the AI systems and create appropriate governance mechanisms.

The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced. According to this directive, risk assessments must be conducted by each department to make sure that appropriate safeguards are in place in accordance with the Policy on Government Security.

In 2021, the U.S. Congress tasked the National Institute of Standards and Technology with developing an AI risk management framework for the Department of Defense. The proposed voluntary AI risk assessment framework recommends banning the use of AI systems that present unacceptable risks.

Much of the national-level policy focus on AI has been from national security and global competition perspectives: the national security and economic risks of falling behind in AI technology.

The U.S. National Security Commission on Artificial Intelligence highlighted national security risks associated with AI. These were not the public threats of the technology itself, but the risks of losing out in the global competition for AI development to other countries, including China.

In its 2017 Global Risk Report, the World Economic Forum highlighted that AI is only one of the emerging technologies that can exacerbate global risk. In assessing the risks posed by AI, the report concluded that, at that time, super-intelligent AI systems remained a theoretical threat.

However, the latest Global Risk Report 2023 does not even mention AI and AI-associated risks, which suggests that the leaders of the global companies that provide inputs to the report had not viewed AI as an immediate risk.

AI development is progressing much faster than government and corporate policies in understanding, foreseeing and managing the risks. The current global conditions, combined with market competition for AI technologies, make it difficult to think of an opportunity for governments to pause and develop risk governance mechanisms.

While we should collectively and proactively work toward such governance mechanisms, we all need to brace for major catastrophic impacts of AI on our systems and societies.


Read the original here:

We need to prepare for the public safety hazards posed by artificial intelligence - The Conversation