
UK schools bewildered by AI and do not trust tech firms, headteachers say – The Guardian


School leaders announce launch of body to protect students from the risks of artificial intelligence

Sat 20 May 2023 05.50 EDT

Schools are bewildered by the fast pace of development in artificial intelligence and do not trust tech firms to protect the interests of students and educational establishments, headteachers have written.

A group of UK school leaders has announced the launch of a body to advise and protect schools from the risks of AI, with their fears not limited to the capacity of chatbots such as ChatGPT to aid cheating. There are also concerns about the impact on children's mental and physical health, as well as on the teaching profession itself, according to the Times.

The headteachers' fears were outlined in a letter to the Times in which they warned of "the very real and present hazards and dangers" being presented by AI, which has gripped the public imagination in recent months through breakthroughs in generative AI, where tools can produce plausible text, images and even voice impersonations on command.

The group of school leaders is led by Sir Anthony Seldon, the head of Epsom College, a fee-paying school, while the AI body is supported by the heads of dozens of private and state schools.

The letter to the Times says: "Schools are bewildered by the very fast rate of change in AI and seek secure guidance on the best way forward, but whose advice can we trust? We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools and in the past the government has not shown itself capable or willing to do so."

Signatories to the letter include Seldon, Chris Goodall, the deputy head of Epsom & Ewell High School, and Geoff Barton, general secretary of the Association of School and College Leaders.

It adds that the group is pleased the government is now grasping the nettle on the issue. This week Rishi Sunak said guardrails would have to be put around AI as Downing Street indicated support for a global framework for regulating the technology. However, the letter adds that educational leaders are forming their own advisory body because AI is moving too quickly for politicians to cope.

"AI is moving far too quickly for the government or parliament alone to provide the real-time advice schools need. We are thus announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts."

Supporters include James Dahl, the head of Wellington College in Berkshire, and Alex Russell, chief executive of the Bourne Education Trust, which runs about two dozen state schools.

The Times reported that the group would create a website led by the heads of science or digital at 15 state and private schools, offering guidance on developments in AI and what technology to avoid or embrace.

Seldon told the Times: "Learning is at its best, human beings are at their best, when they are challenged and overcome those challenges. AI will make life easy and strip away learning and teaching unless we get ahead of it."

The Department for Education said: "The education secretary has been clear about the government's appetite to pursue the opportunities and manage the risks that exist in this space, and we have already published information to help schools do this. We continue to work with experts, including in education, to share and identify best practice."


The Potential of AI in Tax Practice Relies on Understanding its … – Thomson Reuters Tax & Accounting

Curiosity, conversation, and investment into artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren't yet capable of, as well as an assessment of security and performance risks, according to industry experts.

With the tax world exploring how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one's individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with important fundamentals and key technological differences under the broad-stroke term of AI.

An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).

LLMs, according to PricewaterhouseCoopers principal Chris Kontaridis, are text-based and use statistical methodologies "to create a relationship between your question and patterns of data and text." In other words, the more data an LLM like ChatGPT (which is currently learning from users across the entire internet) absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT "is not a knowledge model," Kontaridis said. Calling ChatGPT a knowledge model "would insinuate that it is going to give you the correct answer every time you put in a question." Because it is not artificial general intelligence, something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not self-reasoning, he said.

"We're not even close to having real AGI out there," Kontaridis added.

Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal said at the ABA conference that "the really important thing when you're using a tool like [ChatGPT] is recognizing its limitations." He explained that it is not providing source material for legal or tax advice. "What it's doing, and this is very important, is simply making a probabilistic determination about the next likely word." For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it might give you a slightly different answer with different words because it's responding to a different prompt.
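
To make Aidid's point concrete, here is a minimal sketch of next-word prediction, assuming a toy three-word vocabulary with made-up scores; a real model such as ChatGPT does the same kind of calculation over tens of thousands of tokens, with scores produced by a trained neural network.

import math

# Toy illustration of "a probabilistic determination about the next likely word".
# The candidate words and their raw scores ("logits") are invented for this sketch.
candidate_scores = {"binder": 2.1, "notebook": 1.4, "telescope": -0.5}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidate_scores)
next_word = max(probs, key=probs.get)  # or sample from the distribution
print(next_word, probs)
# The model chooses a plausible continuation rather than looking up a fact,
# which is why a confidently worded answer can still be wrong.

The point of the sketch is only the mechanism: the output is a probability distribution over words, not a verified statement of fact.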

At a separate panel, Ken Crutchfield, vice president and general manager of Legal Markets, said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact that his father, Bryant Crutchfield, is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: "I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said yes, Bryant Crutchfield did invent the Trapper Keeper." Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father's name but listed his own alma mater. "So it's getting better and kind of learns through these back-and-forths with people that are interacting."

Aidid explained that these instances are referred to as "hallucinations": when an AI does not know the answer, it essentially makes something up on the spot based on the data and patterns it has up to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because it currently is limited to knowledge as recent as September 2021. Generative AI like ChatGPT is still more sophisticated than more base-level tools that work off of decision trees, such as when a taxpayer interacts with the IRS Tax Assistant Tool, Aidid said. The Tax Assistant Tool, he noted, is not generative AI.

Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that this is especially problematic because the Tax Assistant Tool "is implying that it has all that information and it's generating responses based on the world of information, but it's really not doing that, so it's misleading."

The greatest potential for the application of generative AI lies with so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning can work with unstructured data. Such technology can not only synthesize and review information, but review new information for us. "It's starting to take all that and generate things, not simple predictions, but actually generate things that are in the style and mode of human communication, and that's where we're seeing significant investment today."

Herzfeld said that machine learning is already being used in tax on a daily basis, but with deep learning it is a little harder to see where that is in tax law. These more advanced tools will likely be developed in-house at firms, likely in partnership with AI researchers.

PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. Freeing up staff to focus their efforts on other things while AI sifts through mountains of data is a boon, he said.

However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that "it's really important to make sure before you deploy something like this to your staff or use it yourself that you're doing it in a safe environment where you are protecting the confidentiality of your personal IP and privilege that you have with your clients."

Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance on, or lack of oversight of, AI, which she called a very broad societal risk. Kontaridis assured the audience that he is not worried about generative AI replacing the role of the tax professional; rather, "this is a tool that will help us do our work better."

Referring to the myth that CPA bots will take over the industry, he said: "What I'm worried about is the impact it has on our profession at the university level of it, discouraging bright young minds from pursuing careers in tax and accounting consulting."



Zoom Invests in and Partners With Anthropic to Improve Its AI … – PYMNTS.com

Zoom has become the latest tech company riding this year's wave of artificial intelligence (AI) integrations.

The video conferencing platform announced in a Tuesday (May 16) press release that it has teamed with and is investing in AI firm Anthropic.

The collaboration will integrate Anthropic's AI assistant, Claude, with Zoom's platform, beginning with Zoom Contact Center.

"With Claude guiding agents toward trustworthy resolutions and powering self-service for end-users, companies will be able to take customer relationships to another level," said Smita Hashim, chief product officer for Zoom, in the release.

Working with Anthropic, Hashim said, furthers the company's goal of a federated approach to AI while also advancing leading-edge companies like Anthropic and helping to drive innovation in the Zoom ecosystem and beyond.

"As the next step in evolving the Zoom Contact Center portfolio (Zoom Virtual Agent, Zoom Contact Center, Zoom Workforce Management), Zoom plans to incorporate Anthropic AI throughout its suite, improving end-user outcomes and enabling superior agent experiences," the news release said.

Zoom said in the release it eventually plans to incorporate Anthropic AI throughout its suite, including products like Team Chat, Meetings, Phone, Whiteboard and Zoom IQ.

Last year, Zoom debuted Zoom Virtual Agent, an intelligent conversational AI and chatbot tool that employs natural language processing and machine learning to understand and solve customer issues.

The company did not reveal the amount of its investment in Anthropic, which is backed by Google to the tune of $300 million.

Zoom's announcement came amid a flurry of AI-related news Tuesday, with fraud prevention firm ComplyAdvantage launching an AI tool and the New York Times digging into Microsoft's claims that it had made a breakthrough in the realm of artificial general intelligence.

Perhaps the biggest news is OpenAI CEO Sam Altman's testimony before a U.S. Senate subcommittee, in which he warned: "I think if this technology goes wrong, it can go quite wrong."

Altman's testimony happened as regulators and governments around the world step up their examination of AI in a race to mitigate fears about its transformative powers, which have spread in step with the future-fit technology's ongoing integration into the broader business landscape.


Navigating artificial intelligence: Red flags to watch out for – ComputerWeekly.com

Lou Steinberg, founder and managing partner of CTM Insights, a cyber security research lab and incubator, doesn't watch movies about artificial intelligence (AI) because he believes what he sees in real life is enough.

Steinberg has also worn other hats, including a six-year tenure as chief technology officer of TD Ameritrade, where he was responsible for technology innovation, platform architecture, engineering, operations, risk management and cyber security.

He has worked with US government officials on cyber issues as well. Recently, after a White House meeting with tech leaders about AI, Steinberg spoke about the benefits and downsides of having AI provide advice and complete tasks.

Businesses with agendas, for example, might try to skew training data to get people to buy their cars, stay in their hotels, or eat at their restaurants. Hackers may also change training data to advise people to buy stocks that are being sold at inflated prices. They may even teach AI to write software with built-in security issues, he contended.

In an interview with Computer Weekly, Steinberg drilled down into these red flags and what organisations can do to mitigate the risks of the growing use of AI.

What would you say are the top three things we should really be worried about right now when it comes to AI?

Steinberg: My short- to medium-term concerns with AI are in three main areas. First, AI- and machine learning-powered chatbots and decision support tools will return inaccurate results that are misconstrued as accurate, as they use untrustworthy training data and lack traceability.

Second, the lack of traceability means we dont know why AI gives the answers it gives though Google is taking an interesting approach by providing links to supporting documentation that a user can assess for credibility.

Third, attempts to slow the progress of AI, while well meaning, will slow the pace of innovation in Western nations while countries like China will continue to advance. While there have been examples of internationally respected bans on research, such as human cloning, AI advancement is not likely to be slowed globally.

How soon can bad actors jail-break AI? And what would that mean for society? Can AI developers pre-empt such dangers?

People have already gotten past guardrails built into tools like ChatGPT through prompt engineering. For example, a chatbot might refuse to generate code that is obviously malware but will happily create one function at a time that can be combined to create malware. Jail-breaking of AI is already happening today, and will continue as both the guardrails and attacks gain in sophistication.

The ability to attack poorly protected training data and bias the outcome is an even larger concern. Combined with the lack of traceability, we have a system without feedback loops to self-correct.


When will we get past the black box problem of AI?

Great question. As I said, Google appears to be trying to reinforce answers with pointers to supporting data. That helps, though I would rather see a chain of steps that led to a decision. Transparency and traceability are key.

Who can exploit AI the most? Governments? Big tech? Hackers?

All of the above can and will exploit AI to analyse data, support decision-making and synthesise new outputs. Exploiting AI comes down to whether the use cases will be good or bad for society.

If the exploitation is by a tech company, it will be to gain commercial advantage, ranging from selling you products to detecting fraud to personalising medicine and medical diagnoses. Businesses will also tap cost savings by replacing humans with AI, whether to write movie scripts, drive a delivery truck, develop software, or board an airplane by using facial recognition as a boarding pass.

Many hackers are also profit-seeking, and will try to steal money by guessing bank account passwords or replicating a person's voice and likeness to scam others. Just look at recent examples of realistic, synthesised voices being used to trick people into believing a loved one has been kidnapped.

While autonomous killer robots from science fiction are certainly a concern with some nation states and terrorist groups, governments and some companies sit on huge amounts of data that would benefit from improved pattern detection. Expect governments to analyse and interpret data to better manage everything from public health to air traffic congestion. AI will also allow personalised decision-making at scale, where agencies like the US Internal Revenue Service will look for fraud while authoritarian governments will increase their ability to do surveillance.

What advice would you give to AI developers? As an incubator, does CTM Insights have any special lens here?

There are so many dimensions of protection needed. Training data must be curated and protected from malicious tampering. The ability to synthetically recreate a real person's voice and likeness will cause fraud and reputational damage to skyrocket. We need to solve this problem before we can no longer trust what we see or hear, like fake phone calls, fake videos of people appearing to commit crimes and fake investor conferences.

Similarly, the ability to realistically edit images and evade detection will create cases where even real images, like your medical scans, are untrustworthy. CTM has technology to isolate untrustworthy portions of data and images, without requiring everything to be thrown out. We are working on a new way to detect synthetic deepfakes.

Is synthetic data a good thing or a bad thing if we want to create safer AI?

Synthetic data is mostly a good thing, and we can use it to help create curated training data. The challenge is that attackers can do the same thing.

Will singularity and artificial general intelligence (AGI) be a utopia or a dystopia?

I'm an optimist. While most major technology advances can be used to do harm, AI has the ability to eliminate a huge amount of work done by people but still create the value of that work. If the benefits are shared across society, and not concentrated, society will gain broadly.

For example, one of the most common jobs in the US is driving a delivery truck. If autonomous vehicles replace those jobs, society still gets the benefit of having things delivered. If all that does is raise profit margins at delivery companies, then that will be deeply impactful to laid-off drivers. But if some of the benefit is used to help those ex-drivers do something else like construction, then society benefits by getting new buildings.

Data poisoning, adversarial AI, co-evolution of good guys and bad guys: how serious have these issues become?

Co-evolution of AI and adversarial AI has already started. There is debate as to the level of data poisoning out there today, as many attacks aren't made public. I'd say they are all in their infancy. I'm worried about what happens when they grow up.

If you were to create an algorithm that's watertight on security, what broad areas would you be careful about?

The system would have traceability built in from the start. The inputs would be carefully curated and protected. The outputs would be signed and have authorised use built in. Today, we focus way too much on identity and authentication of people and not enough on whether those people authorised things.
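
To picture what "signed outputs" could look like in practice, here is a small hypothetical sketch: the field names, the HMAC scheme and the hard-coded key are assumptions made for illustration, not a description of any system Steinberg mentions.

import hashlib
import hmac
import json

# Hypothetical example of attaching provenance and a signature to a model output
# so downstream consumers can verify it has not been tampered with.
SECRET_KEY = b"replace-with-a-managed-signing-key"  # assumption: key would come from a secrets manager

def sign_output(output_text, model_id, input_digest):
    record = {
        "model_id": model_id,          # which model produced the output (traceability)
        "input_digest": input_digest,  # hash of the curated input that was used
        "output": output_text,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_output(record):
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("signature", ""), expected)

signed = sign_output("example answer", "model-v1", hashlib.sha256(b"curated input").hexdigest())
print(verify_output(signed))  # True unless the record was altered after signing

Checking authorised use would then be a separate policy layer applied on top of the verified record.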

Have you seen any evidence of AI-driven or assisted attacks?

Yes, deepfake videos exist of Elon Musk and others for financial scams, as well as Ukraine's President Zelensky telling his troops to surrender in disinformation campaigns. Synthesised voices of real people have been used in fake kidnapping scams, and fake CEO voices on phone calls have asked employees to transfer money to a fraudster's account. AI is also being used by attackers to exploit vulnerabilities to breach networks and systems.

What's your favourite Black Mirror episode or movie about AI that feels like a premonition?

I try to not watch stuff that might scare me real life is enough!


People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

An artificial intelligence investor has warned that humanity may need to hit the brakes on AI development, claiming it's becoming 'God-like' and that it could cause 'catastrophe' for us in the not-so-distant future.

Ian Hogarth - who has invested in over 50 AI companies - made an ominous statement on how the constant pursuit of increasingly-smart machines could spell disaster in an essay for the Financial Times.

The AI investor and author claims that researchers are foggy on what's to come and have no real plan for a technology with that level of knowledge.

"They are running towards a finish line without an understanding of what lies on the other side," he warned.

Hogarth shared what he'd recently been told by a machine-learning researcher: that 'from now onwards' we are on the verge of artificial general intelligence (AGI) coming to the fore.

AGI has been defined as an autonomous system that can learn to accomplish any intellectual task that human beings can perform, and surpass human capabilities.

Hogarth, co-founder of Plural Platform, said that not everyone agrees AGI is imminent; rather, 'estimates range from a decade to half a century or more' for it to arrive.

However, he noted the tension between companies that are frantically trying to advance AI's capabilities and machine learning experts who fear the end point.

The AI investor also explained that he feared for his four-year-old son and what these massive advances in AI technology might mean for him.

He said: "I gradually shifted from shock to anger.

"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."

When considering whether the people in the AGI race were planning to 'slow down' to 'let the rest of the world have a say', Hogarth admitted that it's morphed into a 'them' versus 'us' situation.

Having been a prolific investor in AI startups, he also confessed to feeling 'part of this community'.

Hogarth's descriptions of the potential power of AGI were terrifying as he declared: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI."

Hogarth described it as 'a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it'.

But even with this knowledge and, despite the fact that it's still on the horizon, he warned that we have no idea of the challenges we'll face and the 'nature of the technology means it is exceptionally difficult to predict exactly when we will get there'.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," the investor said.

Despite a career spent investing in and supporting the advancement of AI, Hogarth explained that what made him pause for thought was the fact that 'the contest between a few companies to create God-like AI has rapidly accelerated'.

He continued: "They do not yet know how to pursue their aim safely and have no oversight."

Hogarth still plans to invest in startups that pursue AI responsibly, but explained that the race shows no signs of slowing down.

"Unfortunately, I think the race will continue," he said.

"It will likely take a major misuse event - a catastrophe - to wake up the public and governments."


‘Godfather’ of AI is now having second thoughts – The B.C. Catholic

Until a few weeks ago British-born Canadian university professor Geoffrey Hinton was little known outside academic circles. His profile became somewhat more prominent in 2019 when he was a co-winner of the A. M. Turing Award, more commonly known as the Nobel Prize for computing.

However, it is events of the past month or so that have made Hinton a bit of a household name, after he stepped down from an influential role at Google.

Hinton's life's work, particularly that in computing at the University of Toronto, has been deemed groundbreaking and revolutionary in the field of artificial intelligence (AI). Anyone reading this column will surely have encountered numerous pieces on AI in recent months, be it on TV, through radio, or in print, physical and digital. AI applications such as the large language model ChatGPT have completely altered the digital landscape in ways unimaginable even a year ago.

While at the U of T, Hinton and graduate students made major advances in deep neural networks, speech recognition, the classification of objects, and deep learning. Some of this work morphed into a technology startup which captured the attention of Google, leading to the acquisition of the business for around $44 million a decade ago.

Eventually, Hinton became a Google vice-president, in charge of running the California company's Toronto AI lab. Leaving that position recently, at the age of 75, led to speculation, particularly in a New York Times interview, that he did so in order to criticize or attack his former employer.

Not so, said Hinton in a tweet. Besides his age being a factor, he suggested he wanted to be free to speak about the dangers of AI, irrespective of Google's involvement in the burgeoning field. Indeed, Hinton noted in his tweet that in his view Google had acted very responsibly.

Underscoring his view of Google's public AI work may be the company's slow response to the adoption of Microsoft-backed ChatGPT in its various incarnations. Google's initial public AI product, Bard, appeared months after ChatGPT began its meteoric adoption in early December. It did not gain much traction at the outset.

In recent weeks weve seen news stories of large employers such as IBM serving notice that about 7,000 positions would be replaced by AI bots such as specialized versions of ChatGPT. Weve also seen stories about individuals turning over significant aspects of their day-to-day life to such bots. One person gained particular attention for giving all his financial, email, and other records to a specialized AI bot with a view to having it find $10,000 in savings and refunds through automated actions.

Perhaps it is these sorts of things that are giving Hinton pause as he looks back at his life's work. In the NYT interview, he uses expressions such as "It is hard to see how you can prevent the bad actors from using it for bad things" and "Most people will not be able to know what is true anymore" -- the latter in reaction to AI-created photos, videos, and audio depicting objects or events that didn't occur.

"Right now, they are not more intelligent than us, as far as I can tell. But they soon may be," said Hinton, speaking to the BBC about AI machines. He went on to add: "I've come to the conclusion that the kind of intelligence we are developing (via AI) is very different from the intelligence we have."

Hinton went on to note how biological systems (i.e. people) are different from digital systems. The latter, he notes, have many copies of the same set of weights and the same model of the world, and while these copies can learn separately, they can share new knowledge instantly.

In a somewhat enigmatic tweet on March 14, Hinton wrote: "Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity's butterfly."

Hinton spent the first week of May correcting various lines from interviews he gave to prominent news outlets. He took particular issue with a CBC online headline: "Canada's AI pioneer Geoffrey Hinton says AI could wipe out humans. In the meantime, there's money to be made." In a tweet he said: "The second sentence was said by a journalist, not me, but you wouldn't know that."

Whether the race to a God-like form of artificial intelligence fully materializes, or not, AI is already being placed alongside climate change and nuclear war as a trio of existential threats to human life. Climate change is being broadly tackled by most nations, and nuclear weapons use has been effectively stifled by the notion of mutually-assured destruction. Perhaps artificial general intelligence needs a similar global focus for regulation and management.

Follow me on Facebook (facebook.com/PeterVogelCA), or on Twitter (@PeterVogel)



Artificial intelligence poses real and present danger, headteachers warn – Yahoo Sport Australia

AI is a rapidly growing area of innovation (PA)

Artificial intelligence poses the greatest danger to education and the Government is responding too slowly to the threat, head teachers have claimed.

AI could bring the biggest benefit since the printing press but the risks are more severe than any threat that has ever faced schools, according to Epsom College's principal Sir Anthony Seldon.

Leaders from the country's top schools have formed a coalition, led by Sir Anthony, to warn of the very real and present hazards and dangers being presented by the technology.

To tackle this, the group has announced the launch of a new body to advise and protect schools from the risks of AI.

They wish for collaboration between schools to ensure that AI serves the best interest of the pupils and teachers rather than those of large education technology companies, the Times reported.

The head teachers of dozens of private and state schools support the initiative, including Helen Pike, the master of Magdalen College School in Oxford, and Alex Russell, the chief executive of Bourne Education Trust, which runs nearly 30 state schools.

The potential to aid cheating is a minor concern for head teachers whose fears extend to the impact on children's mental and physical health and the future of the teaching profession.

Professor Stuart Russell, one of the godfathers of AI research, warned last week that ministers were not doing enough to guard against the possibility of a super intelligent machine wiping out humanity.

Rishi Sunak admitted at the G7 summit this week that guard-rails would have to be put around it.


Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto venture - and, oh yeah, OpenAI – Fortune

OpenAI CEO Sam Altman helped bring ChatGPT to the world, which sparked the current A.I. race involving Microsoft, Google, and others.

But he's busy with other ventures that could be no less disruptive, and are linked in some ways. This week, Microsoft announced a purchasing agreement with Helion Energy, a nuclear fusion startup primarily backed by Altman. And Worldcoin, a crypto startup involving eye scans cofounded by Altman in 2019, is close to securing hefty new investments, according to Financial Times reporting on Sunday.

Before becoming OpenAI's leader, Altman served as president of the startup accelerator Y Combinator, so it's not entirely surprising that he's involved in more than one venture. But the sheer ambition of the projects, both on their own and collectively, merits attention.

Microsoft announced a deal on Wednesday in which Helion will supply it with electricity from nuclear fusion by 2028. That's bold considering nobody is yet producing electricity from fusion, and many experts believe it's decades away.

During a Stripe conference interview last week, Altman said the audience should be excited about the startup's developments and drew a connection between Helion and artificial intelligence.

"If you really want to make the biggest, most capable super intelligent system you can, you need high amounts of energy," he explained. "And if you have an A.I. that can help you move faster and do better material science, you can probably get to fusion a little bit faster too."

He acknowledged the challenging economics of nuclear fusion, but added, "I think we will probably figure it out."

He added: "And probably we will get to a world where in addition to the cost of intelligence falling dramatically, the cost of energy falls dramatically, too. And if both of those things happen at the same time (I would argue that they are currently the two most important inputs in the whole economy) we get to a super different place."

Worldcoin, still in beta but aiming to launch in the first half of this year, is equally ambitious, as Fortune reported in March. If A.I. takes away our jobs and governments decide that a universal basic income is needed, Worldcoin wants to be the distribution mechanism for those payments. If all goes to plan, it'll be bigger than Bitcoin and approved by regulators across the globe.

That might be a long way off if it ever occurs, but in the meantime the startup might have found a quicker path to monetization with World ID, a kind of badge you receive after being verified by Worldcoin, and a handy way to prove that you're a human rather than an A.I. bot when logging into online platforms. The idea is your World ID would join or replace your user names and passwords.

The only way to really prove a human is a human, the Worldcoin team decided, was via an iris scan. That led to a small orb-shaped device you look into that converts a biometric scanning code into proof of personhood.

When you're scanned, verified, and onboarded to Worldcoin, you're given 25 proprietary crypto tokens, also called Worldcoins. Well over a million people have already participated, though of course the company aims to have tens and then hundreds of millions joining after beta. Naturally such plans have raised a range of privacy concerns, but according to the FT, the firm is now in advanced talks to raise about $100 million.


Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Yahoo News

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of their own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.


"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an image-editing developer powered by AI, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology that allows one person to speak using the voice of another person. The audience of about 150 people was full of AI early adopters: through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.


The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next, Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he says. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying the algorithm is burying my video," Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?



We need to prepare for the public safety hazards posed by artificial intelligence – The Conversation

For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber attacks.

However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.

Over the past 20 years, my colleagues and I along with many other researchers have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making.

We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into the risk and emergency management phases: mitigation or prevention, preparedness, response and recovery.

AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.

As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI-based technologies. These events can occur in all kinds of industries including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health and mining.

Intentional AI hazards are potential threats that are caused by using AI to harm people and properties. AI can also be used to gain unlawful benefits by compromising security and safety systems.

In my view, this simple intentional and unintentional classification may not be sufficient in case of AI. Here, we need to add a new class of emerging threats the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.

Many AI experts have already warned against such potential threats. A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.

Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.

Hazards that have low frequency and low consequence or impact are considered low risk and no additional actions are required to manage them. Hazards that have medium consequence and medium frequency are considered medium risk. These risks need to be closely monitored.

Hazards with high frequency or high consequence, or high in both consequence and frequency, are classified as high risks. These risks need to be reduced by taking additional risk reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
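
As a rough illustration of the matrix logic described above, the short sketch below maps frequency and consequence ratings to a risk class; the handling of mixed low/medium combinations is an assumption, since the article does not spell it out.

# Illustrative only: the qualitative risk matrix described in the text.
LEVELS = ("low", "medium", "high")

def risk_rating(frequency, consequence):
    assert frequency in LEVELS and consequence in LEVELS
    if "high" in (frequency, consequence):
        # High frequency or high consequence (or both): high risk,
        # requiring additional risk reduction and mitigation measures.
        return "high"
    if frequency == "medium" and consequence == "medium":
        return "medium"  # monitor closely
    # Low/low and (as an assumption here) mixed low/medium combinations.
    return "low"

# Example: an AI hazard judged medium-frequency but high-consequence.
print(risk_rating("medium", "high"))  # -> high

On this reading, adding AI hazards to existing matrices is mostly a matter of agreeing on how to rate their frequency and consequence, which is exactly what the article argues has not yet been done.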

Up until now, AI hazards and risks have not been added into the risk assessment matrices much beyond organizational use of AI applications. The time has come when we should quickly start bringing the potential AI risks into local, national and global risk and emergency management.

AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with the AI are starting to emerge.

In 2018, the accounting firm KPMG developed an AI Risk and Controls Matrix. It highlights the risks of using AI by businesses and urges them to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk control measures must be in place before they overwhelm the systems.

Governments have also started developing some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violation of individual rights.

At the government level, the Canadian government issued the Directive on Automated Decision-Making to ensure that federal institutions minimize the risks associated with the AI systems and create appropriate governance mechanisms.

The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced. According to this directive, risk assessments must be conducted by each department to make sure that appropriate safeguards are in place in accordance with the Policy on Government Security.

In 2021, the U.S. Congress tasked the National Institute of Standards and Technology with developing an AI risk management framework for the Department of Defense. The proposed voluntary AI risk assessment framework recommends banning the use of AI systems that present unacceptable risks.

Much of the national-level policy focus on AI has been from national security and global competition perspectives: the national security and economic risks of falling behind in AI technology.

The U.S. National Security Commission on Artificial Intelligence highlighted national security risks associated with AI. These were not from the public threats of the technology itself, but from losing out in the global competition for AI development in other countries, including China.

In its 2017 Global Risk Report, the World Economic Forum highlighted that AI is only one of several emerging technologies that can exacerbate global risk. While assessing the risks posed by AI, the report concluded that, at that time, super-intelligent AI systems remained a theoretical threat.

However, the latest Global Risk Report 2023 does not even mention AI and AI-associated risks, which means that the leaders of the global companies that provide inputs to the report had not viewed AI as an immediate risk.

AI development is progressing much faster than government and corporate policies in understanding, foreseeing and managing the risks. The current global conditions, combined with market competition for AI technologies, make it difficult to think of an opportunity for governments to pause and develop risk governance mechanisms.

While we should collectively and proactively try for such governance mechanisms, we all need to brace for major catastrophic impacts of AI on our systems and societies.

