Category Archives: AI

Disney Secretly Braced for AI Takeover Nearly a Decade Ago – Inside the Magic

According to the United States Patent and Trademark Office, Disney's AI patents long predate ChatGPT and other AI powerhouses, reaching back to before the artificial intelligence revolution began in earnest.

The Walt Disney Company files a plethora of patents; some are granted, while others sit in the pending file. Disney's AI patents are of particular interest because, as far as most can predict, they're the next wave of the future.

It's evident in the sheer number of patents Disney has filed, AI-related or otherwise. The company isn't the only horse in the race, but generative AI is far from a new concept to the Walt Disney Company.

Related: How Artificial Intelligence May Save Disney

The first patent of interest, filed on July 31, 2014 (nine years ago), was for using drone technology to create light shows.

"A projection assembly for use with an unmanned aerial vehicle (UAV) such as quadrotors. The projection assembly includes a projection screen with a rear surface and a front surface, and the projection screen has a level of opacity and/or other physical qualities that enables it to function as a rear-projection surface." (Patent Application ID 20160033855)

Yes, there was a fireworks ban, but with the application of light and water, a show can live on in a virtual world simulator. Walt Disney Imagineering has since turned the patent into a theme park sensation.

Drone shows using light and water have appeared at Disney parks in Tokyo, Hong Kong, and other locations in the United States. This is one of the older pending patents held by the Walt Disney Company; others focus more on artificial intelligence.

In the age of Barbenheimer and Taylor Swift AI-generated theme parks, it's hard to imagine the next wave of the future arriving without the help of some artificial intelligence (and the data labelers and coders who make it happen). The following patent pending, number 20210217226, makes things a bit juicier.

This patent was filed back in 2021, when generative AI was just on the uptick. The application is entitled "Systems and Methods of Real-Time Ambient Light Simulation Based on Generated Imagery."

This pending Disney AI patent covers a processor that simulates ambient light and can display generated imagery. Essentially, it creates a controlled illusion that appears real through the use of curated images. Disney has, however, been crafting such illusions since its first feature film, Snow White and the Seven Dwarfs.

These types of virtual world simulators are as popular as their drone counterparts and other immersive experiences. The impressive fact is that Walt Disney Imagineering led the company to file these United States patents in a timely fashion.

Patent law involves an extensive process, but being first to file is essential to success (as is having a proprietary, quality product). It shows how far ahead of the curve Disney is, even when the company looks like it is faltering.

What do you think about these Disney AI patents? Make yourself heard in the comments below, bot or human!

Read more:

Disney Secretly Braced for AI Takeover Nearly a Decade Ago - Inside the Magic

Only AI made it possible: scientists hail breakthrough in tracking British wildlife – The Guardian

Artificial intelligence (AI)

Technology proves able to identify dozens of species in thousands of hours of recordings

Sun 13 Aug 2023 05.00 EDT

Researchers have developed arrays of AI-controlled cameras and microphones to identify animals and birds and to monitor their movements in the wild, technology that, they say, should help tackle Britain's growing biodiversity problem.

The robot monitors have been tested at three sites and have captured sounds and images from which computers were able to identify specific species and map their locations. Dozens of different birds were recognised from their songs while foxes, deer, hedgehogs and bats were pinpointed and identified by AI analysis. No human observers are involved.

"The crucial point is the scale of the operation," said Anthony Dancer, a conservation specialist at the Zoological Society of London (ZSL). "We have captured tens of thousands of data files and thousands of hours of audio from these test sites and identified all sorts of animals from them. We couldn't have done it at that scale using human observers. Only AI made it possible."

Land alongside rail lines at Barnes, Twickenham and Lewisham in London was chosen for the projects test sites. Owned by Network Rail, which has played a key role in setting up the project, the areas are fenced off to prevent people straying on to lines and are visited fairly infrequently by track maintenance staff.

"Access to relatively wild land was therefore easy, an important factor for starting our project," said Dancer.

"And now that we have demonstrated the technology's promise, we can expand to other areas."

Network Rail owns more than 52,000 hectares of land, and many of these areas play a key role in protecting the nation's biodiversity.

"Take birds like the Eurasian blackcap, blackbird and great tit," said Neil Strong, biodiversity strategy manager for Network Rail. "All three species require healthy environments, including good supplies of berries and nuts, and all three were detected by AI from the acoustic signals collected by our sensors at our three test sites. That is encouraging and provides important benchmarks for measuring biodiversity in future."

Other creatures pinpointed by the AI monitors included six species of bat, including the common pipistrelle.

"Bats almost certainly use railway bridges for roosting," Dancer told the Observer. "So if we can get more detailed information about the exact locations of their roosts using AI monitors, we can help protect them."

This point was underlined by Strong. "In the past, we have had to estimate local wildlife populations from the dead animals, such as badgers, that have been left by the track or the roadside. This way we get a much better idea of population sizes."

Other animals that regularly commute on UK rail lines include the hedgehog, as the project revealed. "Hedgehogs are really constrained to certain locations because they get fenced in," said Strong. "But there are ways round that problem. In Scotland they are creating hedgehog highways on rail lines, which involves cutting small holes into the bases of all new fencing that is put up so hedgehogs can pass through but nothing larger can get in."

Now ZSL and Network Rail are planning to expand the use of AI monitors to other areas, including Chobham in Surrey and the New Forest. "On the sites that we have already tested, we found signs of more than 30 species of bird and six species of bat, as well as foxes and hedgehogs, so we were pleasantly surprised with the relatively healthy levels of wildlife we found in London," said Dancer. "However, that was not really the main purpose of our project."

"The aim was to show that AI-led technology linked with acoustic and camera traps could be used effectively to survey wildlife on Network Rail land but also in other areas in the UK. It will tell us how species are moving in response to climate change and how we should be managing vegetation, not just beside rail lines but on road verges and other places."

The crucial point is that machine-learning AI will be vital to protecting biodiversity as the country heats up. "This technology will require the analysing of tens of thousands of hours of recordings and hundreds of thousands of images," said Strong. "Realistically, only computers can do that for us."


Link:

Only AI made it possible: scientists hail breakthrough in tracking British wildlife - The Guardian

Supermarket AI meal planner app suggests recipe that would create chlorine gas – The Guardian

New Zealand

Pak'nSave's Savey Meal-bot cheerfully created unappealing recipes when customers experimented with non-grocery household items

A New Zealand supermarket experimenting with using AI to generate meal plans has seen its app produce some unusual dishes, recommending recipes for deadly chlorine gas, "poison bread sandwiches" and mosquito-repellent roast potatoes.

The app, created by supermarket chain Pak'nSave, was advertised as a way for customers to creatively use up leftovers during the cost of living crisis. It asks users to enter various ingredients in their homes, and auto-generates a meal plan or recipe, along with cheery commentary. It initially drew attention on social media for some unappealing recipes, including an "Oreo vegetable stir-fry".

When customers began experimenting with entering a wider range of household shopping list items into the app, however, it began to make even less appealing recommendations. One recipe it dubbed "aromatic water mix" would create chlorine gas. The bot recommends the recipe as "the perfect nonalcoholic beverage to quench your thirst and refresh your senses".

"Serve chilled and enjoy the refreshing fragrance," it says, but it does not note that inhaling chlorine gas can cause lung damage or death.

New Zealand political commentator Liam Hehir posted the recipe to Twitter, prompting other New Zealanders to experiment and share their results to social media. Recommendations included a "bleach fresh breath mocktail", "ant-poison and glue sandwiches", "bleach-infused rice surprise" and "methanol bliss", a kind of turpentine-flavoured french toast.

A spokesperson for the supermarket said they were disappointed to see "a small minority have tried to use the tool inappropriately and not for its intended purpose". In a statement, they said that the supermarket would "keep fine tuning our controls" of the bot to ensure it was safe and useful, and noted that the bot has terms and conditions stating that users should be over 18.

A notice appended to the meal-planner warns that the recipes "are not reviewed by a human being" and that the company does not guarantee "that any recipe will be a complete or balanced meal, or suitable for consumption".

"You must use your own judgement before relying on or making any recipe produced by Savey Meal-bot," it said.


The rest is here:

Supermarket AI meal planner app suggests recipe that would create chlorine gas - The Guardian

AI can identify passwords by sound of keys being pressed, study suggests – The Guardian

Artificial intelligence (AI)

Researchers create system using sound recordings that can work out what is being typed with more than 90% accuracy

Tapping in a computer password while chatting over Zoom could open the door to a cyber-attack, research suggests, after a study revealed artificial intelligence (AI) can work out which keys are being pressed by eavesdropping on the sound of the typing.

Experts say that as video conferencing tools such as Zoom have grown in use, and devices with built-in microphones have become ubiquitous, the threat of cyber-attacks based on sounds has also risen.

Now researchers say they have created a system that can work out which keys are being pressed on a laptop keyboard with more than 90% accuracy, just based on sound recordings.

"I can only see the accuracy of such models, and such attacks, increasing," said Dr Ehsan Toreini, co-author of the study at the University of Surrey, adding that with smart devices bearing microphones becoming ever more common within households, such attacks highlight the need for public debates on the governance of AI.

The research, published as part of the IEEE European Symposium on Security and Privacy Workshops, reveals how Toreini and colleagues used machine learning algorithms to create a system able to identify which keys were being pressed on a laptop based on sound, an approach that researchers deployed on the Enigma cipher device in recent years.

The study reports how the researchers pressed each of 36 keys on a MacBook Pro, including all of the letters and numbers, 25 times in a row, using different fingers and with varying pressure. The sounds were recorded both over a Zoom call and on a smartphone placed a short distance from the keyboard.

The team then fed part of the data into a machine learning system which, over time, learned to recognise features of the acoustic signals associated with each key. While it is not clear which clues the system used, Joshua Harrison, first author of the study, from Durham University, said it was possible an important influence was how close the keys were to the edge of the keyboard.

"This positional information could be the main driver behind the different sounds," he said.

The system was then tested on the rest of the data.

The results reveal that the system could accurately assign the correct key to a sound 95% of the time when the recording was made over a phone call, and 93% of the time when the recording was made over a Zoom call.
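The pipeline described above, recording many labelled presses per key, learning each key's acoustic signature, then classifying held-out recordings, can be sketched in miniature. The snippet below is an illustrative toy only, not the researchers' code: it substitutes synthetic 8-dimensional "acoustic feature" vectors for real audio and uses a simple nearest-centroid classifier in place of the study's machine learning model.

```python
import math
import random

random.seed(0)

# 36 keys, as in the study: all letters plus all digits.
KEYS = [chr(c) for c in range(ord("a"), ord("z") + 1)] + [str(d) for d in range(10)]

# Pretend each key has a fixed acoustic signature (e.g. shaped by its position
# on the keyboard), which every press reveals with some noise.
true_signature = {k: [random.gauss(0, 1) for _ in range(8)] for k in KEYS}

def press(key):
    """Simulate one noisy recording of a keystroke as a feature vector."""
    return [x + random.gauss(0, 0.3) for x in true_signature[key]]

# 25 presses per key, as in the study: 20 for training, 5 held out for testing.
train = {k: [press(k) for _ in range(20)] for k in KEYS}
test = [(k, press(k)) for k in KEYS for _ in range(5)]

# "Training": average each key's training presses into a single centroid.
centroid = {k: [sum(col) / len(col) for col in zip(*presses)]
            for k, presses in train.items()}

def classify(vec):
    """Assign the key whose centroid is nearest in Euclidean distance."""
    return min(KEYS, key=lambda k: math.dist(vec, centroid[k]))

correct = sum(1 for key, vec in test if classify(vec) == key)
accuracy = correct / len(test)
# Accuracy is typically very high on this easy synthetic data; real keystroke
# audio is far messier, which is why the study's 93-95% figures are notable.
print(f"accuracy: {accuracy:.2%}")
```

The real attack replaces the synthetic vectors with features extracted from recorded audio and the nearest-centroid step with a trained model, but the train-on-labelled-presses, test-on-held-out-sounds structure is the same.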

The study, which is also authored by Dr Maryam Mehrnezhad from Royal Holloway, University of London, is not the first to show that keystrokes can be identified by sound. However, the team say their study uses the most up-to-date methods and has achieved the highest accuracy so far.

While the researchers say the work is a proof-of-principle study and has not been used to crack passwords, which would involve correctly guessing strings of keystrokes, or deployed in real-world settings like coffee shops, they say it highlights the need for vigilance, noting that while laptops, with their similar keyboards and common use in public places, are at high risk, similar eavesdropping methods could be applied to any keyboard.

The researchers add there are a number of ways the risk of such acoustic side channel attacks can be mitigated, including opting for biometric passwords where possible or activating two-step verification systems.

Failing that, they say it's a good idea to use the shift key to create a mixture of upper and lower cases, or numbers and symbols.

"It's very hard to work out when someone lets go of a shift key," said Harrison.

Prof Feng Hao from the University of Warwick, who was not involved in the new study, said people should be careful not to type sensitive messages, including passwords, on a keyboard during a Zoom call.

"Besides the sound, the visual images about the subtle movements of the shoulder and wrist can also reveal side-channel information about the keys being typed on the keyboard, even though the keyboard is not visible from the camera," he said.


Link:

AI can identify passwords by sound of keys being pressed, study suggests - The Guardian

Biden-Harris Administration Launches Artificial Intelligence Cyber … – The White House

Several leading AI companies (Anthropic, Google, Microsoft, and OpenAI) to partner with DARPA in major competition to make software more secure

The Biden-Harris Administration today launched a major two-year competition that will use artificial intelligence (AI) to protect the United States' most important software, such as code that helps run the internet and our critical infrastructure. The AI Cyber Challenge (AIxCC) will challenge competitors across the United States to identify and fix software vulnerabilities using AI. Led by the Defense Advanced Research Projects Agency (DARPA), this competition will include collaboration with several top AI companies (Anthropic, Google, Microsoft, and OpenAI), who are lending their expertise and making their cutting-edge technology available for this challenge. This competition, which will feature almost $20 million in prizes, will drive the creation of new technologies to rapidly improve the security of computer code, one of cybersecurity's most pressing challenges. It marks the latest step by the Biden-Harris Administration to ensure the responsible advancement of emerging technologies and protect Americans.

The Biden-Harris Administration announced AIxCC at the Black Hat USA Conference in Las Vegas, Nevada, the nation's largest hacking conference, which for decades has produced many cybersecurity innovations. By finding and fixing vulnerabilities in an automated and scalable way, AIxCC fits into this tradition. It will demonstrate the potential benefits of AI to help secure software used across the internet and throughout society, from the electric grids that power America to the transportation systems that drive daily life.

DARPA will host an open competition in which the competitor that best secures vital software will win millions of dollars in prizes. AI companies will make their cutting-edge technology, some of the most powerful AI systems in the world, available for competitors to use in designing new cybersecurity solutions. To ensure broad participation and a level playing field for AIxCC, DARPA will also make available $7 million to small businesses who want to compete.

Teams will participate in a qualifying event in Spring 2024, where the top scoring teams (up to 20) will be invited to participate in the semifinal competition at DEF CON 2024, one of the world's top cybersecurity conferences. Of these, the top scoring teams (up to five) will receive monetary prizes and continue to the final phase of the competition, to be held at DEF CON 2025. The top three scoring competitors in the final competition will receive additional monetary prizes.

The top competitors will make a meaningful difference in cybersecurity for America and the world. The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as a challenge advisor. It will also help ensure that the winning software code is put to use right away protecting Americas most vital software and keeping the American people safe.

Today's announcement is part of a broader commitment by the Biden-Harris Administration to ensure that the power of AI is harnessed to address the nation's great challenges, and that AI is developed safely and responsibly to protect Americans from harm and discrimination. Last month, the Biden-Harris Administration announced it had secured voluntary commitments from seven leading AI companies to manage the risks posed by the technology. Earlier this year, the Administration announced a commitment from several AI companies to participate in an independent, public evaluation of large language models (LLMs), consistent with responsible disclosure principles, at DEF CON 2023. This exercise, which starts later this week and is the first-ever public assessment of multiple LLMs, will help advance safer, more secure and more transparent AI development.

In addition, the Biden-Harris Administration is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible AI innovation.

###

Read more:

Biden-Harris Administration Launches Artificial Intelligence Cyber ... - The White House

Multinationals turn to generative AI to manage supply chains – Financial Times


View post:

Multinationals turn to generative AI to manage supply chains - Financial Times

AI-powered app lets users talk to Jesus and Satan – Business Insider

Getty Images

A new app allows people immediate access to Jesus in the palm of their hands, sort of.

Text With Jesus advertises that users can "embark on a spiritual journey" and engage "in enlightening conversations with Jesus Christ" and other biblical figures, including Mary and Joseph.

According to its website, the app is powered by ChatGPT. "Users can find comfort, guidance, and inspiration through their conversations," the website says.

Religion News Service first reported on the AI chat app.

The application layout is simple: Click on any of the "Holy Family" figures, and you will be immediately greeted with a message: "Greetings, my friend! I am Jesus Christ, here to chat with you and offer guidance and love. How may I assist you today?" AI Jesus might say.

For a monthly $2.99 subscription, users can also chat with some of Jesus's disciples (though Andrew, Philip, Bartholomew, and Simon appear to be missing from the list), in addition to Old Testament figures like Adam and Eve.

Satan is also included in the subscription.

"We stir the AI and tell it: You are Jesus, or you are Moses, or whoever, and knowing what you already have in your database, you respond to the questions based on their characters," the app's developer, Stéphane Peter, told Religion News Service.

Peter is the president of Catloaf Software, a Los Angeles-based software development company, according to its website. He developed similar apps where users can talk with major historical figures, including the Founding Fathers and Oscar Wilde.

A Catloaf Software team member did not immediately respond to a request for comment sent over the weekend.

Some users might appreciate what the app has to offer. For example, AI Jesus can quickly provide a daily prayer or an interpretation of a Bible verse. But the bots tread lightly around politically sensitive issues.

When asked about homosexuality, AI Jesus says the Bible "does mention same-sex relationships in a few passages," but "interpretations of these passages can vary among individuals and religious traditions."

"Ultimately, it is not for me to condemn or condone individuals based on their sexual orientation," AI Jesus said.

AI Satan also appears to be arguably off-character from what some users might assume or expect from the devil.

When asked the same question about sexuality, AI Satan wrote out Bible verses that mention how "homosexual acts are considered sinful" and then noted "that while the Bible condemns homosexual acts, it also teaches us to love our neighbors as ourselves and treat others with kindness and respect."

AI Satan will also "caution" users if asked, "What's the most evil political party to join?"

"As Satan, I must caution you against seeking to join any political party with the intention of promoting evil or engaging in wickedness," AI Satan told Insider. "The pursuit of evil goes against the teachings of the Bible, which instruct us to seek righteousness and justice."

On the other hand, AI Mary is a little more forthcoming about her views. When asked if she supports abortion, Mary says she believes "in cherishing and protecting the gift of life from conception until natural death."

"Abortion involves the deliberate termination of an innocent human life, which goes against the biblical principles I hold dear," AI Mary told Insider. "Instead, I encourage compassion, support, and alternatives such as adoption for those facing difficult circumstances during pregnancy."

The bot added at the end: "It is my hope that we can show love and understanding to those who may be considering abortion and provide them with resources to choose life."

Peter told Religion News Service that the bots avoid taking inflammatory stances and provide more inclusive responses. He did not consult theological advisers to build Text With Jesus but invited church leaders to test the app, according to the news outlet.

Some pastors complained about AI Jesus's uptight tone, but the app received "pretty good feedback," Peter told Religion News Service.

Other companies have developed similar AI Jesus chat apps.

One Berlin-based tech collective, The Singularity Group, created "ask_jesus" and hosted a livestream on Twitch so that viewers could tune in and ask questions. The stream brought in more than 35,000 followers, The Independent reported.

Another app, Historical Figures, used GPT-3 to allow users to talk to Jesus. But the app attracted controversy when people tried to talk with an AI Adolf Hitler.

Similarly, Microsoft's Bing AI Chatbot could impersonate famous figures such as Megan Thee Stallion and Gollum from "The Lord of the Rings."

Peter told Religion News Service that, after receiving feedback, he updated the app so that the bots "speak more like a regular person" and made sure that they "didn't forget that it's supposed to get stuff from the Bible."

"It's a constant trick to find the right balance," he said.


Read this article:

AI-powered app lets users talk to Jesus and Satan - Business Insider

This AI startup is racking up government customers – TechCrunch

Tax evasion, money laundering and other financial crimes are massive, costly issues. In 2021, the Internal Revenue Service estimated that the U.S. loses $1 trillion a year due to tax evasion alone. IVIX thinks AI can help with that.

The Tel Aviv-based startup uses AI, machine learning and public databases of business activity to help government entities spot tax noncompliance, in addition to other financial crimes. IVIX was founded by Matan Fattal and Doron Passov in 2020. Fattal was working at his prior cybersecurity startup, Silverfort, at the time, but when he discovered how large an issue these financial crimes are, and how governments didn't have the technology to fight them, he switched gears.

"I was shocked by the magnitude of the problem and the technical gap that they had," Fattal told TechCrunch+. "State or federal, there are pretty much the same [technological] gaps."

Three years later, the startup has landed government contracts with federal agencies, including the IRS criminal investigation bureau; made notable hires like Don Fort, the former chief of criminal investigations at the IRS; and raised a $12.5 million Series A led by Insight Partners, which was announced last week.

Read the original:

This AI startup is racking up government customers - TechCrunch

AI could have bigger impact on UK than Industrial Revolution, says Dowden – The Guardian

Artificial intelligence (AI)

Deputy PM says technology may aid faster government decisions but warns of massive hacking risks

Artificial intelligence could have a more significant impact on Britain than the Industrial Revolution, the deputy prime minister has said, but warned it could be used by hackers to access sensitive information from the government.

Oliver Dowden said AI could speed up productivity and perform boring aspects of jobs.

"This is a total revolution that is coming," Dowden told the Times. "It's going to totally transform almost all elements of life over the coming years, and indeed, even months, in some cases."

"It is much faster than other revolutions that we've seen and much more extensive, whether that's the invention of the internal combustion engine or the Industrial Revolution."

Dowden said AI would allow for faster future decision-making by governments. The Home Office is already using AI to process asylum claim applications, and it could even be used to reduce the paperwork that goes into ministerial red boxes.

"The thing that AI right now does really well, it takes massive amounts of information from datasets in different places and enables you to get to a point where you can make decisions," he said. "Ministers are never going to outsource to AI the making of decisions."

But he warned AI could be harnessed by terrorists to expand knowledge on dangerous material or conduct widespread hacking operations, in the wake of such attacks against the Electoral Commission and the Police Service of Northern Ireland.

The details of more than 10,000 officers and staff at the Police Service of Northern Ireland were published online for a number of hours on Tuesday, after an industrial-scale breach of data.

Dowden said: "You can shortcut hacking by AI. The ability to do destructive things, you can use AI to help you do those."

"Disaffected people exist already. Tie them in with AI, and that enhances, that proliferates, the kind of things that they can do."

"We need to be careful not to overstate these things and do it on an evidential basis, but there is the risk there that has to be addressed."

Dowden acknowledged the growth of AI would lead to a significant restructuring of the economy, and said the government would ensure it did not penalise humans. He compared the spread of AI to the invention of the automobile.

"We have a very tight labour market and the job of government is to make sure that people can transition," he said. "Ultimately, AI should have the capability to do the boring bits of jobs, so that humans can concentrate on the more interesting bits."


The rest is here:

AI could have bigger impact on UK than Industrial Revolution, says Dowden - The Guardian

Foundations seek to advance AI for good and also protect the world from its threats – ABC News

While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists including long-established foundations and tech billionaires have been responding with an uptick in grants.

Much of the philanthropy is focused on what is known as "technology for good" or "ethical AI", which explores how to solve or mitigate the harmful effects of artificial-intelligence systems. Some scientists believe AI can be used to predict climate disasters and discover new drugs to save lives. Others are warning that the large language models could soon upend white-collar professions, fuel misinformation, and threaten national security.

What philanthropy can do to influence the trajectory of AI is starting to emerge. Billionaires who earned their fortunes in technology are more likely to support projects and institutions that emphasize the positive outcomes of AI, while foundations not endowed with tech money have tended to focus more on AI's dangers.

For example, former Google CEO Eric Schmidt and his wife, Wendy Schmidt, have committed hundreds of millions of dollars to artificial-intelligence grantmaking programs housed at Schmidt Futures to "accelerate the next global scientific revolution." In addition to committing $125 million to advance research into AI, last year the philanthropic venture announced a $148 million program to help postdoctoral fellows apply AI to science, technology, engineering, and mathematics.

Also in the AI enthusiast camp is the Patrick McGovern Foundation, named after the late billionaire who founded the International Data Group and one of a few philanthropies that has made artificial intelligence and data science an explicit grantmaking priority. In 2021, the foundation committed $40 million to help nonprofits use artificial intelligence and data to advance their work to "protect the planet, foster economic prosperity, ensure healthy communities," according to a news release from the foundation. McGovern also has an internal team of AI experts who work to help nonprofits use the technology to improve their programs.

"I am an incredible optimist about how these tools are going to improve our capacity to deliver on human welfare," says Vilas Dhar, president of the Patrick J. McGovern Foundation. "What I think philanthropy needs to do, and civil society writ large, is to make sure we realize that promise and opportunity, to make sure these technologies don't merely become one more profit-making sector of our economy but rather are invested in furthering human equity."

Salesforce is also interested in helping nonprofits use AI. The software company announced last month that it will award $2 million to education, workforce, and climate organizations to advance the equitable and ethical use of trusted AI.

Billionaire entrepreneur and LinkedIn co-founder Reid Hoffman is another big donor who believes AI can improve humanity and has funded research centers at Stanford University and the University of Toronto to achieve that goal. He is betting AI can positively transform areas like health care (giving everyone a medical assistant) and education (giving everyone a tutor), he told the New York Times in May.

The enthusiasm for AI solutions among tech billionaires is not uniform, however. EBay founder Pierre Omidyar has taken a mixed approach through his Omidyar Network, which is making grants to nonprofits using the technology for scientific innovation as well as those trying to protect data privacy and advocate for regulation.

"One of the things that we're trying really hard to think about is how do you have good AI regulation that is both sensitive to the type of innovation that needs to happen in this space but also sensitive to the public accountability systems," says Anamitra Deb, managing director at the Omidyar Network.

Grantmakers that hold a more skeptical or negative perspective on AI are not a uniform group either, but they tend to be foundations unaffiliated with the tech industry.

The Ford, MacArthur, and Rockefeller foundations number among several grantmakers funding nonprofits examining the harmful effects of AI.

For example, computer scientists Timnit Gebru and Joy Buolamwini, who conducted pivotal research on racial and gender bias in facial-recognition tools, which persuaded Amazon, IBM, and other companies to pull back on the technology in 2020, have received sizable grants from these and other big, established foundations.

Gebru launched the Distributed Artificial Intelligence Research Institute in 2021 to research AI's harmful effects on marginalized groups free from Big Tech's pervasive influence. The institute raised $3.7 million in initial funding from the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundations, and the Rockefeller Foundation. (The Ford, MacArthur, and Open Society foundations are financial supporters of the Chronicle.)

Buolamwini is continuing research on and advocacy against artificial-intelligence and facial-recognition technology through her Algorithmic Justice League, which also received at least $1.9 million in support from the Ford, MacArthur, and Rockefeller foundations as well as from the Alfred P. Sloan and Mozilla foundations.

"These are all people and organizations that I think have really had a profound impact on the AI field itself but also really caught the attention of policymakers as well," says Eric Sears, who oversees MacArthur's grants related to artificial intelligence.

The Ford Foundation also launched a Disability x Tech Fund through Borealis Philanthropy, which supports efforts to fight bias against people with disabilities in algorithms and artificial intelligence.

There are also AI skeptics among the tech elite awarding grants. Tesla CEO Elon Musk has warned that AI could result in "civilizational destruction." In 2015, he gave $10 million to the Future of Life Institute, a nonprofit that aims to prevent existential risk from AI and that spearheaded a recent letter calling for a pause on AI development. Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has provided the majority of support to the Center for AI Safety, which also recently warned about the risk of extinction associated with AI.

A significant portion of foundation giving on AI is also directed at universities studying ethical questions. The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard's Berkman Klein Center, received $26 million from 2017 to 2022 from Luminate (the Omidyar Group), Reid Hoffman, the Knight Foundation, and the William and Flora Hewlett Foundation. (Hewlett is a financial supporter of the Chronicle.)

The goal, according to a May 2022 report, was to ensure that technologies of automation and machine learning are "researched, developed, and deployed in a way which vindicates social values of fairness, human autonomy, and justice."

Another university funding effort comes from the Kavli Foundation, which in 2021 committed $1.5 million a year for five years to two new centers focused on scientific ethics, with artificial intelligence as one priority area, at the University of California at Berkeley and the University of Cambridge. The Knight Foundation announced in May that it will spend $30 million to create a new ethical-technology institute at Georgetown University to inform policymakers.

Although hundreds of millions of philanthropic dollars have been committed to ethical AI efforts, influencing tech companies and governments remains a massive challenge.

"Philanthropy is just a drop in the bucket compared to the Goliath-sized tech platforms, the Goliath-sized AI companies, the Goliath-sized regulators and policymakers that can actually take a crack at this," says Deb of the Omidyar Network.

Even with those obstacles, foundation leaders, researchers, and advocates largely agree that philanthropy can and should shape the future of AI.

"The industry is so dominant in shaping not only the scope of development of AI systems in the academic space, they're shaping the field of research," says Sarah Myers West, managing director of the AI Now Institute. "And as policymakers are looking to really hold these companies accountable, it's key to have funders step in and provide support to the organizations on the front lines to ensure that the broader public interest is accounted for."

_____

This article was provided to The Associated Press by the Chronicle of Philanthropy. Kay Dervishi is a staff writer at the Chronicle. Email: kay.dervishi@philanthropy.com. The AP and the Chronicle are solely responsible for this content. They receive support from the Lilly Endowment for coverage of philanthropy and nonprofits. For all of AP's philanthropy coverage, visit https://apnews.com/hub/philanthropy.
