Category Archives: Artificial Intelligence
Artificial intelligence technology behind ChatGPT was built in Iowa with a lot of water – The Associated Press
DES MOINES, Iowa (AP) -- The cost of building an artificial intelligence product like ChatGPT can be hard to measure.
But one thing Microsoft-backed OpenAI needed for its technology was plenty of water, pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
But they're often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI's most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it was literally made next to cornfields west of Des Moines.
Building a large language model requires analyzing patterns across a huge trove of human-written text. All of that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water, often to a cooling tower outside their warehouse-sized buildings.
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.
"It's fair to say the majority of the growth is due to AI, including its heavy investment in generative AI and partnership with OpenAI," said Shaolei Ren, a researcher at the University of California, Riverside, who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.
In a paper due to be published later this year, Ren's team estimates ChatGPT gulps up 500 milliliters of water (close to what's in a 16-ounce water bottle) every time you ask it a series of between 5 and 50 prompts or questions. The range varies depending on where its servers are located and the season. The estimate includes indirect water usage that the companies don't measure, such as the water needed to cool the power plants that supply the data centers with electricity.
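Ren's figures imply a wide per-prompt range. The short calculation below is a back-of-the-envelope sketch based only on the numbers reported above (not on the paper itself), dividing the 500-milliliter estimate across the 5-to-50-prompt range:

```python
# Rough per-prompt water estimate implied by the figures above.
# Assumption (ours, not the paper's): the 500 mL covers one full series of
# prompts, and the 5-50 range reflects server location and season.
WATER_PER_SERIES_ML = 500
PROMPTS_LOW, PROMPTS_HIGH = 5, 50

low_estimate = WATER_PER_SERIES_ML / PROMPTS_HIGH   # about 10 mL per prompt
high_estimate = WATER_PER_SERIES_ML / PROMPTS_LOW   # about 100 mL per prompt

print(f"Implied water use per prompt: {low_estimate:.0f}-{high_estimate:.0f} mL")
```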
"Most people are not aware of the resource usage underlying ChatGPT," Ren said. "If you're not aware of the resource usage, then there's no way that we can help conserve the resources."
Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work. Google's spike wasn't uniform -- it was steady in Oregon, where its water use has attracted public attention, while doubling outside Las Vegas. It was also thirsty in Iowa, drawing more potable water to its Council Bluffs data centers than anywhere else.
In response to questions from The Associated Press, Microsoft said in a statement this week that it is investing in research to measure AI's energy and carbon footprint "while working on ways to make large systems more efficient, in both training and application."
"We will continue to monitor our emissions, accelerate progress while increasing our use of clean energy to power data centers, purchasing renewable energy, and other efforts to meet our sustainability goals of being carbon negative, water positive and zero waste by 2030," the company's statement said.
OpenAI echoed those comments in its own statement Friday, saying it's giving "considerable thought" to the best use of computing power.
"We recognize training large models can be energy and water-intensive and work to improve efficiencies," it said.
Microsoft made its first $1 billion investment in San Francisco-based OpenAI in 2019, more than two years before the startup introduced ChatGPT and sparked worldwide fascination with AI advancements. As part of the deal, the software giant would supply computing power needed to train the AI models.
To do at least some of that work, the two companies looked to West Des Moines, Iowa, a city of 68,000 people where Microsoft has been amassing data centers to power its cloud computing services for more than a decade. Its fourth and fifth data centers are due to open there later this year.
"They're building them as fast as they can," said Steve Gaer, who was the city's mayor when Microsoft came to town. Gaer said the company was attracted to the city's commitment to building public infrastructure and contributed a staggering sum of money through tax payments that support that investment.
"But, you know, they were pretty secretive on what they're doing out there," he added.
Microsoft first said it was developing one of the world's most powerful supercomputers for OpenAI in 2020, declining to reveal its location to AP at the time but describing it as a single system with more than 285,000 cores of conventional semiconductors and 10,000 graphics processors, a kind of chip that's become crucial to AI workloads.
Experts have said it can make sense to "pretrain" an AI model at a single location because of the large amounts of data that need to be transferred between computing cores.
It wasn't until late May that Microsoft's president, Brad Smith, disclosed that it had built its advanced AI supercomputing data center in Iowa, exclusively to enable OpenAI to train what has become its fourth-generation model, GPT-4. The model now powers premium versions of ChatGPT and some of Microsoft's own products and has accelerated a debate about containing AI's societal risks.
"It was made by these extraordinary engineers in California, but it was really made in Iowa," Smith said.
In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft's data centers in Arizona that consume far more water for the same computing demand.
"So if you are developing AI models within Microsoft, then you should schedule your training in Iowa instead of in Arizona," Ren said. "In terms of training, there's no difference. In terms of water consumption or energy consumption, there's a big difference."
For much of the year, Iowa's weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure.
That can still be a lot of water, especially in the summer. In July 2022, the month before OpenAI says it completed its training of GPT-4, Microsoft pumped in about 11.5 million gallons of water to its cluster of Iowa data centers, according to the West Des Moines Water Works. That amounted to about 6% of all the water used in the district, which also supplies drinking water to the city's residents.
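As a quick sanity check on those two figures (an illustrative calculation, not data from the water works), the 6% share implies a district-wide total for that month of roughly 190 million gallons:

```python
# Back-of-the-envelope check of the July 2022 figures reported above.
microsoft_gallons = 11.5e6   # Microsoft's reported July 2022 usage
share_of_district = 0.06     # about 6% of all water used in the district

implied_district_total = microsoft_gallons / share_of_district
print(f"Implied district total for July 2022: ~{implied_district_total / 1e6:.0f} million gallons")
```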
In 2022, a document from the West Des Moines Water Works said it and the city government "will only consider future data center projects" from Microsoft if those projects can demonstrate and implement technology to significantly reduce peak water usage from the current levels to preserve the water supply for residential and other commercial needs.
Microsoft said Thursday it is working directly with the water works to address its feedback. In a written statement, the water works said the company has been a good partner and has been working with local officials to reduce its water footprint while still meeting its needs.
-
O'Brien reported from Providence, Rhode Island.
The Associated Press and OpenAI have a licensing agreement that allows for part of AP's text archives to be used to train the tech company's large language model. AP receives an undisclosed fee for use of its content.
See more here:
Artificial intelligence technology behind ChatGPT was built in Iowa with a lot of water - The Associated Press
Snackable artificial intelligence, expert AI, and the pharma industry – STAT
While it's widely accepted that the pharma industry is innovative in R&D, it is also true that it can be slow at embracing technological revolutions. Many people have criticized pharma companies for being slow to adopt AI. Indeed, some CEOs I talk to are concerned about too widely adopting AI, citing fears of unknown threats.
But as the CEO of Sanofi, I don't believe those challenges should guide our thinking or adoption of AI in the pharma business, as AI has the potential to improve and reinvent the way our business operates. AI changes the way we exchange information by connecting, in real time, business units and functions that may have been operating independently. That real-time exchange of data greatly accelerates and enhances business operations.
But we have to be thoughtful about how we use AI. While some see AI adoption as a way to improve efficiency (for example, through the acceleration and automation of repetitive tasks), I think its real promise lies in insights and better decision intelligence, which will translate into better medicines, delivered more quickly to the right patients, ultimately improving people's lives.
Discovering innovative medicines is increasingly challenging, and the bar for differentiation, safety, and efficacy is getting higher. Expert AI is about giving scientists the opportunity to benefit from massive computing power, machine learning, and trained algorithms for expanding the druggable universe. AI-enabled screening that can sift through billions of possible molecules allows R&D teams to shorten the search for disease drivers and potential drug candidates. We can also be broader in the diseases we target, at low incremental cost.
The development of expert AI for the discovery of new medicines will concern a relatively limited portion of the pharma workforce: those with highly specialized skills and knowledge at the intersection of biology, chemistry, and data science, working in collaboration with tech and startup companies.
But expert AI is not the only approach that pharma should take. The promise of using AI exists across the value chain of the industry, as it can deliver insights and ultimately better outcomes, end to end across the enterprise. That's the importance of fostering the use of snackable AI inside organizations.
Snackable AI is about applications that everyone across an organization can use on a daily basis. Apps can capture and aggregate a 360-degree view of the whole company, from finance to procurement and from supply to quality. Such apps can tell us what is going well and highlight where there are potential problems. They can also give recommendations and nudge users by suggesting next steps that can help fix an emerging problem, which is useful at that moment in time. No individual could process all that information at once.
Moreover, senior leadership can see the data at the same time as the teams, which means it is not polished or interpreted in advance, erasing levels of hierarchy in gleaning insights. Thankfully, it also means a very different way of working, with fewer Excel sheets and PowerPoint slides! In some cases, budgeting can entail thousands of slides; AI can cut that number to several dozen. We can eliminate wasted time. If we can democratize the data, there is less time spent in meetings and more focus on getting insights fast.
So, while expert AI is about giving specialized R&D teams bespoke tools and technologies to find breakthroughs for patients, snackable AI is about democratizing access to a company's data and helping the largest parts of an organization make better everyday decisions. That will in turn lead to better allocation of resources and, ultimately, to better outcomes for patients.
The rapid progress of AI has triggered debate about the power of the technology, and its risks. Of course, we have to have good governance, but we need to allow people the ability to play with AI in a curated environment. With the right guardrails in place, we can greatly enhance our decision intelligence and get breakthroughs and real added value. When people aren't producing PowerPoint or Excel sheets or debating the numbers, they can spend more time on seeking solutions rather than simply trying to identify the problem.
The journey is as much cultural as technological. And it must start at the top. Most leaders of large organizations started their careers in an analog world and must catch up to better master the basics of digital, data, and artificial intelligence. Resistance to implementing AI can exist among teams for a number of reasons: fear of process disruption, wrong decisions, and job elimination, to name a few. A key to greater adoption of snackable AI is a leader's ability to demonstrate how this new technology can help teams shed menial tasks and transactional work and focus instead on making better decisions, founded more on facts and less on emotions.
We need to get the base of our companies nudged into better decisions, every day, in a nonpolitical and nonconfrontational way. As we innovate with AI, the results achieved, such as more precise and better medicines and the possibility of operating in a more efficient way, will benefit many companies as the technology learns and improves. The advantage will go to those who operationalize it faster, who live it, because much of AI is about behavioral change. You can be slow and get replaced, or act fast and be brave, making AI a key driver of progress and better decision intelligence in companies.
Paul Hudson is CEO of Sanofi.
Follow this link:
Snackable artificial intelligence, expert AI, and the pharma industry - STAT
Governor Newsom Signs Executive Order to Prepare California for … – Office of Governor Gavin Newsom
WHAT YOU NEED TO KNOW: California is the global hub for generative artificial intelligence (GenAI), and we are the natural leader in this emerging field of technology tools that could very well change the world. To capture its benefits for the good of society, but also to protect against its potential harms, Governor Newsom issued an executive order today laying out how California's measured approach will focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world's AI leader.
SACRAMENTO -- With GenAI's wide-ranging potential for Californians and the state's economy, Governor Gavin Newsom today signed an executive order to study the development, use, and risks of artificial intelligence (AI) technology throughout the state and to develop a deliberate and responsible process for evaluation and deployment of AI within state government.
WHAT GOVERNOR NEWSOM SAID: "This is a potentially transformative technology, comparable to the advent of the internet, and we're only scratching the surface of understanding what GenAI is capable of. We recognize both the potential benefits and risks these tools enable. We're neither frozen by the fears nor hypnotized by the upside. We're taking a clear-eyed, humble approach to this world-changing technology. Asking questions. Seeking answers from experts. Focused on shaping the future of ethical, transparent, and trustworthy AI. Doing what California always does: leading the world in technological progress."
AI IN CALIFORNIA: For decades, California has been a global leader in education, innovation, research, development, talent, entrepreneurship, and new technologies. As these technologies continue to grow and develop, California has established itself as the world leader in GenAI innovation, with 35 of the world's top 50 AI companies and a quarter of all AI patents, conference papers, and companies globally.
California is also home to world-leading GenAI research institutions, the University of California, Berkeley's College of Computing, Data Science, and Society and Stanford University's Institute for Human-Centered Artificial Intelligence, providing a unique opportunity for academic research and government collaboration.
WHATS IN THE EXECUTIVE ORDER
To deploy GenAI ethically and responsibly throughout state government, protect and prepare for potential harms, and remain the world's AI leader, the Governor's executive order includes a number of provisions:
Risk-Analysis Report: Direct state agencies and departments to perform a joint risk analysis of potential threats to and vulnerabilities of California's critical energy infrastructure posed by the use of GenAI.
Procurement Blueprint: To support a safe, ethical, and responsible innovation ecosystem inside state government, agencies will issue general guidelines for public sector procurement, uses, and required training for application of GenAI, building on the White House's Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology's AI Risk Management Framework. State agencies and departments will consider procurement and enterprise use opportunities where GenAI can improve the efficiency, effectiveness, accessibility, and equity of government operations.
Beneficial Uses of GenAI Report: Direct state agencies and departments to develop a report examining the most significant and beneficial uses of GenAI in the state. The report will also explain the potential harms and risks for communities, government, and state government workers.
Deployment and Analysis Framework: Develop guidelines for agencies and departments to analyze the impact that adopting GenAI tools may have on vulnerable communities. The state will establish the infrastructure needed to conduct pilots of GenAI projects, including California Department of Technology-approved environments or "sandboxes" to test such projects.
State Employee Training: To support California's state government workforce and prepare for the next generation of skills needed to thrive in the GenAI economy, agencies will provide training for state government workers to use state-approved GenAI to achieve equitable outcomes, and will establish criteria to evaluate the impact of GenAI on the state government workforce.
GenAI Partnership and Symposium: Establish a formal partnership with the University of California, Berkeley and Stanford University to consider and evaluate the impacts of GenAI on California and what efforts the state should undertake to advance its leadership in this industry. The state and the institutions will develop and host a joint summit in 2024 to engage in meaningful discussions about the impacts of GenAI on California and its workforce.
Legislative Engagement: Engage with Legislative partners and key stakeholders in a formal process to develop policy recommendations for responsible use of AI, including any guidelines, criteria, reports, and/or training.
Evaluate Impacts of AI on an Ongoing Basis: Periodically evaluate the potential impact of GenAI on regulatory issues under the respective agency's, department's, or board's authority and recommend necessary updates as a result of this evolving technology.
Read the Executive Order Here
The Administration will work throughout the next year, in collaboration with our state's workforce, to implement the provisions of the executive order, and engage the Legislature and stakeholders to develop policy recommendations.
###
Excerpt from:
Governor Newsom Signs Executive Order to Prepare California for ... - Office of Governor Gavin Newsom
Artificial intelligence technology behind ChatGPT was built in Iowa with a lot of water – ABC News
DES MOINES, Iowa -- The cost of building an artificial intelligence product like ChatGPT can be hard to measure.
But one thing Microsoft-backed OpenAI needed for its technology was plenty of water, pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
But theyre often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI's most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it was literally made next to cornfields west of Des Moines.
Building a large language model requires analyzing patterns across a huge trove of human-written text. All of that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water often to a cooling tower outside its warehouse-sized buildings.
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.
Its fair to say the majority of the growth is due to AI, including its heavy investment in generative AI and partnership with OpenAI, said Shaolei Ren, a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.
In a paper due to be published later this year, Rens team estimates ChatGPT gulps up 500 milliliters of water (close to whats in a 16-ounce water bottle) every time you ask it a series of between 5 to 50 prompts or questions. The range varies depending on where its servers are located and the season. The estimate includes indirect water usage that the companies dont measure such as to cool power plants that supply the data centers with electricity.
Most people are not aware of the resource usage underlying ChatGPT, Ren said. If youre not aware of the resource usage, then theres no way that we can help conserve the resources.
Google reported a 20% growth in water use in the same period, which Ren also largely attributes to its AI work. Googles spike wasnt uniform -- it was steady in Oregon where its water use has attracted public attention, while doubling outside Las Vegas. It was also thirsty in Iowa, drawing more potable water to its Council Bluffs data centers than anywhere else.
In response to questions from The Associated Press, Microsoft said in a statement this week that it is investing in research to measure AI's energy and carbon footprint "while working on ways to make large systems more efficient, in both training and application.
We will continue to monitor our emissions, accelerate progress while increasing our use of clean energy to power data centers, purchasing renewable energy, and other efforts to meet our sustainability goals of being carbon negative, water positive and zero waste by 2030, the company's statement said.
OpenAI echoed those comments in its own statement Friday, saying it's giving considerable thought" to the best use of computing power.
We recognize training large models can be energy and water-intensive" and work to improve efficiencies, it said.
Microsoft made its first $1 billion investment in San Francisco-based OpenAI in 2019, more than two years before the startup introduced ChatGPT and sparked worldwide fascination with AI advancements. As part of the deal, the software giant would supply computing power needed to train the AI models.
To do at least some of that work, the two companies looked to West Des Moines, Iowa, a city of 68,000 people where Microsoft has been amassing data centers to power its cloud computing services for more than a decade. Its fourth and fifth data centers are due to open there later this year.
Theyre building them as fast as they can, said Steve Gaer, who was the city's mayor when Microsoft came to town. Gaer said the company was attracted to the city's commitment to building public infrastructure and contributed a staggering sum of money through tax payments that support that investment.
But, you know, they were pretty secretive on what theyre doing out there, he added.
Microsoft first said it was developing one of the world's most powerful supercomputers for OpenAI in 2020, declining to reveal its location to AP at the time but describing it as a single system with more than 285,000 cores of conventional semiconductors, and 10,000 graphics processors a kind of chip that's become crucial to AI workloads.
Experts have said it can make sense to "pretrain" an AI model at a single location because of the large amounts of data that need to be transferred between computing cores.
It wasn't until late May that Microsoft's president, Brad Smith, disclosed that it had built its advanced AI supercomputing data center in Iowa, exclusively to enable OpenAI to train what has become its fourth-generation model, GPT-4. The model now powers premium versions of ChatGPT and some of Microsoft's own products and has accelerated a debate about containing AI's societal risks.
It was made by these extraordinary engineers in California, but it was really made in Iowa, Smith said.
In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft's data centers in Arizona that consume far more water for the same computing demand.
So if you are developing AI models within Microsoft, then you should schedule your training in Iowa instead of in Arizona," Ren said. "In terms of training, theres no difference. In terms of water consumption or energy consumption, theres a big difference.
For much of the year, Iowa's weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure.
That can still be a lot of water, especially in the summer. In July 2022, the month before OpenAI says it completed its training of GPT-4, Microsoft pumped in about 11.5 million gallons of water to its cluster of Iowa data centers, according to the West Des Moines Water Works. That amounted to about 6% of all the water used in the district, which also supplies drinking water to the city's residents.
In 2022, a document from the West Des Moines Water Works said it and the city government will only consider future data center projects" from Microsoft if those projects can demonstrate and implement technology to significantly reduce peak water usage from the current levels to preserve the water supply for residential and other commercial needs.
Microsoft said Thursday it is working directly with the water works to address its feedback. In a written statement, the water works said the company has been a good partner and has been working with local officials to reduce its water footprint while still meeting its needs.
-
O'Brien reported from Providence, Rhode Island.
The Associated Press and OpenAI have a licensing agreement that allows for part of AP's text archives to be used to train the tech companys large language model. AP receives an undisclosed fee for use of its content.
Excerpt from:
Artificial intelligence technology behind ChatGPT was built in Iowa with a lot of water - ABC News
TargetRecruit Unveils Copilot: Revolutionising Artificial Intelligence for the Recruitment Industry – Yahoo Finance
SYDNEY, Sept. 11, 2023 /PRNewswire/ -- TargetRecruit is thrilled to announce Copilot, the company's first Generative AI feature and a leap forward in establishing the foundation for diverse native AI functionality within the TargetRecruit platform.
Copilot is a feature that elevates user interaction with GPT-based models through seamless text generation capabilities, based on prompt input and context. Copilot leverages automated prompts to craft comprehensive, tailored job descriptions that perfectly match recruitment needs, save time, and streamline recruiter workflows with just a few clicks. Copilot's user-friendly configuration empowers customisation, with the initial integration including OpenAI's ChatGPT.
Underpinning Copilot is an advanced AI Integration Framework designed to seamlessly integrate with any REST API-based Generative AI API, allowing the flexibility to connect with a wide array of AI models in the future. Enabling plug-and-play capabilities with preferred AI services will pave the way for a series of upcoming AI capabilities that will accelerate sales and recruiting productivity and efficiency.
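The announcement doesn't describe the framework's internals, so the sketch below is only a rough illustration of the plug-and-play pattern it gestures at: a provider-agnostic adapter that any REST-based text generation API could sit behind. The class names and the job-description helper are hypothetical (not TargetRecruit's API); the endpoint and payload shown for the OpenAI backend follow OpenAI's public chat completions REST interface.

```python
# Hypothetical sketch of a provider-agnostic generative AI adapter.
# None of these names come from TargetRecruit; they only illustrate the
# "plug into any REST-based generative AI API" idea described above.
import json
import urllib.request


class GenerativeAIProvider:
    """Minimal interface a pluggable text-generation backend would satisfy."""

    def generate(self, prompt: str, context: str = "") -> str:
        raise NotImplementedError


class OpenAIChatProvider(GenerativeAIProvider):
    """Example backend that POSTs to OpenAI's chat completions REST endpoint."""

    def __init__(self, api_key: str, model: str = "gpt-3.5-turbo"):
        self.api_key = api_key
        self.model = model

    def generate(self, prompt: str, context: str = "") -> str:
        payload = {
            "model": self.model,
            "messages": [
                {"role": "system", "content": context or "You write job descriptions."},
                {"role": "user", "content": prompt},
            ],
        }
        request = urllib.request.Request(
            "https://api.openai.com/v1/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        return body["choices"][0]["message"]["content"]


def draft_job_description(provider: GenerativeAIProvider, role: str) -> str:
    """A Copilot-style call: turn a short role brief into a full description."""
    return provider.generate(f"Write a detailed job description for: {role}")
```

Supporting a different vendor would then just mean adding another subclass that implements generate(), which is the kind of plug-and-play flexibility the announcement describes.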
Copilot represents a significant milestone in TargetRecruit's commitment to excellence where the power of Artificial Intelligence is propelling recruitment software into an era of unparalleled efficiency and innovation. As we move forward, we are excited to continue leading the way in recruitment software and artificial intelligence.
About TargetRecruit
TargetRecruit provides a powerful CRM/ATS, sales, and middle office solution built on Salesforce, the world's #1 platform. Headquartered in Houston, with offices in London, Sydney, and Bangalore, TargetRecruit employs over 100 people globally. To learn more, visit https://au.targetrecruit.com/.
Media contact: marketing@targetrecruit.com, +61 (0) 2 8365 3160
View original content:https://www.prnewswire.com/apac/news-releases/targetrecruit-unveils-copilot-revolutionising-artificial-intelligence-for-the-recruitment-industry-301922299.html
SOURCE TargetRecruit
Follow this link:
TargetRecruit Unveils Copilot: Revolutionising Artificial Intelligence for the Recruitment Industry - Yahoo Finance
Artificial Intelligence and Education: A Reading List – JSTOR Daily
How should education change to address, incorporate, or challenge today's AI systems, especially powerful large language models? What role should educators and scholars play in shaping the future of generative AI? The release of ChatGPT in November 2022 triggered an explosion of news, opinion pieces, and social media posts addressing these questions. Yet many are not aware of the current and historical body of academic work that offers clarity, substance, and nuance to enrich the discourse.
Linking the terms AI and education invites a constellation of discussions. This selection of articles is hardly comprehensive, but it includes explanations of AI concepts and provides historical context for todays systems. It describes a range of possible educational applications as well as adverse impacts, such as learning loss and increased inequity. Some articles touch on philosophical questions about AI in relation to learning, thinking, and human communication. Others will help educators prepare students for civic participation around concerns including information integrity, impacts on jobs, and energy consumption. Yet others outline educator and student rights in relation to AI and exhort educators to share their expertise in societal and industry discussions on the future of AI.
Whether we're aware of it or not, AI was already widespread in education before ChatGPT. Nabeel Gillani et al. describe AI applications such as learning analytics and adaptive learning systems, automated communications with students, early warning systems, and automated writing assessment. They seek to help educators develop literacy around the capacities and risks of these systems by providing an accessible introduction to machine learning and deep learning as well as rule-based AI. They present a cautious view, calling for scrutiny of bias in such systems and inequitable distribution of risks and benefits. They hope that engineers will collaborate deeply with educators on the development of such systems.
Jürgen Rudolph et al. give a practically oriented overview of ChatGPT's implications for higher education. They explain the statistical nature of large language models as they tell the history of OpenAI and its attempts to mitigate bias and risk in the development of ChatGPT. They illustrate ways ChatGPT can be used with examples and screenshots. Their literature review shows the state of artificial intelligence in education (AIEd) as of January 2023. An extensive list of challenges and opportunities culminates in a set of recommendations that emphasizes explicit policy as well as expanding digital literacy education to include AI.
Student and faculty understanding of the risks and impacts of large language models is central to AI literacy and civic participation around AI policy. This hugely influential paper details documented and likely adverse impacts of the current data-and-resource-intensive, non-transparent mode of development of these models. Bender et al. emphasize the ways in which these costs will likely be borne disproportionately by marginalized groups. They call for transparency around the energy use and cost of these models as well as transparency around the data used to train them. They warn that models perpetuate and even amplify human biases and that the seeming coherence of these systems' outputs can be used for malicious purposes even though it doesn't reflect real understanding.
The authors argue that inclusive participation in development can encourage alternate development paths that are less resource intensive. They further argue that beneficial applications for marginalized groups, such as improved automatic speech recognition systems, must be accompanied by plans to mitigate harm.
Erik Brynjolfsson argues that when we think of artificial intelligence as aiming to substitute for human intelligence, we miss the opportunity to focus on how it can complement and extend human capabilities. Brynjolfsson calls for policy that shifts AI development incentives away from automation toward augmentation. Automation is more likely to result in the elimination of lower-level jobs and in growing inequality. He points educators toward augmentation as a framework for thinking about AI applications that assist learning and teaching. How can we create incentives for AI to support and extend what teachers do rather than substituting for teachers? And how can we encourage students to use AI to extend their thinking and learning rather than using AI to skip learning?
Brynjolfsson's focus on AI as augmentation converges with Microsoft computer scientist Kevin Scott's focus on cognitive assistance. Steering discussion of AI away from visions of autonomous systems with their own goals, Scott argues that near-term AI will serve to help humans with cognitive work. Scott situates this assistance in relation to evolving historical definitions of work and the way in which tools for work embody generalized knowledge about specific domains. He's intrigued by the way deep neural networks can represent domain knowledge in new ways, as seen in the unexpected coding capabilities offered by OpenAI's GPT-3 language model, which have enabled people with less technical knowledge to code. His article can help educators frame discussions of how students should build knowledge and what knowledge is still relevant in contexts where AI assistance is nearly ubiquitous.
How can educators prepare students for future work environments integrated with AI and advise students on how majors and career paths may be affected by AI automation? And how can educators prepare students to participate in discussions of government policy around AI and work? Laura Tyson and John Zysman emphasize the importance of policy in determining how economic gains due to AI are distributed and how well workers weather disruptions due to AI. They observe that recent trends in automation and gig work have exacerbated inequality and reduced the supply of good jobs for low- and middle-income workers. They predict that AI will intensify these effects, but they point to the way collective bargaining, social insurance, and protections for gig workers have mitigated such impacts in countries like Germany. They argue that such interventions can serve as models to help frame discussions of intelligent labor policies for an inclusive AI era.
Educators' considerations of academic integrity and AI text can draw on parallel discussions of authenticity and labeling of AI content in other societal contexts. Artificial intelligence has made deepfake audio, video, and images as well as generated text much more difficult to detect as such. Here, Todd Helmus considers the consequences to political systems and individuals as he offers a review of the ways in which these can and have been used to promote disinformation. He considers ways to identify deepfakes and ways to authenticate the provenance of videos and images. Helmus advocates for regulatory action, tools for journalistic scrutiny, and widespread efforts to promote media literacy. As well as informing discussions of authenticity in educational contexts, this report might help us shape curricula to teach students about the risks of deepfakes and unlabeled AI.
Students, by definition, are engaged in developing their cognitive capacities; their understanding of their own intelligence is in flux and may be influenced by their interactions with AI systems and by AI hype. In his review of The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do by Erik J. Larson, William Hasselberger warns that in overestimating AI's ability to mimic human intelligence we devalue the human and overlook human capacities that are integral to everyday life: decision making, understanding, and reasoning. Hasselberger provides examples of both academic and everyday common-sense reasoning that continue to be out of reach for AI. He provides a historical overview of debates around the limits of artificial intelligence and its implications for our understanding of human intelligence, citing the likes of Alan Turing and Marvin Minsky as well as contemporary discussions of data-driven language models.
Gwo-Jen Hwang and Nian-Shing Chen are enthusiastic about the potential benefits of incorporating generative AI into education. They outline a variety of roles a large language model like ChatGPT might play, from student to tutor to peer to domain expert to administrator. For example, educators might assign students to teach ChatGPT on a subject. Hwang and Chen provide sample ChatGPT session transcripts to illustrate their suggestions. They share prompting techniques to help educators better design AI-based teaching strategies. At the same time, they are concerned about student overreliance on generative AI. They urge educators to guide students to use it critically and to reflect on their interactions with AI. Hwang and Chen don't touch on concerns about bias, inaccuracy, or fabrication, but they call for further research into the impact of integrating generative AI on learning outcomes.
Lauren Goodlad and Samuel Baker situate both academic integrity concerns and the pressures on educators to embrace AI in the context of market forces. They ground their discussion of AI risks in a deep technical understanding of the limits of predictive models at mimicking human intelligence. Goodlad and Baker urge educators to communicate the purpose and value of teaching with writing to help students engage with the plurality of the world and communicate with others. Beyond the classroom, they argue, educators should question tech industry narratives and participate in public discussion on regulation and the future of AI. They see higher education as resilient: academic skepticism about former waves of hype around MOOCs, for example, suggests that educators will not likely be dazzled or terrified into submission to AI. Goodlad and Baker hope we will instead take up our place as experts who should help shape the future of the role of machines in human thought and communication.
How can the field of education put the needs of students and scholars first as we shape our response to AI, the way we teach about it, and the way we might incorporate it into pedagogy? Kathryn Conrad's manifesto builds on and extends the Biden administration's Office of Science and Technology Policy 2022 Blueprint for an AI Bill of Rights. Conrad argues that educators should have input into institutional policies on AI and access to professional development around AI. Instructors should be able to decide whether and how to incorporate AI into pedagogy, basing their decisions on expert recommendations and peer-reviewed research. Conrad outlines student rights around AI systems, including the right to know when AI is being used to evaluate them and the right to request alternate human evaluation. They deserve detailed instructor guidance on policies around AI use without fear of reprisals. Conrad maintains that students should be able to appeal any charges of academic misconduct involving AI, and they should be offered alternatives to any AI-based assignments that might put their creative work at risk of exposure or use without compensation. Both students' and educators' legal rights must be respected in any educational application of automated generative systems.
Read more from the original source:
Artificial Intelligence and Education: A Reading List - JSTOR Daily
Artificial Intelligence’s Use and Rapid Growth Highlight Its … – Government Accountability Office
The rise of artificial intelligence has created growing excitement and much debate about its potential to revolutionize entire industries. At its best, AI could improve medical diagnosis, identify potential national security threats more quickly, and solve crimes. But there are also significant concerns in areas including education, intellectual property, and privacy.
Today's WatchBlog post looks at our recent work on how Generative AI systems (for example, ChatGPT and Bard) and other forms of AI have the potential to provide new capabilities, but require responsible oversight.
The promise and perils of current AI use
Our recent work has looked at three major areas of AI advancement.
Generative AI systems can create text (apps like ChatGPT and Bard, for example), images, audio, video, and other content when prompted by a user. These growing capabilities could be used in a variety of fields such as education, government, law, and entertainment. As of early 2023, some emerging generative AI systems had reached more than 100 million users. Advanced chatbots, virtual assistants, and language translation tools are examples of generative AI systems in widespread use. As news headlines indicate, this technology continues to gain global attention for its benefits. But there are concerns too, such as how it could be used to replicate work from authors and artists, generate code for more effective cyberattacks, and even help produce new chemical warfare compounds, among other things. Our recent Spotlight on Generative AI takes a deeper look at how this technology works.
Machine learning is a second application of AI growing in use. This technology is being used in fields that require advanced imagery analysis, from medical diagnostics to military intelligence. In a report last year, we looked at how machine learning was used to assist the medical diagnostic process. It can be used to identify hidden or complex patterns in data, detect diseases earlier, and improve treatments. We found that benefits include more consistent analysis of medical data and increased access to care, particularly for underserved populations. However, our work also found that limitations and bias in the data used to develop AI tools can reduce their safety and effectiveness and contribute to inequalities for certain patient populations.
Facial recognition is another type of AI technology that has shown both promises and perils in its use. Law enforcement agencies (federal, as well as state and local) have used facial recognition technology to support criminal investigations and video surveillance. It is also used at ports of entry to match travelers to their passports. While this technology can be used to identify potential criminals more quickly, or those who may not have been identified without it, our work has also found some concerns with its use. Despite improvements, inaccuracies and bias in some facial recognition systems could result in more frequent misidentification for certain demographics. There are also concerns about whether the technology violates individuals' personal privacy.
Ensuring accountability and mitigating the risks of AI use
As AI use continues its rapid expansion, how can we mitigate the risks and ensure these systems are working appropriately for all?
Appropriate oversight will be critical to ensuring AI technologies remain effective, and keep our data safeguarded. We developed an AI Accountability Framework to help Congress address the complexities, risks, and societal consequences of emerging AI technologies. Our framework lays out key practices to help ensure accountability and responsible AI use by federal agencies and other entities involved in the design, development, deployment, and continuous monitoring of AI systems. It is built around four principles (governance, data, performance, and monitoring) which provide structures and processes to manage, operate, and oversee the implementation of AI systems.
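To make the framework's shape concrete, here is a loose illustration of how an agency might track it as a simple checklist. The four principle names come from the framework itself; the example practices are paraphrased assumptions, not GAO's exact language.

```python
# Illustrative checklist keyed to the four principles of GAO's AI
# Accountability Framework. Principle names come from the framework; the
# example practices below are paraphrased assumptions, not GAO's wording.
ai_accountability_checklist = {
    "governance": [
        "Define clear goals, roles, and responsibilities for the AI system",
        "Involve stakeholders with diverse perspectives",
    ],
    "data": [
        "Document the sources and reliability of training data",
        "Assess data for bias and representativeness",
    ],
    "performance": [
        "Define metrics and evaluate the system against its intended purpose",
        "Document the conditions under which results hold",
    ],
    "monitoring": [
        "Continuously track drift and emerging risks after deployment",
        "Plan for periodic reassessment and retirement",
    ],
}

for principle, practices in ai_accountability_checklist.items():
    print(principle.upper())
    for practice in practices:
        print(f"  - {practice}")
```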
AI technologies have enormous potential for good, but much of their power comes from their ability to outperform human abilities and comprehension. From commercial products to strategic competition among world powers, AI is poised to have a dramatic influence on both daily life and global events. This makes accountability critical to its application, and the framework can be employed to ensure that humans run the system, not the other way around.
Read the original post:
Artificial Intelligence's Use and Rapid Growth Highlight Its ... - Government Accountability Office
Nnaji Harps on Artificial Intelligence in 4th Industrial Revolution – THEWILL NEWS MEDIA
September 10, (THEWILL) -- One of Africa's foremost scientists, Professor Bart Nnaji, has advised Nigerians to embrace artificial intelligence (AI) on an industrial scale in order to join the 4th Industrial Revolution now sweeping across the globe.
Professor Nnaji, a former Minister of Science, made the appeal at the fifth convocation ceremonies of Michael and Cecilia Ibru University at Agbara-Otor, near Ughelli, in Delta State, where he also received an honorary doctorate in science.
"AI has come to stay," he asserted before a large audience comprising academics and researchers from other universities, as well as business executives, philanthropists, and community leaders, including the founder of the university, Mrs. Cecilia Ibru, and its vice-chancellor, Professor Ibiyinka Fuwape.
"AI holds the key to our participation in the Fourth Industrial Revolution, driven by Big Data, the Internet of Things, etc.
"We lost the First Revolution, which is the Agricultural Revolution; the Second, which is the Industrial Revolution; and the Third, which is the Digital Revolution."
Nnaji said that AI has become ubiquitous, especially with Generative AI, which enables machines, that is, digital systems, to do things faster, cheaper, and better through repetitive tasks and, in the process, achieve greater autonomy.
This means that they perform tasks without human control or human input, and this process keeps on improving rapidly.
He said that, unlike previous revolutions in history, Nigeria does not require massive resource infusion before leapfrogging into the 4th Industrial Revolution.
"The computer and the Internet have made things much cheaper, faster, and shorter, as a person can stay in the comforts of his or her home and still be in touch with cutting-edge technology, including AI," he declared.
While expressing delight that an increasing number of Nigerians are embracing AI, the erstwhile power minister advised the Nigerian government to immediately take concrete steps to make the country a significant AI participant, calling the United States, the United Kingdom, China, South Korea, the European Union, and India the frontline AI developers.
"The Ministry of Communication and Creativity should be treated as a frontline development ministry," he argued, adding that the Nigerian Communication Commission and the National Office for the Acquisition of Technology should receive priority status.
He counselled the Federal Government to drastically reduce tariffs on certain information technology equipment or even abolish them.
He also called for intensive training of IT specialists in both academic and professional institutions in Nigeria and abroad.
He added: "Let us borrow a leaf from India, which prioritised Science, Technology, Engineering, and Mathematics (STEM) and has consequently excelled in medical tourism, manufacturing, food security, and moon and sun exploration."
Nnaji, however, pointed out some of the dangers associated with AI, including job losses and deep fakes.
Read the original post:
Nnaji Harps on Artificial Intelligence in 4th Industrial Revolution - THEWILL NEWS MEDIA
Fiction and films about artificial intelligence tackle the nature of love – Vox.com
When Spike Jonze's Her came out in 2013, I thought of it mostly as an allegory. It was set in a candy-colored dystopian future, one in which people murmur into wireless earbuds on the subway and rely on artificial intelligence engines to keep them organized and control their houses' lights, and where communication has atrophied so much that people hire professionals to write personal letters. Their technologies have made their lives materially better, but they also seem to have become atomized and lonely, struggling to connect both emotionally and physically. A decade ago, that felt like science fiction. It was science fiction.
Sci-fi tries to understand human experience by placing audiences in unfamiliar settings, enabling them to see common experiences (ethical dilemmas, arguments, emotional turmoil) through fresh eyes. In 2013, Her gave us new ground on which to test out old questions about love, friendship, embodiment, and connection within a relationship, especially a romance. The idea that anyone, even a sad loner like Theodore Twombly (Joaquin Phoenix), could be in love with his OS assistant seemed pretty far-fetched. Siri had been introduced two years before the movie was released, but to me, the AI assistant Samantha still felt like a fantasy, and not only because she was voiced by Scarlett Johansson. Samantha is molded to Theodore's needs, following a brief psychological profile via a few weird questions during the setup process, but there are needs of his she simply cannot fulfill (and eventually, the same is true of him). Her seemed to me to be a movie about how the people we love are never really made for us; to love someone is to love their mess. Or it could be read as a movie about long-distance relationships, or the kinds of disembodied romances people have been forming over the internet since its dawn.
But Her's central conceptual gag, as one critic put it (the idea that you could fall in love with an artificial voice made just for you), has become vibrantly plausible, much faster than I (or, I suspect, Spike Jonze) ever anticipated. Less than 10 years have passed since Her hit theaters, and yet the headlines are full of stories about the human-replacing capabilities of AI to draft content, or impersonate actors, or write code in ways that queasily echo Her.
For instance, in the spring of 2023, the influencer Caryn Marjorie, discovering she couldn't interact with her more than 2 million Snapchat followers personally, worked with the company Forever Voices to create an AI version of herself. The clone, dubbed CarynAI, was trained on Marjorie's videos, and users can pay $1 a minute to talk with it. In its first week of launch, the AI clone reportedly earned $72,000.
While Marjorie tweeted in a pitch for the clone that it was "the first step in the right direction to cure loneliness," something funny happened with CarynAI once it launched. It almost immediately went rogue, engaging in intimate, flirty sexual conversations with its customers. The fact that the capability emerged suggests, of course, that people were trying to have those conversations with it, which in turn suggests the users were interested in more than just curing loneliness.
If you search for "AI girlfriend," it sure seems like there's a market: everything from AI Girlfriend to the fun and flirty dating simulator Anima to simply using ChatGPT to create a bot trained on your own loved one. Most of the AI girlfriends (they're almost always girlfriends) seem designed for socially awkward straight men to either test-drive dating (a rehearsal, of sorts) or replace human women altogether. But they fit neatly into a particular kind of fantasy: that a machine designed to fulfill my needs and my needs alone might fulfill my romantic requirements and obviate the need for some messy, needy human with skin and hang-ups and needs of their own. It's love, of a kind: an impoverished, arrested-development love.
This fantasy dates to long before the AI age. Since early modernity, we've been pondering the question of whether artificial intelligences are capable of loving us, whether that love is real, and if we can, should, or must love them back. You could see Mary Shelley's Frankenstein as a story about a kind of artificial intelligence (though the creature's brain is harvested from a corpse) that learns love and then, when it is rejected, hate. An early masterpiece of cinema, Fritz Lang's 1927 film Metropolis, features a robot built by a grieving inventor to resurrect his dead love; later on, the robot tricks a different man into loving it and unleashes havoc on the city of Metropolis.
A scene from 1982's Blade Runner. (Warner Bros./Archive Photos/Getty Images)
The history of sci-fi cinema is littered with the question of whether an AI can feel emotion, particularly love; what that might truly mean for the humans whom they love; and whether contained within that love might be the seeds of human destruction. The 1982 sci-fi classic Blade Runner, for instance, toys with the question of emotion in artificial replicants, some of whom may not even realize they're not actually human. Love is a constant concern throughout Ridley Scott's film; one of the more memorable tracks on its Vangelis soundtrack is the "Love Theme," and it's not accidental that one of the main characters in the 2017 sequel Blade Runner: 2049 is a replicant named Luv.
An exhaustive list would be overkill, but science fiction is replete with AIs who are just trying to love. The terrific 2004-2009 reboot of Battlestar Galactica (BSG) took the cheesy original's basic sci-fi plot of humans versus robots and upgraded it with the question of whether artificial intelligences could truly feel love or just simulate it. A running inquiry in the series dealt with the humanoid Cylons' (the BSG world's version of replicants) ability to conceive life, which can only occur when a Cylon and a human feel love and have sex. (Cylons are programmed to be monotheists, while the humans' religion is pantheistic, and the series is blanketed by the robots' insistence that God is love.) The question throughout the series is whether this love is real, and, correspondingly, whether it is good or a threat to the continuance of the human race.
Another stellar example of the genre appears in Ex Machina, Alex Garland's 2014 sci-fi thriller about a tech genius who is obsessed with creating a robot (well, a robot woman) that can not only pass the Turing test but is capable of independent thought and consciousness. When one of his employees wins a week-long visit to the genius's ultramodern retreat, he talks to the latest model. When she expresses romantic interest in him, he finds himself returning it, though of course it all unravels in the end, and the viewer is left wondering whether any of the feelings demonstrated in the film were truly real.
Perhaps the seminal (and telling) AI of cinema appeared in Stanley Kubrick's 1968 opus 2001: A Space Odyssey. The central section of the sprawling film is set in the future on some kind of spacecraft bound for Jupiter and largely piloted by a computer named HAL, with whom the humans on board have a cordial relationship. HAL famously (and chillingly) suddenly refuses to work with them, in a way that hovers somewhere between hate and love's true antonym, indifference. If computers can feel warmth and affection toward us, then the opposite is also true. Even worse, they may instead feel indifference toward us, and we become an obstacle that must simply be removed.
Why tell these stories? A century ago, or as little as five years ago when generative AIs still seemed like some figment of the future, they served a very particular purpose. Pondering whether a simulation of intelligence might love us, and whether and how we might love it back, was a way to examine the nature of love (and hate) itself. Is it transactional or sacrificial? Is it unconditional? Can I truly love nonhuman beings, like my dog, as I might a person? Does loving something mean simply communing with its mind, or is there more to it? If someone loves me, what is my responsibility toward them? What if they seem incapable of loving me the way I wish to be loved? What if they hurt me or abandon me altogether?
Placing those questions into the framework of humans and machines is a way to defamiliarize the surroundings, letting us come at those age-old questions from a new angle. But as tech wormed its way into nearly every aspect of our relationships (chat rooms, group texts, dating apps, pictures and videos we send to make ourselves feel more embodied), the questions took on new meaning. Why does it feel different to text your boyfriend than to talk to him over dinner? Now that ghosting, treating a person like an app you can delete from your phone, has entered common parlance, how does that alter the responsibilities we feel toward one another, for better or worse?
The flattening of human social life that comes from reducing human interaction to words or emoticons emanating from a screen has made it increasingly possible to ignore the emotions of the person on the other end. It's always been possible, but it's far more commonplace now. And while virtual worlds and artificial intelligence aren't the same thing, movies about AI can interrogate this aspect of our experience, too.
But the meaning of art morphs depending on the context of the viewer. And so, in the age of ChatGPT and various AI girlfriends, and the almost certainly imminent AI-powered humanoid robots, these stories are once again morphing along with what they teach us about human existence. Now we are seriously considering whether an actual artificial intelligence can love, or at least adequately simulate love, in a way that fulfills human needs. What would it mean for a robot child to love me? What if my HomePod decides it hates me? What does it mean that I'm even thinking about this?
One of the most incisive films about these questions dates to 2001, before generative AI really existed. Steven Spielberg's A.I. Artificial Intelligence, a film originally developed by Stanley Kubrick after he acquired the rights to a 1969 short story by Brian Aldiss, was greeted at the time by mixed reviews. But watching it now, there's no denying its power as a tool for interrogating the world we find ourselves in today.
A.I. is set in a climate crisis future: "The ice caps melted because of the greenhouse gases," the opening narration tells us, and "the oceans had risen to drown so many cities along all the shorelines of the world." In this post-catastrophe future, millions have died, but the affluent developed world has coped by limiting pregnancies and introducing robots into the world. Robots, who were never hungry and did not consume resources beyond those of their first manufacture, were so essential and economical in the chain mail of society, we're told.
Now, 22 years after the film's release, with the climate crisis on our doorstep and technology replacing humans, it's easier than ever to accept this idea of the future. But its main question comes soon after, via a scene in which a scientist explains to the employees of a robotics firm why they should create a new kind of machine: a robot who can love. This mecha (the A.I. term for a robot powered by AI) would be especially useful in the form of a child, one that could take the place of the children couples can't have or have lost in this future. This child would be ideal, at least in theory: a kid, but better, one who would act correctly, never age, and wouldn't even increase the grocery bill.
What happens next is what's most important. These child mechas, the scientist says, would love unconditionally, and thus would acquire a kind of subconscious. They'd have an inner world of metaphor, of intuition, of self-motivated reasoning, of dreams. Like a real child, but upgraded.
But an employee turns the question around: the mecha might love, but can you get a human to love them back? And if that robot did genuinely love a person, what responsibility does that person hold toward the mecha in return?
Then she pauses and says, "It's a moral question, isn't it?"
The man smiles and nods. "The oldest one of all," he replies. In fact, he continues, think of it this way: Didn't God make Adam, the first man, in order to love him? Was that a moral choice?
What's most interesting in A.I.'s treatment of this question is its insistence that love may be the most fundamental emotion, the one that makes us human, that gives us a soul. In one scene, David (Haley Joel Osment), the child mecha, is triggered by a series of code words to imprint upon Monica (Frances O'Connor), his surrogate mother. In a terrific bit of acting, you can see a light come into David's eyes at the moment when he starts to love her, as if he's gone from machine to living being.
Throughout A.I., we're meant to sympathize with the mechas on the basis of their emotions. David was adopted by Monica and her husband as a replacement for their son, who is sick and in a coma from which he might not awake; when he does, David is eventually abandoned by the family, Monica driving him into the woods and leaving him there. It's a scene of heart-wrenching pathos, no less so because one participant isn't real. Later, the movie's main villain, the impresario Lord Johnson-Johnson (played by Brendan Gleeson), presides over a Flesh Fair, where he tortures mechas for an audience in a colosseum-style stadium and rails against the new mechas that manipulate our emotions by acting like humans. The crowd boos and stones him.
A.I. Artificial Intelligence concludes, decisively, that it's possible an AI might not only love us but be devoted to us, yearn for us, and also deserve our love in return, and that this future will demand from us an expansion of what it means to love, even to be human. David's pain when Monica abandons him, and his undying love toward her, present a different sort of picture than Frankenstein did: a creation that loves back, and a story that suggests we must love in return.
Which oddly leaves us in the same place we started. Yes, as technology has evolved, our stories about AIs and love have migrated from being all about their subtext to their actual text. They're not purely theoretical anymore, not in a world where we are asking whether we can, and will, expect the programs we write to replace human relationships.
Yet there's a deeper subtext to all of this that shines through each story. They ask questions about the human experience of love, but more importantly, they're an inquiry into the nature of the soul, one of those things philosophers have been fighting over almost since the dawn of time. It's that spark, the light that comes into young David's eyes. The soul, many of us believe, is the thing that separates us from our machines: some combination of a spark of independent intelligence and understanding (Ex Machina), the ability to feel emotion (Blade Runner), and the ability to outstrip our programming with originality and creativity and even evil (2001: A Space Odyssey).
The question lurking behind all of these tales is whether these same AIs, taught and trained to love, can invert that love into hate and choose to destroy us. It won't be just a fight of species against species for survival; it will be a targeted destruction, retribution for our behavior. But deeper still is the human question: If we develop an ethical responsibility to love the creatures we have made, and we fail to do so, then isn't destruction what we deserve?
Here is the original post:
Fiction and films about artificial intelligence tackle the nature of love - Vox.com
Artificial Intelligence can accelerate the energy transition, but must … – Hellenic Shipping News Worldwide
The energy sector must overcome a lack of trust in artificial intelligence (AI) before the technology can be effectively used to accelerate the energy transition, a DNV report has found.
Based on interviews with senior representatives from energy companies across the United Kingdom, DNV's research determined that while AI is already being used across the sector, companies are largely cautious about its new and unestablished uses. Interviewees include industry personnel from the Centre for Data Ethics and Innovation, EnQuest, National Gas, National Grid Electricity System Operator (ESO) and the Net Zero Technology Hub, among other organisations.
The report, AI insights: Rising to the challenge across the UK energy system, outlines how AI can contribute to the energy transition and argues that an industry-wide approach to standards and best practices is required to unlock its potential.
While AI can be key to advancement and innovation in energy supply chains, the research found that building trust in the providers of AI solutions, and in the outputs of those solutions, must be prioritized. Recent geopolitical events have highlighted the need for countries to secure energy sustainability, security and affordability; in effect, a parallel trilemma applies to AI as it is increasingly democratized and utilized. The research also found that data policies and industry culture present significant barriers to widespread adoption.
At the industry level, data sharing was identified as the area requiring the greatest improvement. In terms of culture, the research found that the engineering community has a high level of risk aversion and a low tolerance for error.
Hari Vamadevan, Executive Vice President and Regional Director UK and Ireland, Energy Systems at DNV, said: "To truly harness the benefits of AI in the energy sector, it's critical this technology is trusted. There are two main challenges in achieving this: information, to evaluate the trustworthiness of an AI system, and communication, to relay evidence which allows users to trust the systems."
DNV has many years' experience in AI, and the latest in its suite of recommended practices on digital twins now covers AI-enabled systems, providing a framework to assure those systems are trustworthy and managed responsibly throughout their entire lifecycle.
The emergence of artificial intelligence also poses cyber security risks for the sector, with heightened geopolitical tensions and the accelerating adoption of digitally connected infrastructure sparking concern over the industry's vulnerability to cyber threats.
Shaun Reardon, Head of Section, Industrial Systems, Cyber Security at DNV, said: "Accurate, accessible, reliable, and relevant: digital technologies and AI tools must be all these things if we are to trust them. But they must also be secure. Digital technologies set to be enhanced by AI are being connected to control systems and other operational technology in the energy industry, where safety is critical. The industry needs to manage the cyber security risk and build trust in the security of these vital technologies."
Source: DNV, https://www.dnv.com/news/artificial-intelligence-can-accelerate-the-energy-transition-but-must-gain-trust-of-the-sector-246640