Category Archives: AI
Neuroscience, Artificial Intelligence, and Our Fears: A Journey of … – Neuroscience News
Summary: As artificial intelligence (AI) evolves, its intersection with neuroscience stirs both anticipation and apprehension. Fears about AI, such as loss of control, loss of privacy, and loss of human value, stem from our neural responses to unfamiliar and potentially threatening situations.
We explore how neuroscience helps us understand these fears and suggests ways to address them responsibly. This involves dispelling misconceptions about AI consciousness, establishing ethical frameworks for data privacy, and promoting AI as a collaborator rather than a competitor.
Source: Neuroscience News
Fear of the unknown is a universal human experience. With the rapid advancements in artificial intelligence (AI), our understanding and perceptions of this technology's potential and its threats are evolving.
The intersection of neuroscience and AI raises both excitement and fear, feeding our imagination with dystopian narratives about sentient machines or providing us hope for a future of enhanced human cognition and medical breakthroughs.
Here, we explore the reasons behind these fears, grounded in our understanding of neuroscience, and propose paths toward constructive dialogue and responsible AI development.
The Neuroscience of Fear
Fear, at its core, is a primal emotion rooted in our survival mechanism. It serves to protect us from potential harm, creating a heightened state of alertness.
The amygdala, a small almond-shaped region deep within the brain, is instrumental in our fear response. It processes emotional information, especially related to threats, and triggers fear responses by communicating with other brain regions.
Our understanding of AI, a complex and novel concept, creates uncertainty, a key element that can trigger fear.
AI and Neuroscience: A Dialectical Relationship
AI's development and its integration into our lives is a significant change, prompting valid fears. The uncanny similarity between AI and human cognition can induce fear, partly due to the human brain's tendency to anthropomorphize non-human entities.
This cognitive bias, deeply ingrained in our neural networks, can make us perceive AI as a potential competitor or threat.
Furthermore, recent progress in AI development has been fueled by insights from neuroscience. Machine learning algorithms, particularly artificial neural networks, are loosely inspired by the human brain's structure and function.
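To make that "loose inspiration" concrete, here is a minimal sketch (my own illustration, not from the article) of a single artificial neuron: it takes a weighted sum of its inputs and passes the result through a nonlinearity, a crude abstraction of a biological neuron integrating incoming signals and firing.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # Integrate incoming signals, loosely analogous to dendritic summation.
    activation = np.dot(inputs, weights) + bias
    # "Fire" through a smooth threshold (sigmoid), loosely analogous to a spiking decision.
    return 1.0 / (1.0 + np.exp(-activation))

# Two inputs with illustrative weights; real networks stack millions of these units.
print(artificial_neuron(np.array([0.5, 0.8]), np.array([0.9, -0.4]), bias=0.1))
```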
This bidirectional relationship between AI and neuroscience, where neuroscience inspires AI design and AI, in turn, offers computational models to understand brain processes, has led to fears about AI achieving consciousness or surpassing human intelligence.
The Fear of AI
The fear of AI often boils down to the fear of loss: loss of control, loss of privacy, and loss of human value. The perception of AI as a sentient being out of human control is terrifying, a fear perpetuated by popular media and science fiction.
Moreover, AI systems' capabilities for data analysis, coupled with their lack of transparency, raise valid fears about privacy and surveillance.
Another fear is the loss of human value due to AI outperforming humans in various tasks. The impact of AI on employment and societal structure has been a significant source of concern, considering recent advancements in robotics and automation.
The fear that AI might eventually replace humans in most areas of life challenges our sense of purpose and identity.
Addressing Fears and Building Responsible AI
While these fears are valid, it is crucial to remember that AI is a tool created by humans and for humans. AI does not possess consciousness or emotions; it only mimics cognitive processes based on its programming and available data. This understanding is vital in dispelling fears of a sentient AI.
Addressing privacy concerns requires establishing robust legal and ethical frameworks for data handling and algorithmic transparency.
Furthermore, interdisciplinary dialogue between neuroscientists, AI researchers, ethicists, and policymakers is crucial in navigating the societal impacts of AI and minimizing its risks.
Emphasizing the concept of human-in-the-loop AI, where AI assists rather than replaces humans, can alleviate fears of human obsolescence. Instead of viewing AI as a competitor, we can view it as a collaborator augmenting human capabilities.
The fear of AI, deeply rooted in our neural mechanisms, reflects our uncertainties about this rapidly evolving technology. However, understanding these fears and proactively addressing them is crucial for responsible AI development and integration.
By fostering constructive dialogue, establishing ethical guidelines, and promoting the vision of AI as a collaborator, we can mitigate these fears and harness AI's potential responsibly and effectively.
Author: Neuroscience News Communications
Source: Neuroscience News
Contact: Neuroscience News Communications, Neuroscience News
Image: The image is credited to Neuroscience News
Citations:
Patiency is not a virtue: the design of intelligent systems and systems of ethics by Joanna J. Bryson. Ethics and Information Technology
Hopes and fears for intelligent machines in fiction and reality by Stephen Cave et al. Nature Machine Intelligence
What AI can and can't do (yet) for your business by Chui, M et al. McKinsey Quarterly
What is consciousness, and could machines have it? by Dehaene, S et al. Science
On seeing human: a three-factor theory of anthropomorphism by Epley, N et al. Psychological Review
Neuroscience-inspired artificial intelligence by Hassabis, D et al. Neuron
Feelings: What are they & how does the brain make them? by Joseph E. LeDoux. Daedalus
Evidence that neural information flow is reversed between object perception and object reconstruction from memory by Juan Linde-Domingo et al. Nature Communications
On the origin of synthetic life: attribution of output to a particular algorithm by Roman V Yampolskiy. Physica Scripta
Amazon Wants to Teach Its Cloud Customers About AI, and It’s Yet … – The Motley Fool
Amazon (AMZN -0.63%) has dropped out of the spotlight this year as its big tech peers, Microsoft and Google parent Alphabet, fight an intense battle over artificial intelligence (AI).
Microsoft recently acquired a large stake in OpenAI, and it has integrated the ChatGPT chatbot into its Bing search engine and Azure cloud platform. Google has fired back with a chatbot of its own, called Bard.
But investors shouldn't ignore Amazon as a major player in this emerging industry, because Amazon Web Services (AWS) is the world's largest cloud platform, and the cloud is where most AI applications are developed and deployed.
Now, AWS plans to open a new program to support businesses in crafting their AI strategies, and it could be a major growth driver for the company going forward. Here's why it's time for investors to buy in.
Artificial intelligence comes in many forms, and it's often used to ingest mountains of data in order to make predictions about future events. But generative AI is the version many consumers have become familiar with this year, and is capable of generating new content, whether it's text, sound, images, or videos. Platforms like ChatGPT and Bard fall into that category.
OpenAI CEO Sam Altman says he's already seeing many businesses double their productivity using generative AI, and it has the potential to eventually deliver an increase of 30 times. It's because the technology can be prompted to instantly write computer code, or even generate creative works. It can also rapidly analyze thousands of pages of information and deliver answers to complex questions, which saves the user from scrolling through search engine results.
On Thursday, June 23, Amazon announced it was launching the AWS Generative AI Innovation Center with $100 million in funding. The program will connect businesses with AWS strategists, data scientists, engineers, and solution architects to help them design generative AI strategies and deploy the technology effectively and responsibly.
The program will provide no-cost workshops, engagements, and training to teach businesses how to use some of the most powerful AI tools available on AWS, like CodeWhisperer, which serves as a copilot for computer programmers to help them write software significantly faster.
Amazon says a handful of companies were already signed up, including customer engagement platform Twilio, which is using generative AI to help businesses provide deeper value to the people they serve.
AWS is one of the oldest data center customers of Nvidia, which currently produces the most powerful chips in the industry designed for AI workloads. The two companies recently signed a new deal to power AWS' new EC2 P5 infrastructure, which will allow its cloud customers to scale from 10,000 Nvidia H100 GPUs to 20,000, giving them access to supercomputer-like performance.
Overall, this could enable them to train larger AI language models than ever before, with far more precision and speed.
Here's my point: The more customers using that data center infrastructure, the more money AWS makes. Therefore, Amazon's $100 million investment in the Generative AI Innovation Center could result in multiples of that amount coming back as revenue, as more businesses learn how to train and deploy AI. Also, the free training and access to expert engineers could make AWS an attractive on-ramp into the world of AI for many organizations, prompting them to choose it over other providers like Microsoft Azure and Google Cloud from the start.
Estimates about the future value of the AI industry are wide-ranging, but staggering even at the low end. Research firm McKinsey & Company thinks the technology will add $13 trillion to global economic output by 2030, whereas Cathie Wood's Ark Investment Management places that figure at a whopping $200 trillion.
Therefore, it's no surprise tech giants are jostling for AI leadership, but AWS will approach the opportunity from a position of strength since it already sits atop the cloud industry.
Amazon is a very diverse business beyond the cloud; it also dominates e-commerce globally, has a fast-growing digital advertising segment, and is a leader in streaming through its Prime and Twitch platforms. The company has generated $524 billion in revenue over the last four quarters, which is far more than Microsoft and Alphabet, yet Amazon stock trades at a cheaper price-to-sales (P/S) ratio than both of them.
Company             2022 Revenue (Billions)    Price-to-Sales Ratio
Amazon              $524                       2.5
Microsoft           $208                       12.0
Alphabet (Google)   $284                       5.5

Source: Company filings.
As a result, investors can buy Amazon stock now at a very attractive valuation relative to its peers, which theoretically means it could deliver more upside in the long run. In fact, I think Amazon is on its way to a $5 trillion valuation within the next decade, and AI could supercharge its progress to reach that target even more quickly.
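As a quick sanity check (my own arithmetic, not the author's), a P/S ratio is simply market capitalization divided by trailing revenue, so the figures in the table imply rough market caps for each company:

```python
companies = {
    # name: (revenue in $ billions, price-to-sales ratio) from the table above
    "Amazon":            (524, 2.5),
    "Microsoft":         (208, 12.0),
    "Alphabet (Google)": (284, 5.5),
}

for name, (revenue, ps) in companies.items():
    # P/S = market cap / revenue, so market cap = revenue * P/S
    print(f"{name:>18}: implied market cap of roughly ${revenue * ps:,.0f} billion")
```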
John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Anthony Di Pizio has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Microsoft, Nvidia, and Twilio. The Motley Fool has a disclosure policy.
HIMSSCast: When AI is involved in decision making, how does man … – Healthcare IT News
A lot of people today are having trouble trusting artificial intelligence not to become sentient and take over the world a la "The Terminator." For the time being, in healthcare, one of the big questions for clinicians is: Can I trust AI to help me make decisions about patients?
Today's podcast seeks to provide some answers with a prominent member of the health IT community, Dr. Blackford Middleton.
Middleton is an independent consultant, currently working on AI issues at the University of Michigan department of learning health systems. Previously he served as chief medical information officer at Stanford Health Care, CIO at Vanderbilt University Medical Center, corporate director of clinical informatics at Partners HealthCare System, assistant professor at Harvard Medical School, and chairman of the board at HIMSS.
Like what you hear? Subscribe to the podcast on Apple Podcasts, Spotify or Google Play!
Talking points:
One of the biggest considerations the industry faces with AI is trust.
How can executives at healthcare provider organizations convince clinicians and others to trust an AI system they want to implement?
What must vendors of AI systems for healthcare do to foster trust?
It's extremely important to ensure all parties involved are comfortable with collaboration between man and machine for decision making.
How do healthcare organizations foster such comfort?
What must provider organization health IT leaders know about patient-facing AI tools?
What do the next five years look like with AI in healthcare? What must CIOs and other leaders brace for?
More about this episode:
Where AI is making a difference in healthcare now
UNC Health's CIO talks generative AI work with Epic and Microsoft
Penn Medicine uses AI chatbot 'Penny' to improve cancer care
Healthcare must set guardrails around AI for transparency and safety
How ChatGPT can boost patient engagement and communication
Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
What is ‘ethical AI’ and how can companies achieve it? – The Ohio State University News
In the absence of legal guidelines, companies need to establish internal processes for responsible use of AI. Oscar Wong/Moment via Getty Images
The rush to deploy powerful new generative AI technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law's glacial response to such threats has prompted demands that the companies developing these technologies implement AI ethically.
But what, exactly, does that mean?
The straightforward answer would be to align a business's operations with one or more of the dozens of sets of AI ethics principles that governments, multistakeholder groups and academics have produced. But that is easier said than done.
We and our colleagues spent two years interviewing and surveying AI ethics professionals across a range of sectors to try to understand how they sought to achieve ethical AI and what they might be missing. We learned that pursuing AI ethics on the ground is less about mapping ethical principles onto corporate actions than it is about implementing management structures and processes that enable an organization to spot and mitigate threats.
This is likely to be disappointing news for organizations looking for unambiguous guidance that avoids gray areas, and for consumers hoping for clear and protective standards. But it points to a better understanding of how companies can pursue ethical AI.
Our study, which is the basis for a forthcoming book, centered on those responsible for managing AI ethics issues at major companies that use AI. From late 2017 to early 2019, we interviewed 23 such managers. Their titles ranged from privacy officer and privacy counsel to one that was new at the time but increasingly common today: data ethics officer. Our conversations with these AI ethics managers produced four main takeaways.
First, along with its many benefits, business use of AI poses substantial risks, and the companies know it. AI ethics managers expressed concerns about privacy, manipulation, bias, opacity, inequality and labor displacement. In one well-known example, Amazon developed an AI tool to sort résumés and trained it to find candidates similar to those it had hired in the past. Male dominance in the tech industry meant that most of Amazon's employees were men. The tool accordingly learned to reject female candidates. Unable to fix the problem, Amazon ultimately had to scrap the project.
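To see how that kind of failure arises mechanically, here is a minimal, hypothetical sketch (my own toy example, not Amazon's actual system): a classifier trained on historically skewed hiring decisions learns to penalize a gender proxy feature rather than measure qualification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                # genuine qualification signal
is_female = rng.integers(0, 2, size=n)    # proxy feature, e.g. a gendered keyword on a resume

# Historical labels: past hiring favored men regardless of skill.
hired = ((skill + 1.5 * (1 - is_female) + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

# The coefficient on the proxy comes out strongly negative: the model has
# reproduced the historical bias instead of learning qualification alone.
print("learned weights [skill, is_female]:", model.coef_[0])
```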
Generative AI raises additional worries about misinformation and hate speech at large scale and misappropriation of intellectual property.
Second, companies that pursue ethical AI do so largely for strategic reasons. They want to sustain trust among customers, business partners and employees. And they want to preempt, or prepare for, emerging regulations. The Facebook-Cambridge Analytica scandal, in which Cambridge Analytica used Facebook user data, shared without consent, to infer the users' psychological types and target them with manipulative political ads, showed that the unethical use of advanced analytics can eviscerate a company's reputation or even, as in the case of Cambridge Analytica itself, bring it down. The companies we spoke to wanted instead to be viewed as responsible stewards of people's data.
The challenge that AI ethics managers faced was figuring out how best to achieve ethical AI. They looked first to AI ethics principles, particularly those rooted in bioethics or human rights principles, but found them insufficient. It was not just that there are many competing sets of principles. It was that justice, fairness, beneficence, autonomy and other such principles are contested and subject to interpretation and can conflict with one another.
This led to our third takeaway: Managers needed more than high-level AI principles to decide what to do in specific situations. One AI ethics manager described trying to translate human rights principles into a set of questions that developers could ask themselves to produce more ethical AI software systems. "We stopped after 34 pages of questions," the manager said.
Fourth, professionals grappling with ethical uncertainties turned to organizational structures and procedures to arrive at judgments about what to do. Some of these were clearly inadequate. But others, while still largely in development, were more helpful, such as:
The key idea that emerged from our study is this: Companies seeking to use AI ethically should not expect to discover a simple set of principles that delivers correct answers from an all-knowing, God's-eye perspective. Instead, they should focus on the very human task of trying to make responsible decisions in a world of finite understanding and changing circumstances, even if some decisions end up being imperfect.
In the absence of explicit legal requirements, companies, like individuals, can only do their best to make themselves aware of how AI affects people and the environment and to stay abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse set of stakeholders and seriously engage with high-level ethical principles.
This simple idea changes the conversation in important ways. It encourages AI ethics professionals to focus their energies less on identifying and applying AI principles (though they remain part of the story) and more on adopting decision-making structures and processes to ensure that they consider the impacts, viewpoints and public expectations that should inform their business decisions.
Ultimately, we believe laws and regulations will need to provide substantive benchmarks for organizations to aim for. But the structures and processes of responsible decision-making are a place to start and should, over time, help to build the knowledge needed to craft protective and workable substantive legal standards.
Indeed, the emerging law and policy of AI focuses on process. New York City passed a law requiring companies to audit their AI systems for harmful bias before using these systems to make hiring decisions. Members of Congress have introduced bills that would require businesses to conduct algorithmic impact assessments before using AI for lending, employment, insurance and other such consequential decisions. These laws emphasize processes that address AI's many threats in advance.
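For a concrete sense of what such an audit can involve, here is a minimal sketch (my own illustration, not the text of the New York City law) of a selection-rate impact ratio, one common way to flag potential adverse impact in hiring decisions:

```python
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, sel in decisions:
        totals[group] += 1
        selected[group] += int(sel)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Each group's selection rate relative to the most-favored group;
    # ratios well below 1.0 flag potential adverse impact worth investigating.
    return {g: rate / best for g, rate in rates.items()}

print(impact_ratios([("group A", True), ("group A", True), ("group A", False),
                     ("group B", True), ("group B", False), ("group B", False)]))
```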
Some of the developers of generative AI have taken a very different approach. Sam Altman, the CEO of OpenAI, initially explained that, in releasing ChatGPT to the public, the company sought to give the chatbot enough exposure to the real world that you find some of the misuse cases you wouldn't have thought of so that you can build better tools. To us, that is not responsible AI. It is treating human beings as guinea pigs in a risky experiment.
Altman's call at a May 2023 Senate hearing for government regulation of AI shows greater awareness of the problem. But we believe he goes too far in shifting to government the responsibilities that the developers of generative AI must also bear. Maintaining public trust, and avoiding harm to society, will require companies to face up more fully to their responsibilities.
Dennis Hirsch, Professor of Law and Computer Science; Director, Program on Data and Governance; core faculty TDAI, The Ohio State University and Piers Norris Turner, Associate Professor of Philosophy & PPE Coordinator; Director, Center for Ethics and Human Values, The Ohio State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
US to launch working group on generative AI, address its risks – Reuters.com
WASHINGTON, June 22 (Reuters) - A U.S. agency will launch a public working group on generative artificial intelligence (AI) to help address the new technology's opportunities while developing guidance to confront its risks, the Commerce Department said on Thursday.
The National Institute of Standards and Technology (NIST), a nonregulatory agency that is part of the Commerce Department, said the working group will draw on technical expert volunteers from the private and public sectors.
"This new group is especially timely considering the unprecedented speed, scale and potential impact of generative AI and its potential to revolutionize many industries and society more broadly," NIST Director Laurie Locascio said.
Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and images, and whose impact has been compared to that of the internet.
President Joe Biden said this week he believes the risks of artificial intelligence to national security and the economy need to be addressed, and that he would seek expert advice.
Reporting by Rami Ayyub; editing by Jonathan Oatis
AINsight: Now Everywhere, Can AI Improve Aviation Safety? – Aviation International News
Artificial intelligence (AI) applications have created a buzz on the internet and with investors, and have the potential to transform the aviation industry. From flight data analytics to optimized route and fuel planning applications, AI, in its infancy, is making an impact on aviation, at least operationally. But can it improve safety?
Natural language AI chatbots, such as ChatGPT, according to technology publication Digital Trends, continue to dazzle the internet with AI-generated content, morphing from a novel chatbot into a piece of technology that is driving the next era of innovation. In a mixed outlook, one article states, "No tech product in recent memory has sparked as much interest, controversy, fear, and excitement."
First launched as a prototype in November 2022, ChatGPT (Openai.com) quickly grew to more than 100 million users by January 2023. Last month, traffic grew by more than 54 percent and is closing in on one billion unique users every month.
ChatGPT is a chatbot built on what is called a Large Language Model (LLM). According to Digital Trends, "These neural networks are trained on huge quantities of information from the internet for deep learning, meaning they generate altogether new responses, rather than regurgitating specific canned responses."
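To illustrate the distinction at a toy scale (my own example, vastly simpler than any LLM), the sketch below learns word-to-word statistics from a tiny corpus and then samples new sequences rather than replaying a stored response:

```python
import random
from collections import defaultdict

corpus = "the pilot checked the weather and the fuel before the flight"
words = corpus.split()

# Record which words follow which (a word-bigram "model").
model = defaultdict(list)
for current, following in zip(words, words[1:]):
    model[current].append(following)

# Generate a new sequence one word at a time instead of retrieving a canned reply.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(model.get(word, words))
    output.append(word)
print(" ".join(output))
```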
While most adults in the U.S. have heard about ChatGPT, only 14 percent have used it. To learn what all the hype is about, I asked ChatGPT a few aviation-related questions. It's a bit tongue-in-cheek, but this was an exercise to satisfy my curiosity and to see if the responses were either accurate or innovative.
For fun, I submitted a prompt to ChatGPT with the following question: "How can we improve aviation safety?"
In a matter of seconds, the bot generated a tidy response with an opening statement and 10 key safety measures. It acknowledged that "Aviation safety is a critical concern, and there are several ways to improve it."
Of those 10 safety measures, there were seven categories that included: enhanced training and education (pilots, maintenance personnel, and air traffic controllers), strengthening of safety regulations, implementation of advanced technology (including AI), investment in research and development (including ATC and NAS modernization), improved data sharing, comprehensive safety audits, and the fostering of a safety culture that includes improved reporting and communications.
A brief description of each safety measure was provided. Most relied on overused jargon or buzzwords such as safety culture, data sharing, and best practices. (For the college student experimenting with generative AI, here is a pro tip. Provide a little context to each term.)
In general, a lot of these are white cape safety measures that are easier to talk about than to implement. As an example, improved safety regulations often fall victim to powerful lobbying groups in Washington, D.C. A great example of a regulation influenced by lobbyists is the more proactive, science-based Part 117 duty and rest rules that applied to all Part 121 operators, except cargo airlines.
Under implementation of advanced technologies, there was some serious self-promotion of AI by stating, "Utilize automation and artificial intelligence to enhance decision-making processes, reduce human error, and provide real-time safety information."
Agreed, these are the areas where AI will shine. Recently, the Notam Alliance, a team of notam end-users, pilots, dispatchers, airlines, and other operators, ran an exercise to create a super Notam that helps solve issues with the readability and useability of notams. The group used ChatGPT to see if notams could be understood by a machine; the results were promising: during this demonstration, AI could understand a notam more than 98 percent of the time. (More on this in an upcoming AIN article.)
So, can AI improve aviation safety? The short answer is yes, but along the way, there will be appropriate applications of AI and it will continue to create interest, controversy, fear, and excitement. According to the ChatGPT response, "It's important to note that aviation safety is an ongoing process that requires continuous improvement, vigilance, and collaboration among all stakeholders in the aviation industry."
It's also important to note that AI is not the end of humanity. For those humans with critical thinking skills, and the ability to use prior experiences to perform complex tasks (pilots, safety professionals, and writers), the future is also bright.
The opinions expressed in this column are those of the author and are not necessarily endorsed by AIN Media Group.
YouTube integrates AI-powered dubbing tool – TechCrunch
YouTube is currently testing a new tool that will help creators automatically dub their videos into other languages using AI, the company announced Thursday at VidCon. YouTube teamed up with AI-powered dubbing service Aloud, which is part of Google's in-house incubator Area 120.
Earlier this year, YouTube introduced support for multi-language audio tracks, which allows creators to add dubbing to their new and existing videos, letting them reach a wider international audience. As of June 2023, creators have dubbed more than 10,000 videos in over 70 languages, the company told TechCrunch.
Previously, creators had to partner directly with third-party dubbing providers to create their audio tracks, which can be a time-consuming and expensive process. Aloud lets them dub videos at no additional cost.
Google first introduced Aloud in 2022. The AI-powered dubbing product transcribes a video for the creator, then translates and produces a dubbed version. Creators can review and edit the transcription before Aloud generates the dub.
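As a rough picture of that workflow (my own sketch; the function bodies are placeholders, and none of this is Aloud's actual API), the steps described above chain together like this:

```python
def transcribe(video_path: str) -> str:
    # Placeholder speech-to-text step.
    return "hello and welcome to the channel"

def creator_review(transcript: str) -> str:
    # In practice, the creator edits and corrects the transcript here.
    return transcript

def translate(text: str, target_language: str) -> str:
    # Placeholder machine-translation step.
    return f"[{target_language}] {text}"

def synthesize_speech(text: str) -> bytes:
    # Placeholder text-to-speech step producing a dubbed audio track.
    return text.encode("utf-8")

def dub_video(video_path: str, target_language: str) -> bytes:
    transcript = creator_review(transcribe(video_path))
    return synthesize_speech(translate(transcript, target_language))

audio_track = dub_video("my_video.mp4", "es")
```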
YouTube is testing the tool with hundreds of creators, YouTube's VP of Creator Products, Amjad Hanif, said to the crowd yesterday. Soon the company will open the tool to all creators. Aloud is currently available in English, Spanish and Portuguese. However, there will be more languages offered in the future, such as Hindi and Bahasa Indonesia, among others.
Hanif added that YouTube is working to make translated audio tracks sound like the creator's voice, with more expression and lip sync. YouTube confirmed to TechCrunch that, in the future, generative AI would allow Aloud to launch features like voice preservation, better emotion transfer and lip reanimation.
How AI like ChatGPT could be used to spark a pandemic – Vox.com
New research highlights how language-generating AI models could make it easier to create dangerous germs.
Here's an important and arguably unappreciated ingredient in the glue that holds society together: Google makes it moderately difficult to learn how to commit an act of terrorism. The first several pages of results for a Google search on how to build a bomb, or how to commit a murder, or how to unleash a biological or chemical weapon, won't actually tell you much about how to do it.
It's not impossible to learn these things off the internet. People have successfully built working bombs from publicly available information. Scientists have warned others against publishing the blueprints for deadly viruses because of similar fears. But while the information is surely out there on the internet, it's not straightforward to learn how to kill lots of people, thanks to a concerted effort by Google and other search engines.
How many lives does that save? That's a hard question to answer. It's not as if we could responsibly run a controlled experiment where sometimes instructions about how to commit great atrocities are easy to look up and sometimes they aren't.
But it turns out we might be irresponsibly running an uncontrolled experiment in just that, thanks to rapid advances in large language models (LLMs).
When first released, AI systems like ChatGPT were generally willing to give detailed, correct instructions about how to carry out a biological weapons attack or build a bomb. Over time, OpenAI has corrected this tendency, for the most part. But a class exercise at MIT, written up in a preprint paper earlier this month and covered last week in Science, found that it was easy for groups of undergraduates without relevant background in biology to get detailed suggestions for biological weaponry out of AI systems.
In one hour, "the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization," says the paper, whose lead authors include MIT biorisk expert Kevin Esvelt.
To be clear, building bioweapons requires lots of detailed work and academic skill, and ChatGPT's instructions are probably far too incomplete to actually enable non-virologists to do it so far. But it seems worth considering: Is security through obscurity a sustainable approach to preventing mass atrocities, in a future where information may be easier to access?
In almost every respect, more access to information, detailed supportive coaching, personally tailored advice, and other benefits we expect to see from language models are great news. But when a chipper personal coach is advising users on committing acts of terror, it's not so great news.
But it seems to me that you can solve the problem from two angles.
"We need better controls at all the chokepoints," Jaime Yassif at the Nuclear Threat Initiative told Science. It should be harder to induce AI systems to give detailed instructions on building bioweapons. But also, many of the security flaws that the AI systems inadvertently revealed, like noting that users might contact DNA synthesis companies that don't screen orders, and so would be more likely to authorize a request to synthesize a dangerous virus, are fixable!
We could require all DNA synthesis companies to do screening in all cases. We could also remove papers about dangerous viruses from the training data for powerful AI systems, a solution favored by Esvelt. And we could be more careful in the future about publishing papers that give detailed recipes for building deadly viruses.
The good news is that positive actors in the biotech world are beginning to take this threat seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can detect engineered DNA at scale, providing investigators with the means to fingerprint an artificially generated germ. That alliance demonstrates the ways that cutting-edge technology can protect the world against the malign effects of ... cutting-edge technology.
AI and biotech both have the potential to be tremendous forces for good in the world. And managing risks from one can also help with risks from the other; for example, making it harder to synthesize deadly plagues protects against some forms of AI catastrophe just like it protects against human-mediated catastrophe. The important thing is that, rather than letting detailed instructions for bioterror get online as a natural experiment, we stay proactive and ensure that printing biological weapons is hard enough that no one can trivially do it, whether ChatGPT-aided or not.
A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!
70% of Companies Will Use AI by 2030 — These 2 Stocks Have a … – The Motley Fool
We all benefit from artificial intelligence (AI) right now, even if we're not fully aware of it. For instance, if you found this article through an internet search engine, there's a high probability AI recommended it to you.
AI is also used by entertainment platforms like Netflix to recommend content that users are most likely to enjoy, to keep them engaged longer. And whenever you send money to another person or business online, AI is working in the background to detect potential fraud.
Those are just a few examples of the current uses of AI. According to one estimate, the technology will soon be everywhere.
According to research firm McKinsey, 70% of organizations will be using AI in some capacity by 2030. Thanks to the technology's ability to boost productivity, the firm projects it will add a whopping $13 trillion to global economic output by then.
For investors, the important part here is to note that early adoption will help separate the winners from the losers. McKinsey predicts companies that integrate AI right now -- and continue developing it until 2030 -- will see a 122% increase in their free cash flow by that time. On the other hand, businesses that adopt the technology closer to the end of the decade might only see a 10% boost, and those that don't use AI at all could see a 23% decrease in free cash flow!
Why the disparity? McKinsey analysts think the benefits of AI won't be linear, but will accelerate over time instead. Therefore, early adopters could experience exponential growth in their financial results, whereas those late to the party will be stuck playing catch-up.
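To put those percentages side by side (my own back-of-the-envelope arithmetic, using a hypothetical $100 million free-cash-flow baseline), the gap McKinsey describes looks like this:

```python
baseline_fcf = 100.0  # hypothetical free cash flow today, in $ millions
scenarios = {
    "early adopter (now)":       1.22,   # +122% by 2030
    "late adopter (late 2020s)": 0.10,   # +10%
    "non-adopter":              -0.23,   # -23%
}

for name, change in scenarios.items():
    projected = baseline_fcf * (1 + change)
    print(f"{name:>26}: about ${projected:,.0f}M in free cash flow by 2030")
```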
Here are two companies getting a head start on their competitors. Investors might want to buy shares in them right now.
Palo Alto Networks (PANW -2.09%) recently claimed it's the largest AI-based cybersecurity company, and it might be right. It certainly has one of the largest product portfolios in the entire industry, and given that its stock currently trades at an all-time high, its market valuation of $73 billion dwarfs all of its competitors, too.
The company has three areas of specialization -- cloud security, network security, and security operations -- and it's working to integrate AI across them all. Data is king when it comes to training AI, so cybersecurity companies with a large customer base fending off attacks in their ecosystems tend to have the most potential to produce accurate models.
Palo Alto management says the company's network security tools analyze 750 million data points each day and detect 1.5 million unique attacks that have never been seen before through that process. Overall, its AI models block a whopping 8.6 billion attacks on behalf of customers every single day.
Using AI-powered tools to fight cyber threats is increasingly important because, as SentinelOne recently noted, malicious actors have started using AI as well to launch sophisticated attacks.
Thanks in part to its leadership in AI, Palo Alto is the cybersecurity provider of choice for large organizations. In the recent fiscal 2023 third quarter (ended April 30), the company saw deal volume soar for its top-spending customers. Bookings among customers spending at least $10 million per year jumped 136% year over year, making it the company's fastest-growing cohort. Bookings from customers spending at least $5 million rose a more modest 62% year over year.
Palo Alto's pipeline of work continues to expand as well. While its revenue increased 24% year over year to $1.7 billion in the third quarter, its remaining performance obligations (RPOs) rose by 35% to an all-time high of $9.2 billion. Considering RPOs are expected to convert to revenue over time, the result bodes well.
Despite the all-time price high in the stock, Wall Street is still incredibly bullish. Of the 42 analysts who follow the stock and are tracked by The Wall Street Journal, not a single one recommends selling, and 76% of them have given the stock the highest-possible buy rating. Investors might do well to follow their lead.
Duolingo (DUOL -2.08%) is the world's largest digital language education platform, with more than 500 million downloads. The company takes learning out of the classroom and drops it into the user's pocket, aiming to create a fun, engaging, and interactive experience in the process.
Behind the scenes, Duolingo incorporated incrementally improved versions of AI into its products for 10 years, a process that accelerated recently thanks to a partnership with ChatGPT creator OpenAI.
The chatbot powers two revolutionary features on the Duolingo platform designed to speed up the learning process. The first is called Roleplay, which enables users to converse with an AI-generated partner to improve the user's speaking skills. The second is Explain My Answer, which uses AI to offer personalized advice to users based on their mistakes in each lesson.
OpenAI's new GPT-4 technology is having an even more profound impact on Duolingo. It's helping the company's developers write new lessons at a lightning-quick pace thanks to its ability to form sentences in several different languages. That gives Duolingo's employees more time to focus on building new experiences, rather than writing monotonous, repetitive lesson content.
These are exactly the sort of productivity gains that underpin McKinsey's estimate that AI will add trillions of dollars to the global economy in the long run.
Duolingo is monetizing at a growing rate. The platform is free to use, but 4.8 million of its 72.6 million monthly active users were unlocking additional features by paying a subscription fee during the first quarter of 2023. That was an all-time high, and it drove the company's revenue to $115.7 million in the quarter, up 43% year over year and well above its prior guidance. As a result, Duolingo raised its full-year revenue forecast to $509 million from $498 million previously.
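A little arithmetic on those figures (my own calculation, not the company's) shows what that monetization looks like in percentage terms:

```python
paying_subscribers = 4.8          # millions, Q1 2023
monthly_active_users = 72.6       # millions
q1_revenue = 115.7                # $ millions
yoy_growth = 0.43
new_guidance, old_guidance = 509, 498  # full-year revenue forecast, $ millions

conversion_rate = paying_subscribers / monthly_active_users
implied_prior_year_q1 = q1_revenue / (1 + yoy_growth)
guidance_raise = new_guidance - old_guidance

print(f"paid conversion rate: {conversion_rate:.1%}")                  # roughly 6.6% of users pay
print(f"implied year-ago Q1 revenue: ${implied_prior_year_q1:.1f}M")   # roughly $80.9M
print(f"guidance raised by ${guidance_raise}M ({guidance_raise / old_guidance:.1%})")
```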
And things might get even better in the future, because Roleplay and Explain My Answer were only recently released, and users need to buy the new Duolingo Max subscription tier to have access to them. It's more expensive than the traditional Super Duolingo tier, which could result in more revenue for the company.
Overall, when it comes to language education, Duolingo is solidifying its position at the top of the industry by taking the lead on AI to help users learn more effectively. That's likely to be a tailwind for its stock price over the long term.
Why C3.ai Stock Crashed by 10% on Friday – The Motley Fool
What happened
Though C3.ai's (AI -10.82%) leadership team made multiple optimistic pronouncements during an investor day update Thursday evening, shares of the artificial intelligence company were trading down by 10.6% as of 12:45 p.m. ET Friday.
Citing "broad interest in AI" across industries, CFO Juho Parkinnen said the company's pipeline of new contracts in the works has "basically doubled" since the beginning of its fiscal year. At the same time, management said the "sales cycle is shortening" on these opportunities, with prospects turning into contracts at a faster rate, reports TheFly.com.
So far, so good, right? But if this is the case, then why did C3.ai stock slump Friday?
According to the CFO, at this time last fiscal year, C3.ai had successfully landed 10 deals with prospective clients, whereas so far this fiscal year, the company has landed 16 deals. The problem is, as investment bank JPMorgan points out, most of these deals are for pilot projects, and not full-fledged, long-term contracts. The bank wants to see how many of these pilot projects turn into longer contracts so it can determine how accurate "the assumptions around the consumption-based pricing model" are.
Similarly, investment bank DA Davidson wrote Friday that it's pretty sure C3.ai's success in landing pilot projects is already reflected in the stock's price -- which, after all, has roughly tripled since the start of this year.
DA Davidson isn't coming right out and saying that it thinks C3.ai stock is overpriced, mind you. However, it reiterated the $30 price target that it put on the stock early this month. Given that C3.ai is trading at around $33, that's kind of the same thing, and suggests a downgrade may be imminent -- though it did maintain its neutral rating.
Given that the company has no profits and trades at more than 15 times sales, there's a lot of hype built into C3.ai's valuation right now. Discretion may be the better part of valor on this one, folks. Until C3.ai proves that it can turn its pilot deals into long-term contracts, and its long-term contracts into sustainable profits, invest in it at your own risk.
JPMorgan Chase is an advertising partner of The Ascent, a Motley Fool company. Rich Smith has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends JPMorgan Chase. The Motley Fool recommends C3.ai. The Motley Fool has a disclosure policy.