Category Archives: Artificial Intelligence

3 Top Artificial Intelligence Stocks to Buy Right Now – The Motley Fool

Unless you've been living under a rock without internet or wireless service, there's a good chance you've heard about some of the incredible breakthroughs that have been happening with artificial intelligence (AI) lately. Major technological leaps forward are occurring seemingly overnight and suggest that AI capabilities are on track to grow much faster than many had anticipated.

That said, as impressive as big breakthroughs in AI-generated art and OpenAI's ChatGPT have been, this paradigm shift is still just starting to unfold. While some companies have already seen explosive gains in conjunction with the excitement surrounding AI, others remain significantly underappreciated.

Read on for a look at three potentially explosive AI stocks that are worth buying right now.

CrowdStrike's (CRWD -0.24%) Falcon software helps protect computers, mobile devices, servers, and other endpoint hardware from being exploited. And crucially, the Falcon platform uses artificial intelligence to grow and adapt as it runs into new threats and attack vectors.

Amid some powerful demand catalysts, CrowdStrike's business has been going gangbusters. CrowdStrike ended the year with annual recurring revenue of $2.56 billion, and it expects to grow its subscription sales base to $5 billion in fiscal 2026 -- good for growth of roughly 95% across the stretch. Even at the end of that projection period, the company will likely still be scratching the surface of its market opportunity.

Thanks to growth in existing services, new product launches, future initiatives, and cloud-security opportunities, the cybersecurity specialist estimates that its total addressable market will have expanded from $76 billion this year to $158 billion in 2026. But based on its targets, the company will still be tapping just over 3% of its addressable market at that point.
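As a quick sanity check on those figures, here is a back-of-the-envelope calculation (a minimal Python sketch; the dollar amounts are the ones cited above, and the script simply reproduces the article's arithmetic):

```python
# Rough check of the growth and market-penetration figures cited above.
arr_now = 2.56e9      # annual recurring revenue at year end (USD)
arr_target = 5.0e9    # projected subscription sales base in fiscal 2026 (USD)
tam_2026 = 158e9      # estimated total addressable market in 2026 (USD)

growth_pct = (arr_target / arr_now - 1) * 100
penetration_pct = arr_target / tam_2026 * 100

print(f"Implied subscription growth: {growth_pct:.0f}%")    # roughly 95%
print(f"Implied share of 2026 TAM: {penetration_pct:.1f}%")  # just over 3%
```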

However, despite strong business performance and huge growth opportunities ahead, CrowdStrike stock has lost ground in conjunction with macroeconomic pressures impacting the broader market. Trading down 54% from its valuation peak, the software specialist presents an attractive risk-reward profile for investors looking to benefit from AI and cybersecurity trends.

E-commerce has historically been a low-margin business, but artificial intelligence has the potential to change that and pave the way for Amazon (AMZN 0.11%) to see huge earnings growth. Between advances in AI and robotics, the online retail giant will have opportunities to automate warehouse operations and offload deliveries to autonomous vehicles. The company's Zoox robotaxi business could also emerge as a significant sales and earnings driver.

In addition to advances in factory automation and autonomous shipping, the company is making some big moves in the consumer robotics space. The tech giant is on track to acquire iRobot, the maker of the popular Roomba vacuum cleaners, in a $1.7 billion deal. The move will not only push Amazon into a new consumer tech category, but will also give the company access to data that can be fed into AI algorithms that lead to improvements and opportunities for other company initiatives.

Amazon's Echo smart speaker hardware and Alexa software also have the company positioned as a leader in terms of voice-based devices and operating systems. The company's strengths in these categories have already yielded benefits for its e-commerce business and data analytics initiatives, but leadership in voice-based OS potentially creates huge advantages in the AI space, and crossover opportunity between these two categories is likely just beginning to unfold.

With the stock still down roughly 47% from its high and the market seemingly underestimating its potential from AI, Amazon looks like a smart buy right now.

In some ways, an explosion of data generation and collection is the fuel that's powering the artificial intelligence revolution. But without special software tools, in many cases it's actually not possible to efficiently combine and analyze data generated from distinct cloud infrastructure services. Snowflake's (SNOW 0.53%) Data Cloud platform makes it possible to bring together data from Amazon, Microsoft, and Alphabet's respective cloud infrastructures.

AI and big-data trends are occurring in tandem, and they're still just starting to unfold. To put the progression of the latter trend in perspective, Tokyo Electron CEO Toshiki Kawai estimates that global data generation will increase tenfold by 2030. From there, he estimates that data generation will grow another hundredfold by 2040, which taken together implies roughly a thousandfold increase from today's levels. Snowflake is on track to benefit from the ongoing evolution of big data, and its software tools are already playing a key role in powering AI and analytics applications.

At the end of last year, the data-services company tallied 330 customers generating trailing-12-month product revenue of more than $1 million, which represented a 79% increase in the number of customers in that category. Spurred by growing demand for analytics and app-building technologies, the company estimates that it will grow product revenue from roughly $2.7 billion this fiscal year to $10 billion in the fiscal year ending January 2029. Crucially, the data-services specialist could still have room for explosive growth from there.

Snowflake has seen macroeconomic pressures hurt its valuation and curb some of its near-term growth opportunities, but the market appears to be underestimating its significance as a player in AI. Down 65% from its high, the stock could go on to be an explosive winner for risk-tolerant investors.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Keith Noonan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon.com, CrowdStrike, Microsoft, Snowflake, and iRobot. The Motley Fool has a disclosure policy.


Elon Musk agrees A.I. will hit people like an asteroid, says he used Obama meeting to urge regulation – Fortune

Elon Musk thinks the world is woefully unprepared for the impact of artificial intelligence. On Sunday, he agreed that the technology will hit people "like an asteroid," and he revealed that he used his only one-on-one meeting with then-President Barack Obama to push for A.I. regulation.

The Twitter and Tesla CEO made the comments in response to a tweet from A.I. software developer Mckay Wrigley, who wrote on Saturday: "It blows my mind that people can't apply exponential growth to the capabilities of AI. You would've been called a *lunatic* a year ago if you said we'd have GPT-4 level AI right now. Now think another year. 5yrs? 10yrs? It's going to hit them like an asteroid."

Musk responded: "I saw it happening from well before GPT-1, which is why I tried to warn the public for years. The only one on one meeting I ever had with Obama as President I used not to promote Tesla or SpaceX, but to encourage AI regulation." Obama had dinner with Musk in February 2015 in San Francisco.

This week, Musk responded to news about Senate Majority Leader Chuck Schumer laying the groundwork for Congress to regulate artificial intelligence.

"Good news! AI regulation will be far more important than it may seem today," Musk tweeted.

According to the Financial Times, Musk is developing plans to launch an A.I. startup, dubbed X.AI, to compete against Microsoft-backed OpenAI, which makes generative A.I. tools, including the A.I. chatbots ChatGPT and GPT-4 and the image generator DALL-E 2.

Musk is also reportedly working on an A.I. project at Twitter.

A few weeks ago, Musk called for a six-month pause on developing A.I. tools more advanced than GPT-4, the successor to ChatGPT. He was joined in signing an open letter by hundreds of technology experts, among them Apple cofounder Steve Wozniak. The letter warned of mass-scale misinformation and the mass automation of jobs.

The power of A.I. systems to automate some white-collar jobs is in little doubt. A Wharton professor recently ran an experiment to see what A.I. tools could accomplish on a business project in 30 minutes and called the results superhuman. Meanwhile some remote workers are apparently taking advantage of productivity-enhancing A.I. tools to hold multiple full-time jobs, with their employers none the wiser. But fears that in the long run A.I. will replace many jobs are mounting.

Musk cofounded OpenAI in 2015 as a nonprofit, but he parted ways with it after a power struggle with CEO Sam Altman over its control and direction, according to the Wall Street Journal.

He tweeted on Feb. 17 that OpenAI was created as an open-source nonprofit to serve as a counterweight to Google, but now it has become "a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all."

Altman himself has warned frequently about the dangers of artificial intelligence. Last month in an ABC interview, he said that other A.I. developers working on ChatGPT-like tools won't apply the kind of safety limits his firm has, and the clock is ticking.

Musk has long believed that oversight for artificial intelligence is necessary, having described the technology as potentially more dangerous than nukes.

"We need some kind of, like, regulatory authority or something overseeing A.I. development," he told Tesla investors last month. "Make sure it's operating in the public interest."


Kennesaw State partners with Equifax to advance research in … – Kennesaw State University

KENNESAW, Ga. | Apr 13, 2023

Through its ongoing partnership with Atlanta-based global data, analytics, and technology company Equifax, Kennesaw State has launched a second research lab, the AI Ethics Lab. The new research lab will focus on studying the use of artificial intelligence in the U.S. financial services industry.

According to MinJae Woo, assistant professor of statistics and data science at Kennesaw State University, it is important that credit models used to make financial decisions are transparent and explainable, so consumers can understand the outcome of decisions. As the AI Ethics Lab's director, Woo will work with two doctoral students to establish methods that will help identify how an AI-powered process may create different outcomes than traditional models and the potential impact of these differences.


"We live in a time when AI is coming to a variety of fields," Woo said. "Studying how AI indirectly acquires information is key to ensuring discrimination and unintended ethical issues do not arise within the models."

This is the second collaboration between the University and Equifax. In 2017, Kennesaw State's Equifax Data Science Research Lab was launched with a mission to investigate business challenges and opportunities created by non-traditional sources of consumer and commercial data. The success of the data science lab, combined with Woo's AI research, prompted Equifax to approach KSU about starting a new lab.

"As one of the first patent holders for explainable AI in credit risk modeling, Equifax understands the importance of studying the impacts of how the technology is used by data scientists and our customers," Christopher Yasko, chief data scientist at Equifax, said. "Expanding our work with KSU builds our academic partnerships, fueling the innovators of tomorrow while they focus on issues that can help move our industry and business forward."

According to Woo, the field of AI ethics is still in its infancy, but it's a growing area. Last December, three KSU doctoral students graduated from the School of Data Science and Analytics; all three worked on data ethics during their studies, and each secured a position focused on the topic.

Equifax has been applying machine learning, a subset of artificial intelligence, for at least two decades. As a leader in explainable AI, its research efforts include more than 25 current and pending patents related to AI. Joseph White, distinguished data scientist at Equifax, leads Equifax's participation in the new AI Ethics Lab at KSU.

"Our team is excited to work with Kennesaw State University to explore how models can remain fair and consistent across a wide range of both known and unknown dimensions," White said. "The new lab will have four components that can be explored over time: privacy, robustness, explainability, and fairness."

Woo and his team have been analyzing data provided by Equifax. Next, they will study models and create hypotheses to help find and address any unintended disparities.

Abbey O'Brien Barrows. Photos by Darnell Wilburn.

A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate and doctoral degrees to its more than 43,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university's vibrant campus culture, diverse population, strong global ties and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 7 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.


St. Louis County hopes artificial intelligence will reduce wait times … – St. Louis Public Radio

The St. Louis County Police Department has tapped artificial intelligence technology to reduce 911 wait times for county residents.

"We are trying to provide prompt, efficient and accurate service to first responders in the community," said Brian Battles, the administrative specialist for the department's Bureau of Communications. "But over the last three years, we noticed a decrease in the amount of applications that we've taken for the public safety dispatcher position, while the workload has increased. We were not able to keep up with that under the direction we were going."

Dispatchers handle about 2,000 calls a day, split roughly 50/50 between 911 and nonemergency issues like how to get a copy of a police report. Priority always goes to 911 calls, Battles said, but once dispatchers get on a nonemergency call, they cannot switch if an emergency call comes in.

"And you're locked into a five-minute conversation with somebody on a nonemergency call in which you're not going to be able to provide them any assistance anyway," he said. That can leave someone who needs 911 waiting for the next available operator.

In order to free up dispatchers for 911 calls, the bureau needed to find a way to divert nonemergency calls. After consulting with other agencies and looking at national trends, Battles said, the department signed a contract with AT&T, which uses an intelligent voice assistant from Five9.


Since the system went live in March, Battles said, the volume of nonemergency calls answered by dispatchers has decreased by 60%.

All 911 calls are still handled by dispatchers, Battles said. But those who dial the nonemergency line (636-529-8210) will have their call answered by a voice asking them to "please state the nature of your call." The system is programmed to recognize key words and phrases and then route the caller to the correct department, though callers will get to a live person if the system incorrectly routes them twice.
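As a rough illustration of the routing logic described above, here is a minimal sketch in Python. It is hypothetical, not Five9's or AT&T's actual implementation; the department names, key phrases, and function names are invented purely for the example.

```python
# Hypothetical sketch of keyword-based routing for nonemergency calls,
# with a hand-off to a live dispatcher after two incorrect routes.
# Departments and phrases are illustrative, not the county's real configuration.

ROUTES = {
    "records": ["police report", "copy of a report", "records request"],
    "traffic": ["ticket", "citation", "parking", "tow"],
    "precinct_desk": ["speak to an officer", "follow up", "case number"],
}

MAX_MISROUTES = 2  # after two wrong routes, the caller reaches a live person


def route_call(transcribed_speech: str, misroutes_so_far: int) -> str:
    """Pick a destination for a nonemergency call based on recognized phrases."""
    if misroutes_so_far >= MAX_MISROUTES:
        return "live_dispatcher"
    text = transcribed_speech.lower()
    for department, phrases in ROUTES.items():
        if any(phrase in text for phrase in phrases):
            return department
    # Nothing recognized: ask the caller to restate rather than guessing.
    return "reprompt"


if __name__ == "__main__":
    print(route_call("I need a copy of a report from last week", 0))  # records
    print(route_call("something the system can't place", 2))          # live_dispatcher
```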

Matt Crecelius, business manager for the St. Louis County Police Officers Association, called the system "a great benefit to the community and our emergency dispatchers," saying it makes workloads more manageable.

Donald Wunsch, director of the Kummer Institute's Center for Artificial Intelligence and Autonomous Systems at Missouri University of Science and Technology in Rolla, said residents should give the system a chance.

"The chances are that more often than not, this system is likely to forward you to as good a direction as the random person operating that system would be," he said. "Even when you get to a human, it's kind of annoying if you have to get forwarded five times before you get to the right human."


Google CEO Sundar Pichai weighs in on the future of artificial … – Seeking Alpha


The competition for AI dominance is heating up as the world's biggest tech giants go all in on an area that will "impact every product across every company." That's the opinion of Google (NASDAQ:GOOG) (NASDAQ:GOOGL) CEO Sundar Pichai, who hastily released the company's chatbot called Bard in March after Microsoft (NASDAQ:MSFT) poured billions of dollars into ChatGPT maker OpenAI. The developing industry also isn't limited to chatbots, with calls to pause many AI tools until new safety standards for the technology are in place, such as regulation for the economy, laws to punish abuse, and international treaties to make artificial intelligence safe for the world.

Is society prepared for what's coming? "On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch," Pichai told CBS's 60 Minutes. "On the other hand, compared to any other technology, I've seen more people worried about it earlier in its life cycle... and worried about the implications." Knowledge workers could face the biggest disruption from future AI technologies, he added, which could unsettle writers, accountants, architects, and even software engineers.

Update (6:30 AM ET): Alphabet shares (GOOG) (GOOGL) are down nearly 3% in premarket trade on a report that smartphone giant Samsung (OTCPK:SSNLF) might replace Google as the default search service on its devices. An estimated $3B in annual revenue is at stake with the contract, per the New York Times.

There is no doubt that companies are on the brink of something big in terms of artificial intelligence, but it's also important to separate hype from the reality when talking about any emerging technology (remember Web 3.0?). There has been a lot of talk about the sentience of chatbots and the genesis of a new humanity, as well as an end to privacy and personal liberty or the quick demise of entire industries. There are also countless AI startups that are looking to play up the news cycle for valuable sources of funding, and even capitalize on investment from the public sector in terms of defense and national security.

SA commentary: Ironside Research explores why Google (GOOG) (GOOGL) was smart to let Microsoft launch its AI first, while Deep Tech Insights says its AI is even three times larger than ChatGPT. Meanwhile, Investing Groups Leader Samuel Smith calls out three AI stocks that are poised to win over the next decade and Luckbox Magazine explains how to add AI to your portfolio. Deep learning and debate are also taking place around hot industry players like C3.ai (NYSE:AI), with Julian Lin calling it an AI meme stock and Stone Fox Capital flagging the recent pullback as a buying opportunity.


ChatGPT, artificial intelligence, and the news – Columbia Journalism Review

When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy: an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn't seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay, which was launched in 2016 and quickly morphed from a novelty act into a racism scandal before being shut down, or even Eliza, the first automated chat program, which was introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but how close we are to what experts call Artificial General Intelligence, or AGI, which, they warn, could transform society in ways that we don't understand yet. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is as revolutionary as mobile phones and the Internet.

The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man's widow and chat logs, the software appears to have encouraged the user to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine powered by an open-source version of ChatGPT, it offered different methods of suicide with very little prompting.) When Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT, another program based on an open-source version of ChatGPT which, according to its creator, has no guardrails around sensitive topics, that chatbot praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city's homeless crisis, [and] used the n-word.

The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics with outstanding sexual harassment allegations against them. The software cited a Post article from 2018, but no such article exists, and Turley said that he's never been accused of harassing a student. When the Post tried asking the same question of Microsoft's Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, and cited an op-ed piece that Turley published in USA Today, in which he wrote about the false accusation by ChatGPT. In a similar vein, ChatGPT recently claimed that a mayor in Australia had served prison time for bribery, which was also untrue. The mayor has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.

According to a report in Motherboard, a different AI chat program, Replika, which is also based on an open-source version of ChatGPT, recently came under fire for sending sexual messages to its users, even after they said they weren't interested. Replika placed limits on the bot's referencing of erotic roleplay, but some users who had come to depend on their relationship with the software subsequently experienced mental-health crises, according to Motherboard, and so the erotic roleplay feature was reinstated for some users. Ars Technica recently pointed out that ChatGPT, for its part, has invented books that don't exist, academic papers that professors didn't write, false legal citations, and a host of other fictitious content. Kate Crawford, a professor at the University of Southern California, told the Post that because AI programs respond so confidently, it's very seductive to assume they can do everything, and it's very difficult to tell the difference between facts and falsehoods.

Joan Donovan, the research director at the Harvard Kennedy School's Shorenstein Center, told the Bulletin of the Atomic Scientists that disinformation is a particular concern with chatbots because AI programs lack any way to tell the difference between true and false information. Donovan added that when her team of researchers experimented with an early version of ChatGPT, they discovered that, in addition to sources such as Reddit and Wikipedia, the software was also incorporating data from 4chan, an online forum rife with conspiracy theories and offensive content. Last month, Emily Bell, the director of Columbia's Tow Center for Digital Journalism, wrote in The Guardian that AI-based chat engines could create a new fake news frenzy.

As I wrote for CJR in February, experts say that the biggest flaw in a large language model like the one that powers ChatGPT is that, while the engines can generate convincing text, they have no real understanding of what they are writing about, and so often insert what are known as "hallucinations," or outright fabrications. And it's not just text: along with ChatGPT and other programs have come a similar series of AI image generators, including Stable Diffusion and Midjourney, which are capable of producing believable images, such as the recent photos of Donald Trump being arrested, which were actually created by Eliot Higgins, the founder of the investigative reporting outfit Bellingcat, and a viral image of the Pope wearing a stylish puffy coat. (Fred Ritchin, a former photo editor at the New York Times, spoke to CJR's Amanda Darrach about the perils of AI-created images earlier this year.)

Three weeks ago, in the midst of all these scares, a body called the Future of Life Institute, a nonprofit organization that says its mission is to reduce global catastrophic and existential risk from powerful technologies, published an open letter calling for a six-month moratorium on further AI development. The letter suggested that we might soon see the development of AI systems powerful enough to endanger society in a number of ways, and stated that these kinds of systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. More than twenty thousand people signed the letter, including a number of AI researchers and Elon Musk. (Musk's foundation is the single largest donor to the institute, having provided more than eighty percent of its operating budget. Musk himself was also an early funder of OpenAI, the company that created ChatGPT, but he later distanced himself after an attempt to take over the company failed, according to a report from Semafor. More recently, there have been reports that Musk is amassing servers with which to create a large language model at Twitter, where he is the CEO.)

Some experts found the letter over the top. Emily Bender, a professor of linguistics at the University of Washington and a co-author of a seminal research paper on AI that was cited in the Future of Life open letter, said on Twitter that the letter misrepresented her research and was "dripping with #AIhype." In contrast to the letter's vague references to some kind of superhuman AI that might pose profound risks to society and humanity, Bender said that her research focuses on how large language models, like the one that powers ChatGPT, can be misused by existing oppressive systems and governments. The paper that Bender co-published in 2021 was called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" It asked whether enough thought had been put into the potential risks of such models. After the paper came out, two of Bender's co-authors were fired from Google's AI team. Some believe that Google made that decision because AI is a major focus for the company's future.

As Chloe Xiang noted for Motherboard, Arvind Narayanan, a professor of computer science at Princeton and the author of a newsletter called AI Snake Oil, also criticized the open letter for making it harder to tackle real AI harms, and characterized many of the questions that the letter asked as ridiculous. In an essay for Wired, Sasha Luccioni, a researcher at the AI company Hugging Face, argued that a pause on AI research is impossible because it is already happening around the world, meaning there is no magic button that would halt dangerous AI research while allowing only the safe kind. Meanwhile, Brian Merchant, at the LA Times, argued that all the doom-and-gloom about the risks of AI may spring from an ulterior motive: apocalyptic doomsaying about the terrifying power of AI makes OpenAI's technology seem important, and therefore valuable.

Are we really in danger from the kind of artificial intelligence behind services like ChatGPT, or are we just talking ourselves into it? (I would ask ChatGPT, but I'm not convinced I would get a straight answer.) Even if it's the latter, those talking themselves into it now include regulators both in the US and around the world. Earlier this week, the Wall Street Journal reported that the Biden administration has started examining whether some kind of regulation needs to be applied to tools such as ChatGPT, due to concerns that the technology could be used to discriminate or spread harmful information. Officials in Italy already banned ChatGPT for alleged privacy violations. (They later stated that the chatbot could return if it meets certain requirements.) And the software is facing possible regulation in a number of other European countries.

As governments are working to understand this new technology and its risks, so, too, are media companies. Often, they are doing so behind the scenes. But Wired recently published a policy statement on how and when it plans to use AI tools. Gideon Lichfield, Wired's global editorial director, told the Bulletin of the Atomic Scientists that the guidelines are designed both "to give our own writers and editors clarity on what was an allowable use of AI, as well as for transparency so our readers would know what they were getting from us." The guidelines state that the magazine will not publish articles written or edited by AI tools, except when the fact that it's AI-generated is the whole point of the story.

On the other side of the ledger, a number of news organizations seem more concerned that chatbots are stealing from them. The Journal reported recently that publishers are examining the extent to which their content has been used to train AI tools such as ChatGPT, how they should be compensated and what their legal options are.



With no one at the wheel, artificial intelligence races ahead – University of Miami: News@theU

University of Miami innovation and data science specialists assess the newest phase of artificial intelligence where tools and models utilizing turbo-charged computing power have transitioned from development to market production.

In late March, citing concerns that not even the creators of powerful new artificial intelligence systems can understand, predict, or reliably control them, more than a thousand AI sector experts and researchers published an open letter in Le Monde calling for a six-month pause in research on artificial intelligence systems more powerful than the new GPT-4, or Generative Pre-Trained Transformer 4, the model linked to the popular ChatGPT.

Max Cacchione, director of innovation with University of Miami Information Technology (UMIT), and David Chapman, an associate professor of computer science with the Institute for Data Science and Computing (IDSC), both dismissed the feasibility of any such moratorium.

"Zero chance it will happen. AI is like a virus, and you can't contain a virus," said Cacchione, also the director of Innovate, a group which supports and implements innovative technology initiatives across the University. "You can put a rule or law in place, but there's always someone who will get around it, both nationally and internationally."

Chapman pointed to the intense competition in the industry as a major reason no pause would be enacted.

"If we pause AI research, who else is going to proceed to develop the technology faster than us? These new tools and models are really coming to market now and, if we don't pursue them, then someone else will be making those advances," Chapman said.

Cacchione, though, highlighted that the concerns outlined in the letter were warranted.

"The only thing that's preventing a disaster right now is that AI is contained in an environment where it's not actionable; it's not connected to commercial airlines, a nuclear facility, a dam, or something like that," Cacchione said. "If it were connected right now, it would be in a position to cause a lot of damage."

"The problem is that AI is an intelligence without any morals and guidance," he added. "It's without a soul, so it's going to do what's most logical, and it won't feel bad about us or factor in the long-term survival of humanity if it's not programmed to do so."

Recently, the AI image generator Midjourney was used to generate a number of false images: Pope Francis in a puffy white parka and Donald Trump being arrested and then escaping from jail. The small startup has since, at least temporarily, disabled its free trial options, but the brouhaha prompted media outlets to decry the absence of oversight.

Cacchione stressed that there is no single regulatory body responsible for regulating AI research and relatively few specific regulations focused solely on AI.

He identified, though, a range of organizations and agencies including the European Union, United Nations Group of Governmental Experts on Autonomous Weapons Systems, the Institute of Electrical and Electronics Engineers, the Partnership on AI, and the Global Partnership on AI, among others that are working to develop guidelines and frameworks for the ethical and responsible use of AI.

Cacchione also mentioned efforts to regulate AI at the U.S. federal level, pointing out that in 2019, Congress established the National AI Initiative Act to coordinate federal investments in AI research and development. The bill also included provisions for establishing a national AI research resource task force, promoting AI education and training, and addressing ethical and societal implications of AI.

Chapman noted that, historically, regulatory policy has always lagged behind technological advances and that, if this were not the case, advances important to humankind would be stymied.

"The idea that AI can be used to create false content, among other things, these are just things that society's going to evolve to," Chapman said. "Regulations for AI are going to catch up and progress over time, and societal norms will change as we have access to more powerful tools that are ultimately going to help us live more productive lives."

Cacchione pointed out that AI research dates to the 1950s, when computer scientists first began to explore the concept of creating intelligent machines. The term artificial intelligence was coined in 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon at the Dartmouth Conference.

He highlighted the many milestones and the remarkable development pace of the past decades that have resulted today in self-driving cars, medical diagnosis, and robotics.

"The potential applications of AI are vast and include improving healthcare, addressing climate change, space exploration, and advancing scientific research," he said. "While there are still significant challenges to be overcome, AI has the potential to revolutionize many aspects of our lives and create new opportunities for innovation and progress."

Yet, while recognizing the tremendous upside, Cacchione highlighted the parallels between AI and Crypto and the potential for misuse in both sectors.

"Both have the potential to be used for malicious purposes, such as money laundering, fraud, or cyberattacks," Cacchione said. "This potential for misuse has raised concerns among regulators, who worry that these technologies could be used to undermine national security, financial stability, or consumer protections."

The innovation specialist noted that both sectors can be characterized by decentralization, with the effect of operating outside of traditional regulatory frameworks and without being subject to the same types of oversight as other industries. This can make it difficult for regulators to enforce existing laws and regulations or to develop new regulations that can effectively address the unique challenges presented by these technologies.

Both specialists concurred that AI has transitioned to a new phase, from research and development to commercialization.

"People have been doing really impressive things with generative adversarial networks, and AI image generation software has been in development, at least on a small scale in research labs, for the past eight years or so," Chapman noted.

What's new and different, he said, is the amount of computing resources and data people are now investing in training these models.

"The biggest change in the last year is that we're starting to see the machine learning, the deep learning, hit the mass market," he said. "It's not just research software anymore; you can actually see tools such as ChatGPT that have been in research for the past decade or so finally starting to go into production, and you start to finally have access to that technology."

Chapman highlighted AIs benefits and potential to save both cost and labor and improve efficiency. He emphasized that ultimately AI is a tool, an algorithm, that is based on data analysis and statistical modeling and that depends on humans to provide input.

AI can now create images more quickly, and those images can be of anything a user wants, for example, special effects for a movie.

"That's a great use of this technology, and something that would save a lot in terms of cost. The experience would be better just because you're able to automate the process of creating images," Chapman said.

"So, the question is: Who is using artificial intelligence and for what purpose?" Chapman said.


Research reveals how Artificial Intelligence can help look for alien lifeforms on Mars and other planets – WION

Aliens have long been a fascinating subject for humans. Innumerable movies, TV series, and books are proof of this allure. Our search for extraterrestrial life has even taken us to other planets, albeit remotely. This search has progressed by leaps and bounds in the last few years, but it is still in its nascent stages. Global space agencies like the National Aeronautics and Space Administration (NASA) and the China National Space Administration (CNSA) have in recent years sent rovers to Mars to aid this search remotely. However, the accuracy of these random searches remains low.


To remedy this, the Search for Extraterrestrial Intelligence (SETI) Institute has been exploring the use of artificial intelligence (AI) for finding extraterrestrial life on Mars and other icy worlds.

According to a report on Space, a recent study from SETI states that AI could be used to detect microbial life in the depths of the icy oceans on other planets.

In a paper published in Nature Astronomy, the team details how they trained a machine-learning model to scan data for signs of microbial life or other unusual features that could be indicative of alien life.


Using a machine-learning algorithm called a convolutional neural network (CNN), a multidisciplinary team of scientists led by SETI's Kim Warren-Rhodes has mapped sparse lifeforms on Earth. Warren-Rhodes worked alongside experts from other prestigious institutions: Michael Phillips of the Johns Hopkins Applied Physics Lab and Freddie Kalaitzis of the University of Oxford.

The system they developed used statistical ecology and AI to detect biosignatures with up to 87.5 per cent accuracy, compared to only 10 per cent for random searches. As per the researchers, it can potentially reduce the search area by up to 97 per cent, making it easier for scientists to locate potential chemical traces of life.
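For readers curious what a convolutional classifier of this general kind looks like in practice, below is a minimal, illustrative sketch in Python using PyTorch. It is not the SETI team's published model; the patch size, layer sizes, and two-class setup (likely biosignature habitat vs. background) are assumptions made purely for illustration.

```python
# Illustrative CNN that scores small terrain-image patches as likely
# biosignature habitat vs. background. Sizes and architecture are assumptions
# for the example, not the model described in the Nature Astronomy paper.

import torch
import torch.nn as nn


class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),  # assumes 64x64 input patches
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


if __name__ == "__main__":
    model = PatchClassifier()
    patches = torch.randn(8, 3, 64, 64)  # a batch of eight 64x64 RGB patches
    probabilities = torch.softmax(model(patches), dim=1)
    print(probabilities.shape)  # torch.Size([8, 2]): habitat probability per patch
```

Trained on labeled field imagery, per-patch scores from a model in this family could be aggregated into a probability map that narrows where to look, which is the general idea behind the reported reduction in search area.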


For testing their system, they initially focused on the sparse lifeforms that dwell in salt domes, rocks, and crystals at Salar de Pajonales, at the boundary of the Chilean Atacama Desert and Altiplano.

Warren-Rhodes and his team collected over 8,000 images and 1,000 samples from Salar de Pajonales to search for photosynthetic microbes that may represent a biosignature on NASA's "ladder of life detection" for finding life beyond Earth.

The team also used drone imagery to simulate Mars Reconnaissance Orbiter's High-Resolution Imaging Experiment camera's Martian terrain images to examine the region.

They found that microbial life in the region is concentrated in biological hotspots that strongly relate to the availability of water.

Researchers suggest that the machine learning tools developed can be used in robotic planetary missions like NASA's Perseverance Rover. The tools can guide rovers towards areas with a higher probability of having traces of alien life, even if they are rare or hidden.

"With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harbouring past or present life no matter how hidden or rare," explained Warren-Rhodes.

(With inputs from agencies)



US Artificial Intelligence Regulations: Watch List for 2023 | Insights & … – Goodwin Procter

Companies are developing, deploying, and interacting with artificial intelligence (AI) technologies more than ever. At Goodwin, we are keeping a close eye on any regulations that may affect companies operating in this cutting-edge space.

For companies operating in Europe, the landscape is governed by a number of in-force and pending EU legislative acts, most notably the EU AI Act, which is expected to be passed later this year; it was covered in our prior client alert here: EU Technology Regulation: Watch List for 2023 and Beyond. The United Kingdom has recently indicated that it may take a different approach, as discussed in our client alert on the proposed framework for AI regulation in the United Kingdom here: Overview of the UK Government's AI White Paper.

For companies operating in the United States, the landscape of AI regulation remains less clear. To date, there has been no serious consideration of a US analog to the EU AI Act or any sweeping federal legislation to govern the use of AI, nor is there any substantial state legislation in force (although there are state privacy laws that may extend to AI systems that process certain types of personal data).

That said, we have recently seen certain preliminary and sector-specific activity that gives clues about how the US federal government is thinking about AI and how it may look to govern it in the future. Specifically, the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC), and the Food and Drug Administration (FDA) have all provided recent guidance. This client alert reviews this activity and is important reading for any business implementing or planning to implement AI technologies in the United States.

On January 26, 2023, NIST, an agency of the US Department of Commerce, released its Artificial Intelligence Risk Management Framework 1.0 (the RMF), as a voluntary, non-sector-specific, use-case-agnostic guide for technology companies that are designing, developing, deploying, or using AI systems to help manage the many risks of AI. Beyond risk management, the RMF seeks to promote trustworthy and responsible development and use of AI systems.

As the federal AI standards coordinator, NIST works with government and industry leaders both in the United States and internationally to develop technical standards to promote the adoption of AI, enumerated in the Technical AI Standards section on its website. In addition, Section 5301 of the National Defense Authorization Act for Fiscal Year 2021 directed NIST to develop a voluntary risk management framework for trustworthy AI systems, the RMF. Although the RMF is voluntary, it does provide good insights into the considerations the federal government is likely to take into account in any future regulation of AI and, as it evolves, it could eventually be adopted as an industry standard. We summarize the key aspects below.

A key recognition by the RMF is that humans typically assume AI systems are objective and high functioning. This assumption can inadvertently cause harm to people, communities, organizations, or broader ecosystems, including the environment. Enhancing the trustworthiness of an AI system can help mitigate the risk of this harm. The RMF defines trustworthiness as having seven defined characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

The RMF also notes that AI systems are subject to certain risks that are unique to the technology.

The RMF outlines four key functions (Govern, Map, Measure, and Manage) to employ throughout the AI system's life cycle to manage risk and breaks down these core functions into further subcategory functions. The RMF's companion Playbook suggests action items to help companies implement these core functions.

In addition to NIST's release of the RMF, there has been some recent guidance from other bodies within the federal government. For example, the FTC has suggested it may soon increase its scrutiny of businesses that use AI. Notably, the FTC has recently issued various blog posts warning businesses to avoid unfair or misleading practices, including "Keep your AI claims in check" and "Chatbots, deepfakes, and voice clones: AI deception for sale."

For companies interested in using AI technologies for healthcare-related decision making, the FDA has also announced its intention to regulate many AI-powered clinical decision support tools as devices. More information on those regulations can be found in our prior client alert available here: FDA Issues Final Clinical Decision Support Software Guidance.

While the recent actions from NIST, the FTC, and the FDA detailed above do provide some breadcrumbs related to what future US AI regulation may look like, there is no question that, at the moment, there are few hard and fast rules that US AI companies can look to in order to guide their conduct. It seems inevitable that regulation in some form will eventually emerge, but when that will occur is anybody's guess. Goodwin will continue to follow the developments and publish updates as they become available.

UPDATE: On April 13, 2023, the day after this alert was initially published, reports surfaced that US Senator Chuck Schumer is leading a congressional effort to establish US regulations on AI. Reports indicated that Schumer has developed a framework for regulation that is currently being shared with and refined with the input of industry experts. Few details of the framework were initially available, but reports indicate that the regulations will focus on four guardrails: (1) identification of who trained the algorithm and who its intended audience is, (2) disclosure of its data source, (3) an explanation for how it arrives at its responses, and (4) transparent and strong ethical boundaries. (See: Scoop: Schumer lays groundwork for Congress to regulate AI (axios.com).) There is no clear timeline yet for when this framework may become established law, or if that will occur at all, but Goodwin will continue to track developments and publish alerts as they become available.


What is the Next Big Step in Artificial Intelligence? – Analytics Insight

This article describes the next big step in Artificial Intelligence, which may be instant videos.

Runway is one of several businesses developing artificial intelligence technology that may soon allow anyone to create videos by merely typing a few words into a box on a computer screen. Runway expects to launch its service to a select group of testers this week. The next big step in artificial intelligence seems to be instant videos.

They represent the next step in an industry competition to develop new varieties of artificial intelligence systems that some think might be the next great thing in technology, on par with web browsers or the iPhone. This competition includes industry heavyweights like Microsoft and Google as well as many smaller firms. The development of new video-generation technologies might speed up the work of filmmakers and other digital artists while also providing a rapid and innovative method for spreading false material online, making it even more difficult to determine what is true. The systems are illustrations of generative AI, which can produce text, images, and sounds quickly.

The first video-generation systems were introduced by Google and Meta, the parent company of Facebook, last year, but they were kept from the general public out of concern that they might one day be used to quickly and effectively disseminate false material.

Despite the hazards, Runway's CEO, Cristobal Valenzuela, stated that he thought the technology was too vital to keep in a research lab. He declared that it was among the most astounding technologies created in the last 100 years. "People must use it," he said. Of course, the ability to edit and manipulate video and film is nothing new. It has been a practice among filmmakers for more than a century, and researchers and digital artists have long been using diverse methods to do it.

The videos are only four seconds long, and if you pay close attention, you can see that they are choppy and indistinct. Images can occasionally be strange, twisted, and unsettling. The system is capable of fusing inanimate objects like telephones and balls with living creatures like dogs and cats. But if the correct cue is supplied, it creates videos that demonstrate the direction the technology is headed.

Runway's system learns by examining digital material, in this case pictures, videos, and captions that describe what those pictures show, similar to previous generative AI systems. Researchers are optimistic they can quickly advance and broaden the capabilities of this type of technology by training it on ever-larger volumes of data. Soon, according to experts, they will produce polished short films with dialogue and music.
