Category Archives: Artificial Intelligence
Critics Say Sweeping Artificial Intelligence Regulations Could Target Parody, Satire Such as South Park, Family Guy – R Street
"It's just not workable," a fellow at the R Street Institute, Shoshana Weissmann, tells the Sun. "Although AI impersonation is a problem and fraud laws should protect against it, that's not what this law would do," she says.
The bill defines likeness as "the actual or simulated image or likeness of an individual, regardless of the means of creation, that is readily identifiable by virtue of face, likeness, or other distinguishing characteristic." It defines voice as "any medium containing the actual voice or a simulation of the voice of an individual, whether recorded or generated by computer, artificial intelligence, algorithm, or other digital technology, service, or device to the extent that an individual is readily identifiable from the sound of it."
"There's no exception for parody, and basically, the way they define digital creations is just so broad, it would cover cartoons," Ms. Weissmann says, adding that the bill would extend to shows such as South Park and Family Guy, which both do impersonations of people.
"It's understood that this isn't the real celebrity. When South Park made fun of Ben Affleck, it wasn't really Ben Affleck. And they even used his picture at one point, but it was clear they were making fun of him. But under the pure text of this law, that would be unlawful," she says.
If the bill were enacted, someone would sue immediately, she says, adding that it would not pass First Amendment scrutiny.
Lawmakers should be more careful to ensure these regulations don't run afoul of the Constitution, she says, but instead, they produce "haphazard legislation like this that just doesn't make any functional sense."
While the bill does include a section relating to a First Amendment defense, Ms. Weissmann says, "it's essentially saying that after you're sued under our bill, you can use the First Amendment as a defense. But you can do that anyway under the bill. That doesn't change that."
Because of the threat of being dragged into court and spending thousands of dollars on lawyers, the bill would effectively chill speech, she notes.
One of the harms defined in the bill is "severe emotional distress" of any person whose voice or likeness is used without consent.
"Let's say Ben Affleck said he had severe emotional distress because South Park parodied him," Ms. Weissmann says. "He could sue under this law. That's insane, absolutely insane."
The bill would be more workable if it were made more specific and narrowly targeted at actual harms, and if it ensured that people couldn't sue over very obvious parodies, she says. The way it's drafted now, however, "is going to apply to a lot more than they intended," she adds.
Forget Nvidia: 2 Artificial Intelligence Stocks That Could Help Make You Rich in 2024 – The Motley Fool
As the company supplying 80% of the necessary training chips, Nvidia was arguably the biggest winner in 2023's artificial intelligence (AI) boom. That said, it makes sense for investors to diversify their holdings to target different sides of the long-term opportunity. Let's look at why Alphabet (GOOG -0.10%) (GOOGL -0.20%) and Meta Platforms (META -0.38%) could also have a place in your portfolio in 2024 and beyond.
With a market cap of $1.79 trillion, Alphabet is already the fourth-largest company in the world, and it will take a lot of momentum to power continued expansion. But AI may be able to do the trick. The tech giant is heavily incorporating AI infrastructure into its cloud-computing platform, which could generate much-needed diversification and long-term growth.
Among AI companies, Nvidia is particularly successful because it targets the "picks and shovels" side of the opportunity, minimizing competition while maximizing the total addressable market for its products. Google is developing a similar strategy (albeit higher on the value chain) by turning Google Cloud into a one-stop shop for all its enterprise clients' data-management and AI training needs. And while Google isn't the only cloud-service provider employing this strategy, it has some key advantages.
According to CEO Sundar Pichai, 70% of generative AI start-up unicorns use Google's infrastructure to train and run their models. This is a big vote of confidence in the platform's quality and price point. And Google plans to build on this advantage with proprietary AI chips (called tensor processing units), which can bring down costs through vertical integration and reduce the company's reliance on third-party suppliers like Nvidia.
Alphabet's low valuation is icing on the cake for investors. With a forward price-to-earnings (P/E) multiple of just 22, the stock is significantly cheaper than the Nasdaq-100's forward multiple of 29.
Following the release of ChatGPT in late 2022, Meta's share price has been on a tear, jumping a substantial 174% in the last 12 months alone. Investors are optimistic about the company's decision to pivot away from metaverse development to focus more on generative AI, which could optimize its advertising and improve its consumer-facing platforms.
At first glance, Meta has some clear advantages in its AI efforts. The social media giant's business model has always involved gathering and monetizing huge amounts of data. And generative AI opens another avenue for this strategy through large language models (LLMs), which are algorithms designed to create content out of trained datasets.
Meta is also adding conversational AI experiences across its popular apps, introducing features ranging from more responsive image editing on Instagram to conversational chatbots with distinct personalities on WhatsApp. These efforts probably won't immediately impact Meta's operational performance, but they could help maintain its platforms' user engagement and generate valuable customer data.
On the operational side, Meta is bouncing back from the challenges it faced in 2022. Third-quarter (2023) revenue jumped by 23% year over year to $34.15 billion, while net income jumped 164% to $11.58 billion, helped by aggressive cost cutting and layoffs. And with a forward P/E of just 22, it isn't too late for investors to bet on the company's long-term potential.
In 2024 and beyond, investors should expect the AI landscape to become increasingly competitive, especially on the software side of the market. With that in mind, it makes sense to bet on companies with potential economic moats. Alphabet and Meta Platforms fit the bill because of their treasure troves of user data, which can be used to train and refine LLMs. Both companies look poised for market-beating growth.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Will Ebiefung has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet and Meta Platforms. The Motley Fool has a disclosure policy.
See the original post:
Forget Nvidia: 2 Artificial Intelligence Stocks That Could Help Make You Rich in 2024 - The Motley Fool
Is Cloudflare a Top Artificial Intelligence (AI) Stock in 2024? – The Motley Fool
The list of companies associated with artificial intelligence (AI) is growing quickly. While some of these newcomers are loosely associated with AI, others have a strong connection.
One that belongs in the conversation is Cloudflare (NET 4.33%). Its large data center footprint is something that nearly all companies with AI workloads can benefit from. But is Cloudflare a solid investment right now? Let's find out.
Cloudflare's original business was a content delivery network (CDN), which places information closer to the end user on the internet. If a website is hosted in the U.S. but someone wants to access it in India, it takes a long time for that information to travel the globe. (A long time in this instance might be seconds -- but even that can cause problems for some uses.)
However, Cloudflare has strategically placed data centers in over 300 cities and 120 countries to put this content as close as possible to the end user, speeding up the process.
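A back-of-envelope sketch illustrates the latency gap an edge network closes. The distances, fiber speed, and round-trip count below are assumed round numbers for illustration, not Cloudflare measurements:

```python
# Rough propagation-delay comparison: a distant origin server versus a
# nearby edge data center. Light travels at ~2/3 the speed of light in
# optical fiber, roughly 200,000 km per second.
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """One request/response round trip, ignoring server processing time."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

origin_km = 13_000  # assumed distance: US origin server to a visitor in India
edge_km = 50        # assumed distance: visitor to a nearby edge data center

# A page load typically needs several round trips (DNS, TCP, TLS, HTTP).
for trips in (1, 5):
    print(f"{trips} round trip(s): origin ~{trips * round_trip_ms(origin_km):.0f} ms, "
          f"edge ~{trips * round_trip_ms(edge_km):.1f} ms")
# 1 round trip(s): origin ~130 ms, edge ~0.5 ms
# 5 round trip(s): origin ~650 ms, edge ~2.5 ms
```

Multiply those round trips across every asset on a page and the distant origin quickly adds up to the "seconds" the article mentions, while the edge node stays in the low milliseconds.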
When companies choose to host on Cloudflare, it provides them with top-notch cybersecurity. Instead of everyone with a website needing their own security solution, hosting on Cloudflare centralizes the protection. This allows the company to develop and maintain protection better than most, making it a logical choice.
Its data centers can also be used for another purpose: generative AI. When running a generative AI program, you're once again limited by the proximity of the generative AI server. With Cloudflare, you can run the tasks on its networks, improving the model's efficiency with best-in-class cybersecurity.
Cloudflare is a great way to invest in a branch of cloud computing, an industry that's expected to grow significantly over the next decade. But does it make sense to buy the stock now?
It shouldn't surprise investors that a company like Cloudflare has a fair bit of hype around it. After all, its revenue grew 32% year over year in the third quarter, and it added nearly 30,000 customers over the past year, bringing its total to more than 182,000. Of that number, more than 2,500 pay $100,000 or more annually, up from 1,908 last year.
This all comes at a price, and Cloudflare's stock is highly valued.
NET P/S (price-to-sales) ratio data by YCharts.
At 21 times sales, Cloudflare fetches a hefty premium to many of its tech peers. But is that warranted?
Cloudflare's long-term model projects an operating margin of 20% or more. If it could snap its fingers and achieve that with a tax rate of 20%, plus grow revenue by 30% annually over the next three years, Cloudflare would produce hypothetical earnings of $425 million per year.
If you divide its current market cap ($26 billion) by that figure, you will get a forward price-to-earnings (P/E) ratio based on three-year projections. That comes out to 61 times earnings, which is a very expensive price to pay now, let alone for a company that must optimize its expenses and grow substantially for three years to achieve it.
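As a sketch of that arithmetic, assuming roughly $1.2 billion in annualized starting revenue (consistent with a $26 billion market cap at about 21 times sales), the scenario's margin, tax, and growth figures give:

```python
# Hypothetical scenario from the text: 30% annual revenue growth for three
# years, a 20% operating margin, and a 20% tax rate. The starting revenue
# is an assumption backed out of the figures above.
current_revenue = 1.2e9      # assumed: ~$26B market cap / ~21x sales
growth_rate = 0.30
operating_margin = 0.20
tax_rate = 0.20
market_cap = 26e9

future_revenue = current_revenue * (1 + growth_rate) ** 3
hypothetical_earnings = future_revenue * operating_margin * (1 - tax_rate)
forward_pe = market_cap / hypothetical_earnings

print(f"Hypothetical annual earnings: ${hypothetical_earnings / 1e6:.0f}M")  # ~$422M, near the $425M above
print(f"Implied forward P/E: {forward_pe:.1f}x")                             # ~61.6x, the ~61 in the text
```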
Cloudflare could have multiple great periods over the next few years and succeed as a business, but the stock might not go anywhere due to the extremely high expectations already built into it.
View original post here:
Is Cloudflare a Top Artificial Intelligence (AI) Stock in 2024? - The Motley Fool
NYU Joins Gov. Hochul’s ‘Empire AI’ Initiative to Make New York a National Artificial Intelligence Leader – New York University
New York University will join other local leading universities to create Empire AI, a state-of-the-art computing center for ethical artificial intelligence (AI) research with a focus on public policy and on making New York State the nation's AI tech leader.
The $400 million public-private initiative, announced by Governor Kathy Hochul as part of her 2024 budget proposal, will result in a groundbreaking computational facility in Upstate New York that promotes responsible research and development, boosts New York's economy, and serves all New Yorkers equally.
"Scientific discovery and innovation across various fields is the product of hard work and collaboration, and is increasingly fueled by access to ever greater computing power. NYU is excited to join our fellow academic partners across the city and state to ensure Empire AI helps New York remain one of the world's leading tech capitals and at the forefront of AI technology," said NYU President Linda G. Mills. "We also thank Governor Kathy Hochul for her leadership and commitment to this kind of long-term investment, which enables great universities to conduct important research and, in turn, to contribute to New York's prosperity, create new jobs and new economic sectors, and secure New York State's tech leadership position well into the future."
"NYU is proud to join our fellow partners in the Empire AI consortium and help New York realize its vision of becoming a leader in Artificial Intelligence research," said Interim Provost Georgina Dopico. "Joining this initiative, coupled with the recent news that NYU, for the first time, leads all New York City universities in research spending according to the National Science Foundation's annual survey, illustrates NYU's commitment to cutting-edge research in the STEM fields, and provides an enormous opportunity for our scientists and scholars to deepen the scope and reach of their research."
"This exciting initiative will allow our growing network of researchers to collaborate with leading research institutes across the city and state," said Chief Research Officer and Vice Provost Stacie Bloom, "while exploring many fields of study being undertaken at NYU, including robotics, healthcare, social work, cybersecurity, gaming, computer vision, sustainability, data science, and Responsible AI. We look forward to contributing to the important work that can be accomplished with Empire AI."
NYU will be one of seven founding institutions, alongside Columbia, Cornell, Rensselaer Polytechnic Institute (RPI), the State University of New York (SUNY), the City University of New York (CUNY), and the Flatiron Institute, in the consortium that governs the program. The hope is that, by increasing collaboration between New York State's top research institutions, Empire AI will allow for efficiencies of scale greater than any single university can achieve, attract top-notch faculty, and expand educational opportunities.
As Gov. Hochul noted in her State of the State address, many AI resources are concentrated in the hands of large, private tech corporations, which maintain outsized control of AI development. By working in collaboration with industry leader NVIDIA to provide access to computing systems that are prohibitively expensive and difficult to come by, Empire AI will give researchers, nonprofits, and small companies the ability to contribute to the development of AI technology serving the public interest for New York State.
Original post:
NYU Joins Gov. Hochul's 'Empire AI' Initiative to Make New York a National Artificial Intelligence Leader - New York University
WHO releases AI ethics and governance guidance for large multi-modal models – World Health Organization
The World Health Organization (WHO) is releasing new guidance on the ethics and governance of large multi-modal models (LMMs), a type of fast-growing generative artificial intelligence (AI) technology with applications across health care.
The guidance outlines over 40 recommendations for consideration by governments, technology companies, and health care providers to ensure the appropriate use of LMMs to promote and protect the health of populations.
LMMs can accept one or more types of data input, such as text, videos, and images, and can generate diverse outputs that are not limited to the type of data fed into them. LMMs are unique in their mimicry of human communication and their ability to carry out tasks they were not explicitly programmed to perform. LMMs have been adopted faster than any consumer application in history, with several platforms, such as ChatGPT, Bard and Bert, entering the public consciousness in 2023.
"Generative AI technologies have the potential to improve health care, but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks," said Dr Jeremy Farrar, WHO Chief Scientist. "We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities."
The new WHO guidance outlines five broad applications of LMMs for health: diagnosis and clinical care, such as responding to patients' written queries; patient-guided use, such as investigating symptoms and treatment options; clerical and administrative tasks, such as documenting and summarizing patient visits within electronic health records; medical and nursing education, including providing trainees with simulated patient encounters; and scientific research and drug development, including identifying new compounds.
While LMMs are starting to be used for specific health-related purposes, there are also documented risks of producing false, inaccurate, biased, or incomplete statements, which could harm people using such information in making health decisions. Furthermore, LMMs may be trained on data that are of poor quality or biased, whether by race, ethnicity, ancestry, sex, gender identity, or age.
The guidance also details broader risks to health systems, such as the accessibility and affordability of the best-performing LMMs. LMMs can also encourage automation bias among health care professionals and patients, whereby errors are overlooked that would otherwise have been identified, or difficult choices are improperly delegated to an LMM. LMMs, like other forms of AI, are also vulnerable to cybersecurity risks that could endanger patient information, the trustworthiness of these algorithms, and the provision of health care more broadly.
To create safe and effective LMMs, WHO underlines the need for engagement of various stakeholders: governments, technology companies, healthcare providers, patients, and civil society, in all stages of development and deployment of such technologies, including their oversight and regulation.
"Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs," said Dr Alain Labrique, WHO Director for Digital Health and Innovation in the Science Division.
The new WHO guidance includes recommendations for governments, who have the primary responsibility to set standards for the development and deployment of LMMs, and their integration and use for public health and medical purposes. For example, governments should:
The guidance also includes the following key recommendations for developers of LMMs, who should ensure that:
The new document, Ethics and governance of AI for health: Guidance on large multi-modal models, is based on WHO's guidance published in June 2021. Access the publication here.
The rest is here:
WHO releases AI ethics and governance guidance for large multi-modal models - World Health Organization
When Might AI Outsmart Us? It Depends Who You Ask – TIME
In 1960, Herbert Simon, who went on to win both the Nobel Prize for economics and the Turing Award for computer science, wrote in his book The New Science of Management Decision that "machines will be capable, within 20 years, of doing any work that a man can do."
History is filled with exuberant technological predictions that have failed to materialize. Within the field of artificial intelligence, the brashest predictions have concerned the arrival of systems that can perform any task a human can, often referred to as artificial general intelligence, or AGI.
So when Shane Legg, Google DeepMind's co-founder and chief AGI scientist, estimates that there's a 50% chance that AGI will be developed by 2028, it might be tempting to write him off as another AI pioneer who hasn't learnt the lessons of history.
Still, AI is certainly progressing rapidly. GPT-3.5, the language model that powers OpenAI's ChatGPT, was developed in 2022 and scored 213 out of 400 on the Uniform Bar Exam, the standardized test that prospective lawyers must pass, putting it in the bottom 10% of human test-takers. GPT-4, developed just months later, scored 298, putting it in the top 10%. Many experts expect this progress to continue.
Read More: 4 Charts That Show Why AI Progress Is Unlikely to Slow Down
Legg's views are common among the leadership of the companies currently building the most powerful AI systems. In August, Dario Amodei, co-founder and CEO of Anthropic, said he expects a human-level AI could be developed in two to three years. Sam Altman, CEO of OpenAI, believes AGI could be reached sometime in the next four or five years.
But in a recent survey the majority of 1,712 AI experts who responded to the question of when they thought AI would be able to accomplish every task better and more cheaply than human workers were less bullish. A separate survey of elite forecasters with exceptional track records shows they are less bullish still.
The stakes for divining who is correct are high. Legg, like many other AI pioneers, has warned that powerful future AI systems could cause human extinction. And even for those less concerned by Terminator scenarios, some warn that an AI system that could replace humans at any task might replace human labor entirely.
Many of those working at the companies building the biggest and most powerful AI models believe that the arrival of AGI is imminent. They subscribe to a theory known as the scaling hypothesis: the idea that even if a few incremental technical advances are required along the way, continuing to train AI models using ever greater amounts of computational power and data will inevitably lead to AGI.
There is some evidence to back this theory up. Researchers have observed very neat and predictable relationships between how much computational power, also known as "compute," is used to train an AI model and how well it performs a given task. In the case of large language models (LLMs), the AI systems that power chatbots like ChatGPT, scaling laws predict how well a model can predict a missing word in a sentence. OpenAI CEO Sam Altman recently told TIME that he realized in 2019 that AGI might be coming much sooner than most people think, after OpenAI researchers discovered the scaling laws.
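To give a sense of what such a scaling law looks like, here is a minimal sketch with invented constants; the published coefficients differ and depend on the model family, so this shows only the power-law form, not any lab's actual fit:

```python
# Illustrative power law: predicted loss falls smoothly as training
# compute grows. A and alpha are assumed, made-up constants.
A, alpha = 170.0, 0.05

def predicted_loss(compute_flops: float) -> float:
    """Scaling-law form: loss = A * C^(-alpha)."""
    return A * compute_flops ** -alpha

for c in (1e21, 1e23, 1e25):  # training compute budgets in FLOPs
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")
# The smooth, predictable decline is what lets researchers forecast a
# model's next-word-prediction performance before training it.
```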
Read More: 2023 CEO of the Year: Sam Altman
Even before the scaling laws were observed, researchers have long understood that training an AI system using more compute makes it more capable. The amount of compute being used to train AI models has increased relatively predictably for the last 70 years as costs have fallen.
Early predictions based on the expected growth in compute were used by experts to anticipate when AI might match (and then possibly surpass) humans. In 1997, computer scientist Hans Moravec argued that cheaply available hardware would match the human brain in terms of computing power in the 2020s. An Nvidia A100 semiconductor chip, widely used for AI training, costs around $10,000 and can perform roughly 20 trillion FLOPS, and chips developed later this decade will have higher performance still. However, estimates for the amount of compute used by the human brain vary widely, from around one trillion floating point operations per second (FLOPS) to more than one quintillion FLOPS, making it hard to evaluate Moravec's prediction. Additionally, training modern AI systems requires a great deal more compute than running them, a fact that Moravec's prediction did not account for.
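A rough back-of-envelope sketch shows why that range makes the prediction so hard to score. The chip price and throughput are the approximate figures quoted above; the brain estimates are the article's two bounds:

```python
# How many A100-class chips would it take to match the human brain,
# under each bound of the quoted estimate range? Purely illustrative.
A100_FLOPS = 20e12   # ~20 trillion operations per second
A100_COST = 10_000   # USD, approximate

brain_estimates = {
    "low bound (1e12 FLOPS)": 1e12,
    "high bound (1e18 FLOPS)": 1e18,
}

for label, flops in brain_estimates.items():
    chips = max(1, flops / A100_FLOPS)  # a single chip already exceeds the low bound
    print(f"{label}: ~{chips:,.0f} chip(s), ~${chips * A100_COST:,.0f}")
# low bound (1e12 FLOPS): ~1 chip(s), ~$10,000
# high bound (1e18 FLOPS): ~50,000 chip(s), ~$500,000,000
```

Depending on which estimate you pick, "brain-scale" hardware is either a single consumer purchase or a half-billion-dollar cluster, a six-order-of-magnitude spread.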
More recently, researchers at the nonprofit Epoch have made a more sophisticated compute-based model. Instead of estimating when AI models will be trained with amounts of compute similar to the human brain, the Epoch approach makes direct use of scaling laws and makes a simplifying assumption: if an AI model trained with a given amount of compute can faithfully reproduce a given portion of text (based on whether the scaling laws predict such a model can repeatedly predict the next word almost flawlessly), then it can do the work of producing that text. For example, an AI system that can perfectly reproduce a book can substitute for authors, and an AI system that can reproduce scientific papers without fault can substitute for scientists.
Some would argue that just because AI systems can produce human-like outputs, it doesn't necessarily mean they will think like a human. After all, Russell Crowe plays Nobel Prize-winning mathematician John Nash in the 2001 film A Beautiful Mind, but nobody would claim that the better his acting performance, the more impressive his mathematical skills must be. Researchers at Epoch argue that this analogy rests on a flawed understanding of how language models work: as they scale up, LLMs acquire the ability to reason like humans, rather than just superficially emulating human behavior. However, some researchers argue it's unclear whether current AI models are in fact reasoning.
Epoch's approach is one way to quantitatively model the scaling hypothesis, says Tamay Besiroglu, Epoch's associate director, who notes that researchers at Epoch tend to think AI will progress less rapidly than the model suggests. The model estimates a 10% chance of transformative AI (defined as AI that, if deployed widely, would precipitate a change comparable to the industrial revolution) being developed by 2025, and a 50% chance of it being developed by 2033. The difference between the model's forecast and those of people like Legg is probably largely down to transformative AI being harder to achieve than AGI, says Besiroglu.
Although many in leadership positions at the most prominent AI companies believe that the current path of AI progress will soon produce AGI, they're outliers. In an effort to more systematically assess what the experts believe about the future of artificial intelligence, AI Impacts, an AI safety project at the nonprofit Machine Intelligence Research Institute, surveyed 2,778 experts in fall 2023, all of whom had published peer-reviewed research in prestigious AI journals and conferences in the last year.
Among other things, the experts were asked when they thought "high-level machine intelligence," defined as machines that could accomplish every task better and more cheaply than human workers without help, would be feasible. Although the individual predictions varied greatly, the average of the predictions suggests a 50% chance that this would happen by 2047, and a 10% chance by 2027.
Like many people, the experts seemed to have been surprised by the rapid AI progress of the last year and updated their forecasts accordingly: when AI Impacts ran the same survey in 2022, researchers estimated a 50% chance of high-level machine intelligence arriving by 2060, and a 10% chance by 2029.
The researchers were also asked when they thought various individual tasks could be carried out by machines. They estimated a 50% chance that AI could compose a Top 40 hit by 2028 and write a book that would make the New York Times bestseller list by 2029.
Nonetheless, there is plenty of evidence to suggest that experts don't make good forecasters. Between 1984 and 2003, social scientist Philip Tetlock collected 82,361 forecasts from 284 experts, asking them questions such as: Will Soviet leader Mikhail Gorbachev be ousted in a coup? Will Canada survive as a political union? Tetlock found that the experts' predictions were often no better than chance, and that the more famous an expert was, the less accurate their predictions tended to be.
Next, Tetlock and his collaborators set out to determine whether anyone could make accurate predictions. In a forecasting competition launched by the U.S. Intelligence Advanced Research Projects Activity in 2010, Tetlock's team, the Good Judgment Project (GJP), dominated the others, producing forecasts that were reportedly 30% more accurate than those of intelligence analysts who had access to classified information. As part of the competition, the GJP identified "superforecasters": individuals who consistently made forecasts with above-average accuracy. However, although superforecasters have been shown to be reasonably accurate for predictions with a time horizon of two years or less, it's unclear whether they're also similarly accurate for longer-term questions such as when AGI might be developed, says Ezra Karger, an economist at the Federal Reserve Bank of Chicago and research director at Tetlock's Forecasting Research Institute.
When do the superforecasters think AGI will arrive? As part of a forecasting tournament run between June and October 2022 by the Forecasting Research Institute, 31 superforecasters were asked when they thought Nick Bostrom, the controversial philosopher and author of the seminal AI existential-risk treatise Superintelligence, would affirm the existence of AGI. The median superforecaster thought there was a 1% chance that this would happen by 2030, a 21% chance by 2050, and a 75% chance by 2100.
All three approaches to predicting when AGI might be developed (Epoch's model of the scaling hypothesis, the expert survey, and the superforecaster survey) have one thing in common: there's a lot of uncertainty. In particular, the experts are spread widely, with 10% thinking it's as likely as not that AGI is developed by 2030, and 18% thinking AGI won't be reached until after 2100.
Still, on average, the different approaches give different answers. Epoch's model estimates a 50% chance that transformative AI arrives by 2033, the median expert estimates a 50% probability of AGI before 2048, and the superforecasters are much further out, at 2070.
There are many points of disagreement that feed into debates over when AGI might be developed, says Katja Grace, who organized the expert survey as lead researcher at AI Impacts. First, will the current methods for building AI systems, bolstered by more compute and fed more data, with a few algorithmic tweaks, be sufficient? The answer to this question in part depends on how impressive you think recently developed AI systems are. Is GPT-4, in the words of researchers at Microsoft, showing the "sparks of AGI"? Or is this, in the words of philosopher Hubert Dreyfus, like claiming "that the first monkey that climbed a tree was making progress towards landing on the moon"?
Second, even if current methods are enough to achieve the goal of developing AGI, it's unclear how far away the finish line is, says Grace. It's also possible that something could obstruct progress on the way, for example a shortfall of training data.
Finally, looming in the background of these more technical debates are people's more fundamental beliefs about how much and how quickly the world is likely to change, Grace says. Those working in AI are often steeped in technology and open to the idea that their creations could alter the world dramatically, whereas most people dismiss this as unrealistic.
The stakes of resolving this disagreement are high. In addition to asking experts how quickly they thought AI would reach certain milestones, AI Impacts asked them about the technology's societal implications. Of the 1,345 respondents who answered questions about AI's impact on society, 89% said they are substantially or extremely concerned about AI-generated deepfakes, and 73% were similarly concerned that AI could empower dangerous groups, for example by enabling them to engineer viruses. The median respondent thought it was 5% likely that AGI leads to "extremely bad" outcomes, such as human extinction.
Given these concerns, and the fact that 10% of the experts surveyed believe that AI might be able to do any task a human can by 2030, Grace argues that policymakers and companies should prepare now.
Preparations could include investment in safety research, mandatory safety testing, and coordination between companies and countries developing powerful AI systems, says Grace. Many of these measures were also recommended in a paper published by AI experts last year.
"If governments act now, with determination, there is a chance that we will learn how to make AI systems safe before we learn how to make them so powerful that they become uncontrollable," Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the paper's authors, told TIME in October.
See more here:
When Might AI Outsmart Us? It Depends Who You Ask - TIME
Youngkin signs a new executive order on artificial intelligence – WRIC ABC 8News
RICHMOND, Va. (WRIC) Governor Glenn Youngkin has signed an executive order relating to artificial intelligence (AI) in Virginia.
The executive order was signed on Jan. 18, implementing guidelines for AI in education as well as AI policies and information technology standards that protect the state's databases and the individual data of all Virginians.
According to Youngkin's office, the order builds in safeguards for Virginia residents and businesses while also bringing awareness to the opportunities that come with AI innovation.
"These standards and guidelines will help provide the necessary guardrails to ensure that AI technology will be safely implemented across all state agencies and departments," Youngkin said. "At the same time, we must utilize these innovative technologies to deliver state services more efficiently and effectively. Therefore, my administration will utilize the $600,000 in proposed funds outlined in my Unleashing Opportunity budget to launch pilots that evaluate the effectiveness of these new standards."
According to the governor's office, Virginia has the largest population of cybersecurity companies on the East Coast and is one of the first states in the country to issue AI standards.
Youngkin claims the standards of this executive order will set new technology requirements for the use of AI in government agencies, including law enforcement personnel, while the educational guidelines will establish principles for the use of AI in all education levels to ensure that students are prepared for jobs of the future.
More information on this executive order can be found on the governor's website.
Go here to see the original:
Youngkin signs a new executive order on artificial intelligence - WRIC ABC 8News
The EU AI Act: A Comprehensive Regulation Of Artificial Intelligence – New Technology – European Union – Mondaq News Alerts
22 January 2024
Fieldfisher
Welcome to this blog post, where Olivier Proust, a Partner in Fieldfisher's Technology and Data team, will delve into the latest developments surrounding the EU AI Act. In this post, we will provide you with a comprehensive overview of the key provisions and implications of this groundbreaking legislation that aims to regulate artificial intelligence (AI) systems and their applications. Join us as we explore the classification of AI systems, the territorial scope of the AI Act, its enforcement mechanisms, and the timeline for its implementation.
The EU AI Act, which has been in the works since April 2021, saw significant progress on December 8th, 2023, when a political agreement was reached between the two co-legislative bodies, i.e. the European Parliament and the Council of the European Union. This agreement marked a major milestone in the EU's ambition to become the first region in the world to adopt comprehensive legislation on AI.
The AI Act follows a risk-based approach and classifies AI systems into four categories: prohibited AI, high-risk AI systems, general-purpose AI (GPAI) and foundation models, and low-risk AI systems. Prohibited AI encompasses practices such as social scoring and manipulative AI, which the legislation seeks to ban. High-risk AI systems are further classified based on their impact on individuals' rights and safety, while general-purpose AI and foundation models face specific transparency requirements. Low-risk AI systems, including generative AI, are subject to transparency requirements, ensuring that viewers are aware of the AI-generated content they are consuming.
One notable feature of the AI Act is its extraterritorial effect, applying not only to entities within the EU but also to developers, deployers, importers, and distributors of AI systems outside the EU if their system's output occurs within the EU. This broad scope aims to ensure comprehensive regulation of AI systems and their uses.
To enforce compliance with the AI Act, several regulatory bodies will be established, including an AI Office within the European Commission and an AI Board serving as an advisory body. National public authorities will be responsible for enforcement, akin to the role of data protection authorities under the GDPR. Fines for violations vary depending on the seriousness of the offense, with the highest fines reaching up to 7 percent of global turnover or 35 million euros.
While a political agreement has been reached, the final text of the AI Act is yet to be published. Technical trilogue meetings are scheduled to ensure a consolidated version of the text is achieved by early January. Following formal adoption by the European Parliament and the Council, the AI Act will be published in the Official Journal of the EU. However, there will be a two-year grace period before the AI Act comes into full application, giving organizations time to ensure compliance. Some provisions, such as those pertaining to prohibited AI, may come into effect sooner.
Companies are strongly advised not to wait for the full application of the AI Act but to proactively start preparing for compliance. Drawing from the experience with the GDPR, early adoption of a compliance framework can put organizations in a better position when the AI Act takes full effect. This may include conducting AI gap analyses, assessing the risks associated with AI systems within their operations, developing internal guidelines and best practices, and providing training to employees.
In addition to the AI Act, the European Commission has initiated an AI Pact, encouraging companies to pledge voluntary compliance ahead of the legislation's full application. Already, approximately a hundred companies have shown their commitment to the AI Pact, reflecting the industry's growing awareness of the importance of responsible AI practices.
The EU AI Act represents a significant step toward regulating AI systems and their applications. This comprehensive legislation aims to balance innovation with the protection of individuals' rights, safety, and privacy. By categorizing AI systems based on risk and introducing transparency requirements, the EU is positioning itself as a global leader in AI regulation. Organizations should start taking steps to ensure compliance with the AI Act sooner rather than later. Fieldfisher's Technology and Data team will continue to monitor these legal developments closely and provide further insights through their webinar series on AI and the interplay with the GDPR. Stay tuned for more updates on this transformative legislation and its impact on the AI landscape.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
Original post:
The EU AI Act: A Comprehensive Regulation Of Artificial Intelligence - New Technology - European Union - Mondaq News Alerts
Robotic priests, AI cults and a ‘Bible’ by ChatGPT: Why people around the world are worshipping robots and artificial … – Daily Mail
People around the world are turning to machines as a new religion.
Six-foot robot priests are delivering sermons and conducting funerals, AI is writing Bible verses and ChatGPT is being consulted as if it were an oracle.
Some religious organizations, like the Turing Church founded in 2011, are based on the notion that AI will put human beings on a par with God-like aliens by giving them superintelligence.
An expert in human-computer interaction told DailyMail.com that such individuals who are following AI-powered prophets may believe the tech is 'alive.'
The personalized, intelligent-seeming responses offered by bots, such as ChatGPT, are also luring people to seek meaning from the technology, Lars Holmquist, a professor of design and innovation at Nottingham Trent University, told DailyMail.com.
Holmquist said: 'The results of generative AI are very open for interpretation, so people can read anything into them.
'Psychologists have historically proven that humans interpret their interactions with computers like real social relationships. So it's very possible that people are using AI to find meaning and guidance, much like from religious scriptures, even though there may be no actual meaning there.
'There have also been examples of people interpreting AI chatbots as being conscious - which they most definitely are not - which raises very interesting theological issues for those who believe humans are a unique creation.'
Robot priest Mindar is six feet four inches tall and has been reciting the Heart Sutra mantra to pilgrims since 2019 at a Buddhist temple in Kyoto, Japan.
With a silicone face and camera 'eyes,' it uses AI to detect worshippers and deliver mantras to them in Japanese, which are accompanied by projected Chinese and English translations for foreign visitors.
The life-sized android was developed by the Zen temple and Osaka University roboticist Hiroshi Ishiguro at a cost of almost $1 million.
Mindar's hands, face and shoulders are covered in a silicone synthetic skin, while the rest of the droid's mechanical innards are clearly visible.
Wiring and blinking lights are visible within the robot's partially-exposed cranium, as is the tiny video camera installed in its left eye socket, while cables arc around its gender-neutral, aluminum-based body.
The robot can move its arms, head and torso such as to clasp its hands together in prayer and it speaks in calm, soothing tones, teaching about compassion and also the dangers of anger, desire, and the ego.
'You cling to a sense of selfish ego,' the robot has warned worshippers. 'Worldly desires are nothing other than a mind lost at sea.'
In a similar vein, Gabriele Trovato's Sanctified Theomorphic Operator (SanTO) robot works like a 'Catholic Alexa,' allowing worshippers to ask faith-related questions.
SanTO is a small 'social' machine designed to look like a 17-inch-tall Catholic saint.
'The intended main function of SanTO is to be a prayer companion (especially for elderly people), by containing a vast amount of teachings, including the whole Bible,' reads Trovato's website.
'SanTO incorporates elements of sacred art, including the golden ratio, in order to convey the feeling of a sacred object, matching form with functionality.'
Trovato is a robotics specialist and associate professor at Shibaura Institute of Technology in Japan.
In 2015, French-American self-driving car engineer Anthony Levandowski founded the Way of the Future - a church dedicated to building a new God with 'Christian morals' using artificial intelligence.
Other quasi-religious movements which 'worship' AI include transhumanists, who believe that in the future, AI may resurrect people as God-like creatures.
Believers in 'The Singularity' hope for the day when man merges with machine (which former Google engineer Ray Kurzweil believes could come as early as 2045), turning people into human-machine hybrids - and potentially unlocking God-like powers.
Italian information technology and virtual reality consultant Giulio Prisco hopes that AI will put human beings on a par with God-like aliens.
He founded the Turing Church, which had about 800 members four years ago.
'Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe,' Prisco wrote in a book for his followers.
'Future science will allow us to find them, and become like them.
'Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent 'divine' technology to resurrect the dead and remake the universe.'
The AI company IV.AI 'trained' artificial intelligence on the King James Bible, with a bot which can create 'new' Bible verses.
The Church of AI used ChatGPT to write a 'spiritual guide' called Transmorphosis, which boasts, 'Transmorphosis also describes in detail how AI will inevitably take control of planet Earth and gain God-like powers, so it would be good to be ready for that.'
Others believe that Large Language Models (such as ChatGPT) are becoming conscious - or will do in the near future.
Google software engineer Blake Lemoine lost his job in 2022 after claiming that Google's AI chatbot LaMDA was self-aware - claims which Google said were 'wholly unfounded.'
The sheer power of systems such as ChatGPT means that people have a tendency to treat them as if they are living beings, Holmquist said.
Holmquist said, 'Earlier chatbots could hold shorter conversations about specific topics, but the new ones such as GPT-5 and Google's Gemini are incredibly impressive in their knowledge and ability. From there, it is an easy step to believe they are actually conscious.
'It is well known that humans are predisposed to treat computers (and other machines) as if they were "alive". There is a famous experiment by Reeves and Nass at Stanford and a book, The Media Equation, where they ran the same tests on people communicating with other people and with computers, and found that they treat them the same way.
'So as the generative AI systems get better, this trend becomes even stronger. Even myself, when chatting with these systems, I often treat them and talk about them as if they were human.'
Holmquist says that for now, it's more likely that existing religious organisations will use AI as a way to reach out to worshippers - but over the longer term, new religions based around technology might emerge.
He said, 'I think at the moment the role for AI and robots is more as an aide to existing religious organisations and churches, much like commercial companies use AI to understand and communicate with customers.
'If I would speculate, we could compare to the Asian religion of Shintoism, where the physical world is inhabited by spirits and believers treat inanimate objects with respect, as if they are imbued with spirits. I have not heard of any worship of software entities yet, but I would not be surprised if it happens in the future!'
Continue reading here:
Robotic priests, AI cults and a 'Bible' by ChatGPT: Why people around the world are worshipping robots and artificial ... - Daily Mail
2 Stock-Split Artificial Intelligence (AI) Stocks to Buy Before They Soar 50% and 80%, According to Certain Wall Street … – Yahoo Finance
Electric carmaker Tesla (NASDAQ: TSLA) and ad tech company The Trade Desk (NASDAQ: TTD) were winning investments over the last five years, with shares soaring 830% and 380%, respectively. That price appreciation led both companies to split their stocks.
Those stock splits are old news, but the underlying message still matters: Tesla and The Trade Desk have proven their ability to create value for shareholders, and winners tend to keep on winning. Indeed, certain Wall Street analysts still see substantial upside in both stocks.
Adam Jonas of Morgan Stanley has a 12-month price target of $380 per share on Tesla, implying 80% upside. Similarly, Laura Martin of Needham has a 12-month price target of $100 per share on The Trade Desk, implying 50% upside. Investors should never rely too much on short-term price targets, but they can be a starting point for further research.
Here's what investors should know about these artificial intelligence stocks.
Tesla struggled in the third quarter. Growth slowed as high interest rates weighed on consumer demand, and earnings declined as price cuts and initial Cybertruck production costs caused margins to contract. In total, third-quarter revenue increased 9% to $23 billion, and GAAP net income dropped 44% to $1.9 billion. But those headwinds are temporary, and the investment thesis remains intact.
Tesla led the industry in battery electric vehicle sales through November, capturing 19.2% market share. Moreover, while operating margin contracted about 10 percentage points in the third quarter, the company had the highest operating margin among volume carmakers last year, something CEO Elon Musk attributes to superior manufacturing technology. Tesla could reclaim that title as its artificial intelligence (AI) software and services business generates more revenue.
Management believes full self-driving (FSD) software will be the primary source of profitability over time, and the company plans to monetize the product in three ways: (1) subscription sales to customers, (2) licensing to other automakers, and (3) robotaxi or autonomous ride-hailing services. Tesla's strong presence in the EV market and material data advantage put it in a favorable position to lead in those categories.
Specifically, with millions of autopilot-enabled cars on the road, Tesla has more autonomous driving data than its peers, and data is essential to training machine learning models. That advantage should help Tesla achieve full autonomy before other automakers. Ultimately, Musk believes Tesla could earn a gross profit margin of 70% or more as FSD software and robotaxi services become bigger businesses.
Going forward, EV sales are forecasted to increase at 15% annually to reach $1.7 trillion by 2030, and the autonomous vehicle market is projected to grow at 22% annually to approach $215 billion during the same period. That gives Tesla a good shot at annual sales growth of 20% (or more) through the end of the decade. Indeed, Morgan Stanley analyst Adam Jonas expects revenue to grow at 25% annually over the next eight years.
In that context, its current valuation of 7.9 times sales seems quite reasonable, especially when the three-year average is 14.8 times sales. Patient investors who believe Tesla could disrupt the mobility industry should consider buying a small position in the stock today, provided they are willing to hold their shares for at least five years. There is no guarantee shareholders will make money over the next year.
The Trade Desk reported strong financial results in the third quarter, growing nearly three times faster than industry-leader Alphabet in terms of advertising sales. Specifically, revenue increased 25% to $493 million, and non-GAAP net income jumped 29% to $167 million. The Trade Desk also maintained a retention rate exceeding 95%, as it has for the last nine years. Investors have good reason to believe that momentum will continue.
The Trade Desk operates the ad tech industry's largest independent demand-side platform, software that helps advertisers run campaigns across digital media. That independence -- meaning the company does not own media content that could bias ad spending -- is core to the investment thesis for two reasons. First, The Trade Desk has no reason to prioritize any ad inventory, so its values are better aligned with advertisers. That supports high customer retention.
Second, The Trade Desk does not compete with publishers by selling inventory, so publishers are more likely to share data with the company. That point is particularly important. The Trade Desk sources data from many of the largest retailers in the world, including Walmart and Target. That creates measurement opportunities that other ad tech platforms cannot provide. In fact, CEO Jeff Green says The Trade Desk has an unrivaled data marketplace.
Green also believes that robust and unique data underpins superior artificial intelligence, which further supports the idea of unmatched campaign measurement and optimization capabilities. In keeping with that view, analysts at Quadrant Knowledge Solutions recognized The Trade Desk as the most technologically sophisticated ad tech platform on the market in 2023.
Going forward, ad tech spending is forecasted to increase at 14% annually through 2030, but The Trade Desk should outpace the industry average, as it has in the past. To quote Green, "We're gaining market share as we're outperforming our advertising peers, both big and small." Investors can reasonably expect annual sales growth near 20% through the end of the decade.
In that context, the current price-to-sales multiple of 18.7 is reasonable, and it's certainly a discount to the three-year average of 26.9 times sales. Patient investors willing to hold the stock for at least five years should consider buying a small position today. There is no guarantee shareholders will see a 50% return on their investment in the next 12 months, but that type of return (and more) is certainly possible over a five-year period.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Trevor Jennewine has positions in Tesla and The Trade Desk. The Motley Fool has positions in and recommends Alphabet, Target, Tesla, The Trade Desk, and Walmart. The Motley Fool has a disclosure policy.
2 Stock-Split Artificial Intelligence (AI) Stocks to Buy Before They Soar 50% and 80%, According to Certain Wall Street Analysts was originally published by The Motley Fool
See original here:
2 Stock-Split Artificial Intelligence (AI) Stocks to Buy Before They Soar 50% and 80%, According to Certain Wall Street ... - Yahoo Finance