Category Archives: Artificial Intelligence

Artificial Intelligence and Nuclear Stability – War On The Rocks

Policymakers around the world are grappling with the new opportunities and dangers that artificial intelligence presents. Of all the effects that AI could have on the world, among the most consequential would be its integration into the command and control of nuclear weapons. Improperly used, AI in nuclear operations could have world-ending effects. Properly implemented, it could reduce nuclear risk by improving early warning and detection and enhancing the resilience of second-strike capabilities, both of which would strengthen deterrence. To take full advantage of these benefits, systems must take into account the strengths and limitations of humans and machines. Successful human-machine joint cognitive systems will combine the precision and speed of automation with the flexibility of human judgment, and do so in a way that avoids automation bias and the surrender of human judgment to machines. Because AI implementation is still at an early stage, the United States has the potential to make the world safer by more clearly outlining its policies, pushing for broad international agreement, and acting as a normative trendsetter.

The United States has been extremely transparent and forward-leaning in establishing and communicating its policies on military AI and autonomous systems, publishing its policy on autonomy in weapons in 2012, adopting ethical principles for military AI in 2020, and updating its policy on autonomy in weapons in 2023. The Department of Defense stated formally and unequivocally in the 2022 Nuclear Posture Review that it will always maintain a human in the loop for nuclear weapons employment. In November 2023, over 40 nations joined the United States in endorsing a political declaration on responsible military use of AI. Endorsing states included not just U.S. allies but also nations in Africa, Southeast Asia, and Latin America.

Building on this success, the United States should push for international agreements with other nuclear powers to mitigate the risks of integrating AI into nuclear systems or placing nuclear weapons onboard uncrewed vehicles. The United Kingdom and France released a joint statement with the United States in 2022 agreeing on the need to maintain human control of nuclear launches. Ideally, this could represent the beginning of a commitment by the permanent members of the United Nations Security Council if Russia and China could be convinced to join this principle. Even if they are not willing to agree, the United States should further mature its own policies to address critical gaps and work with other nuclear-armed states to strengthen their commitments as an interim measure and as a way to build international consensus on the issue.

The Dangers of Automation

As militaries increasingly adopt AI and automation, there is an urgent need to clarify how these technologies should be used in nuclear operations. Absent formal agreements, states risk an incremental trend of creeping automation that could undermine nuclear stability. While policymakers are understandably reluctant to adopt restrictions on emerging technologies lest they give up a valuable future capability, U.S. officials should not be complacent in assuming other states will approach AI and automation in nuclear operations responsibly. Examples such as Russia's Perimeter dead hand system and its Poseidon autonomous nuclear-armed underwater drone demonstrate that other nations might see these risks differently than the United States and might be willing to take risks that U.S. policymakers would find unacceptable.

Existing systems, such as Russia's Perimeter, highlight the risks of states integrating automation into nuclear systems. Perimeter is reportedly a system created by the Soviet Union in the 1980s to act as a failsafe in case Soviet leadership was destroyed in a decapitation strike. Perimeter reportedly has a network of sensors to determine if a nuclear attack has occurred. If these sensors are triggered while Perimeter is activated, the system would wait a predetermined period of time for a signal from senior military commanders. If there is no signal from headquarters, presumably because Soviet/Russian leadership had been wiped out, then Perimeter would bypass the normal chain of command and pass nuclear launch authority to a relatively junior officer on duty. Senior Russian officials have stated the system is still functioning, noting in 2011 that the system was combat ready and in 2018 that it had been improved.

The system was designed to reduce the burden on Soviet leaders of hastily making a nuclear decision under time pressure and with incomplete information. In theory, Soviet/Russian leaders could take more time to deliberate knowing that there is a failsafe guaranteeing retaliation if the United States succeeded in a decapitation strike. The cost, however, is a system that risks easing pathways to nuclear annihilation in the event of an accident.

Allowing autonomous systems to participate in nuclear launch decisions risks degrading stability and increasing the dangers of nuclear accidents. The Stanislav Petrov incident is an illustrative example of the dangers of automation in nuclear decision-making. In 1983, a Soviet early warning system indicated that the United States had launched several intercontinental ballistic missiles. Lieutenant Colonel Stanislav Petrov, the duty officer at the time, suspected that the system was malfunctioning because the number of missiles launched was suspiciously low and the missiles were not picked up by early warning radars. Petrov reported it (correctly) as a malfunction instead of an attack. AI and autonomous systems often lack the contextual understanding that humans have and that Petrov used to recognize that the reported missile launch was a false alarm. Without human judgment at critical stages of nuclear operations, automated systems could make mistakes or elevate false alarms, heightening nuclear risk.

Moreover, merely having humans in the loop will not be enough to ensure effective human decision-making. Human operators frequently fall victim to automation bias, a condition in which humans overtrust automation and surrender their judgment to machines. Accidents with self-driving cars demonstrate the dangers of humans overtrusting automation, and military personnel are not immune to this phenomenon. To ensure humans remain cognitively engaged in their decision-making, militaries will need to take into account not only the automation itself but also human psychology and human-machine interfaces.

More broadly, when designing human-machine systems, it is essential to consciously determine the appropriate roles for humans and machines. Machines are often better at precision and speed, while humans are often better at understanding the broader context and applying judgment. Too often, human operators are left to fill in the gaps for what automation can't do, acting as backups or failsafes for the edge cases that autonomous systems can't handle. But this model often fails to take into account the realities of human psychology. Even if human operators don't fall victim to automation bias, it is not realistic to assume that a person can sit passively watching a machine perform a task for hours on end, whether a self-driving car or a military weapon system, and then suddenly and correctly identify a problem when the automation fails and leap into action to take control. Human psychology doesn't work that way. And tragic accidents with complex, highly automated systems, such as the Air France 447 crash in 2009 and the 737 MAX crashes in 2018 and 2019, demonstrate the importance of taking into account the dynamic interplay between automation and human operators.

The U.S. military has also suffered tragic accidents with automated systems, even when humans are in the loop. In 2003, U.S. Army Patriot air and missile defense systems shot down two friendly aircraft during the opening phases of the Iraq war. Humans were in the loop for both incidents. Yet a complex mix of human and technical failures meant that human operators did not fully understand the complex, highly automated systems they were in charge of and were not effectively in control.

The military will need to establish guidance to inform system design, operator training, doctrine, and operational procedures to ensure that humans in the loop aren't merely unthinking cogs in a machine but actually exercise human judgment. Issuing this concrete guidance for weapons developers and operators is most critical in the nuclear domain, where the consequences of an accident could be grave.

Clarifying Department of Defense Guidance

Recent policies and statements on the role of autonomy and AI in nuclear operations are an important first step in establishing this much-needed guidance, but additional clarification is needed. The 2022 Nuclear Posture Review states: "In all cases, the United States will maintain a human in the loop for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." The United Kingdom adopted a similar policy in 2022, stating in its Defence Artificial Intelligence Strategy: "We will ensure that, regardless of any use of AI in our strategic systems, human political control of our nuclear weapons is maintained at all times."

As the first official policies on AI in nuclear command and control, these are landmark statements. Senior U.S. military officers had previously emphasized the importance of human control over nuclear weapons, including statements by Lt. Gen. Jack Shanahan, then-director of the Joint Artificial Intelligence Center in 2019. Official policy statements are more significant, however, in signaling to audiences both internal and external to the military the importance of keeping humans firmly in charge of all nuclear use decisions. These high-level statements nevertheless leave many open questions about implementation.

The next step for the Department of Defense is to translate what the high-level "human in the loop" principle means for nuclear systems, doctrine, and training. Key questions include: Which actions are critical to informing and executing decisions by the president? Do those only consist of actions immediately surrounding the president, or do they also include actions further down the chain of command before and after a presidential decision? For example, would it be acceptable for a human to deliver an algorithm-based recommendation to the president to carry out a nuclear attack? Or does a human need to be involved in understanding the data and rendering their own human judgment?

The U.S. military already uses AI to process information, such as satellite images and drone video feeds. Presumably, AI would also be used to support intelligence analysis that could support decisions about nuclear use. Under what circumstances is AI appropriate and beneficial to nuclear stability? Are some applications and ways of using AI more valuable than others?

When AI is used, what safeguards should be put in place to guard against mistakes, malfunctions, or spoofing of AI systems? For example, the United States currently employs a "dual phenomenology" mechanism to ensure that a potential missile attack is confirmed by two independent sensing methods, such as satellites and ground-based radars. Should the United States adopt a "dual algorithm" approach to any use of AI in nuclear operations, ensuring that there are two independent AI systems trained on different data sets with different algorithms as a safeguard against spoofing attacks or unreliable AI systems?
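To make that second idea concrete, below is a minimal illustrative sketch, in Python, of what a dual-algorithm confirmation gate could look like: an event is surfaced for human review only when two independently trained detectors, fed by different sensing methods, both flag it above a confidence threshold. The function names, thresholds, and data sources are hypothetical assumptions for illustration, not a description of any actual early warning architecture.

```python
# Illustrative sketch only: a "dual algorithm" confirmation gate in which an
# event is escalated to human review only when two independently trained
# detectors, each fed by a different sensing method, both flag it.
# All names, thresholds, and inputs here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Detection:
    source: str        # sensing method, e.g. "satellite_ir" or "ground_radar"
    confidence: float  # model-reported confidence in [0, 1]

def dual_algorithm_gate(
    detector_a: Callable[[bytes], Detection],
    detector_b: Callable[[bytes], Detection],
    data_a: bytes,
    data_b: bytes,
    threshold: float = 0.9,
) -> bool:
    """Return True only if both independent detectors exceed the threshold.

    The gate decides only whether an event is surfaced for human review;
    a human operator still makes the final judgment.
    """
    a = detector_a(data_a)
    b = detector_b(data_b)
    if a.source == b.source:
        # Independence of sensing methods is the point of the safeguard.
        raise ValueError("detectors must use independent sensing methods")
    return a.confidence >= threshold and b.confidence >= threshold
```

Requiring agreement between models trained on different data and fed by different sensors reduces the chance that a single spoofed input or model failure produces a false alarm, mirroring the intent of the existing dual-phenomenology requirement.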

When AI systems are used to process information, how should that information be presented to human operators? For example, if the military used an algorithm trained to detect signs of a missile being fueled, that information could be interpreted differently by humans if the AI system reported "fueling" versus "preparing to launch." "Fueling" is a more precise and accurate description of what the AI system is actually detecting and might lead a human analyst to seek more information, whereas "preparing to launch" is a conclusion that might or might not be appropriate depending on the broader context.

When algorithmic recommendation systems are used, how much of the underlying data should humans have to directly review? Is it sufficient for human operators to only see the algorithm's conclusion, or should they also have access to the raw data that supports the algorithm's recommendation?

Finally, what degree of engagement is expected from a human in the loop? Is the human merely there as a failsafe in case the AI malfunctions? Or must the human be engaged in the process of analyzing information, generating courses of actions, and making recommendations? Are some of these steps more important than others for human involvement?

These are critical questions that the United States will need to address as it seeks to harness the benefits of AI in nuclear operations while meeting its human-in-the-loop policy. The sooner the Department of Defense can clarify answers to these questions, the faster it can accelerate AI adoption in ways that are trustworthy and meet the necessary reliability standards for nuclear operations. Nor does clarifying these questions overly constrain how the United States approaches AI. Guidance can always be changed over time as the technology evolves. But a lack of clear guidance risks forgoing valuable opportunities to use AI or, even worse, adopting AI in ways that might undermine nuclear surety and deterrence.

Dead Hand Systems

In clarifying its human-in-the-loop policy, the United States should make a firm commitment to reject dead hand nuclear launch systems, or any system with a standing order to launch that incorporates algorithmic components. Dead hand systems akin to Russia's Perimeter would appear to be prohibited by current Department of Defense policy. However, the United States should explicitly state that it will not build such systems, given their risk.

Despite their danger, some U.S. analysts have suggested that the United States should adopt a dead hand system to respond to emerging technologies such as AI, hypersonics, and advanced cruise missiles. There are safer methods for responding to these threats, however. Rather than gambling humanity's future on an algorithm, the United States should strengthen its second-strike deterrent in response to new threats.

Some members of the U.S. Congress have even expressed a desire to write this requirement into law. In April 2023, a bipartisan group of representatives introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act, which would prohibit funding for any system that launches nuclear weapons without meaningful human control. There is precedent for a legal requirement to maintain a human in the loop for strategic systems. In the 1980s, during development of the Strategic Defense Initiative (also known as "Star Wars"), Congress passed a law requiring an "affirmative human decision at an appropriate level of authority" for strategic missile defense systems. This legislation could serve as a blueprint for a similar legislative requirement for nuclear use. One benefit of a legal requirement is that such an important policy could not be overturned, without congressional authorization, by a future administration or Pentagon leadership that is more risk-accepting.

Nuclear Weapons and Uncrewed Vehicles

The United States should similarly clarify its policy for nuclear weapons on uncrewed vehicles. The United States is producing a new nuclear-capable strategic bomber, the B-21, that will be able to perform uncrewed missions in the future, and is developing large undersea uncrewed vehicles that could carry weapons payloads. U.S. military officers have expressed strong reluctance to place nuclear weapons aboard uncrewed platforms. In 2016, then-Commander of Air Force Global Strike Command Gen. Robin Rand noted that the B-21 would always be crewed when carrying nuclear weapons: "If you had to pin me down, I like the man in the loop; the pilot, the woman in the loop, very much, particularly as we do the dual-capable mission with nuclear weapons." General Rand's sentiment may be shared among senior military officers, but it is not official policy. The United States should adopt an official policy that nuclear weapons will not be placed aboard recoverable uncrewed platforms. Establishing this policy could help provide guidance to weapons developers and the services about the appropriate role of uncrewed platforms in nuclear operations as the Department of Defense fields larger uncrewed and optionally crewed platforms.

Nuclear weapons have long been placed on uncrewed delivery vehicles, such as ballistic and cruise missiles, but placing nuclear weapons on a recoverable uncrewed platform such as a bomber is fundamentally different. A human decision to launch a nuclear missile is a decision to carry out a nuclear strike. Humans could send a recoverable, two-way uncrewed platform, such as a drone bomber or undersea autonomous vehicle, out on patrol. In that case, the human decision to launch the nuclear-armed drone would not yet be a decision to carry out a nuclear strike. Instead, the drone could be sent on patrol as an escalation signal or to preposition in case of a later decision to launch a nuclear attack. Doing so would put enormous faith in the drone's communications links and on-board automation, both of which may be unreliable.

The U.S. military has lost control of drones before. In 2017, a small tactical Army drone flew over 600 miles from southern Arizona to Denver after Army operators lost communications. In 2011, a highly sensitive U.S. RQ-170 stealth drone ended up in Iranian hands after U.S. operators lost contact with it over Afghanistan. Losing control of a nuclear-armed drone could cause nuclear weapons to fall into the wrong hands or, in the worst case, escalate a nuclear crisis. The only way to maintain nuclear surety is direct, physical human control over nuclear weapons up until the point of a decision to carry out a nuclear strike.

While the U.S. military would likely be extremely reluctant to place nuclear weapons onboard a drone aircraft or undersea vehicle, Russia is already developing such a system. The Poseidon, or Status-6, undersea autonomous uncrewed vehicle is reportedly intended as a second- or third-strike weapon to deliver a nuclear attack against the United States. How Russia intends to use the weapon is unclear and could evolve over time, but an uncrewed platform like the Poseidon could in principle be sent on patrol, risking dangerous accidents. Other nuclear powers could see value in nuclear-armed drone aircraft or undersea vehicles as these technologies mature.

The United States should build on its current momentum in shaping global norms on military AI use and work with other nations to clarify the dangers of nuclear-armed drones. As a first step, the U.S. Defense Department should clearly state as a matter of official policy that it will not place nuclear weapons on two-way, recoverable uncrewed platforms, such as bombers or undersea vehicles. The United States has at times foresworn dangerous weapons in other areas, such as debris-causing antisatellite weapons, and publicly articulated their dangers. Similarly explaining the dangers of nuclear-armed drones could help shape the behavior of other nuclear powers, potentially forestalling their adoption.

Conclusion

It is imperative that nuclear powers approach the integration of AI and autonomy in their nuclear operations thoughtfully and deliberately. Some applications, such as using AI to help reduce the risk of a surprise attack, could improve stability. Other applications, such as dead hand systems, could be dangerous and destabilizing. Russia's Perimeter and Poseidon systems demonstrate that other nations might be willing to take risks with automation and autonomy that U.S. leaders would see as irresponsible. It is essential for the United States to build on its current momentum to clarify its own policies and work with other nuclear-armed states to seek international agreement on responsible guardrails for AI in nuclear operations. Rumors of a U.S.-Chinese agreement on AI in nuclear command and control at the meeting between President Joseph Biden and General Secretary Xi Jinping offer a tantalizing hint of the possibilities for nuclear powers to come together to guard against the risks of AI integrated into humanity's most dangerous weapons. The United States should seize this moment and not let this opportunity pass to build a safer, more stable future.

Michael Depp is a research associate with the AI safety and stability project at the Center for a New American Security (CNAS).

Paul Scharre is the executive vice president and director of studies at CNAS and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.

Image: U.S. Air Force photo by Senior Airman Jason Wiese

See the original post:
Artificial Intelligence and Nuclear Stability - War On The Rocks

Test Yourself: Which Faces Were Made by A.I.? – The New York Times

Tools powered by artificial intelligence can create lifelike images of people who do not exist.

See if you can identify which of these images are real people and which are A.I.-generated.

Ever since the public release of tools like Dall-E and Midjourney in the past couple of years, the A.I.-generated images they've produced have stoked confusion about breaking news, fashion trends and Taylor Swift.

Distinguishing between a real versus an A.I.-generated face has proved especially confounding.

Research published across multiple studies found that faces of white people created by A.I. systems were perceived as more realistic than genuine photographs of white people, a phenomenon called "hyper-realism."

Researchers believe A.I. tools excel at producing hyper-realistic faces because they were trained on tens of thousands of images of real people. Those training datasets contained images of mostly white people, resulting in hyper-realistic white faces. (The over-reliance on images of white people to train A.I. is a known problem in the tech industry.)

The confusion among participants was less apparent among nonwhite faces, researchers found.

Participants were also asked to indicate how sure they were in their selections, and researchers found that higher confidence correlated with a higher chance of being wrong.

"We were very surprised to see the level of over-confidence that was coming through," said Dr. Amy Dawel, an associate professor at Australian National University, who was an author on two of the studies.

"It points to the thinking styles that make us more vulnerable on the internet and more vulnerable to misinformation," she added.

The idea that A.I.-generated faces could be deemed more authentic than actual people startled experts like Dr. Dawel, who fear that digital fakes could help the spread of false and misleading messages online.

A.I. systems had been capable of producing photorealistic faces for years, though there were typically telltale signs that the images were not real. A.I. systems struggled to create ears that looked like mirror images of each other, for example, or eyes that looked in the same direction.

But as the systems have advanced, the tools have become better at creating faces.

The hyper-realistic faces used in the studies tended to be less distinctive, researchers said, and hewed so closely to average proportions that they failed to arouse suspicion among the participants. And when participants looked at real pictures of people, they seemed to fixate on features that drifted from average proportions, such as a misshapen ear or a larger-than-average nose, considering them a sign of A.I. involvement.

The images in the study came from StyleGAN2, an image model trained on a public repository of photographs containing 69 percent white faces.

Study participants said they relied on a few features to make their decisions, including how proportional the faces were, the appearance of skin, wrinkles, and facial features like eyes.

Continue reading here:
Test Yourself: Which Faces Were Made by A.I.? - The New York Times

History Suggests the Nasdaq Will Surge in 2024: My Top 7 Artificial Intelligence (AI) Growth Stocks to Buy Before It Does – The Motley Fool

The macroeconomic challenges of the past couple of years are beginning to fade, and investors are looking to the future. After the Nasdaq Composite plunged in 2022, suffering its worst performance since 2008, the index enjoyed a robust recovery in 2023 and gained 43%.

There could be more to come. Since the Nasdaq Composite began trading in 1972, in every year following a market recovery, the tech-heavy index rose again -- and those second-year gains averaged 19%. The economy is the wildcard here, though, and it could yet stumble in 2024. But historical patterns suggest that this could be a good year for investors.

Recent developments in the field of artificial intelligence (AI) helped fuel the market's rise last year and will likely drive further gains in 2024. While estimates vary wildly, generative AI is expected to add between $2.6 trillion and $4.4 trillion to the global economy annually over the next few years, according to a study by McKinsey Global Institute. This will result in windfalls for many companies in the field.

Here are my top seven AI stocks to buy for 2024 before the Nasdaq reaches new heights.

Nvidia (NVDA 0.50%) is the poster child for AI innovation. Its graphics processing units (GPUs) are already the industry standard chips in a growing number of AI use cases -- including data centers, cloud computing, and machine learning -- and it quickly adapted its processors for the needs of generative AI. Though it has been ramping up production, the AI chip shortage is expected to last until 2025 as demand keeps growing. The specter of competition looms, but thus far, Nvidia has stayed ahead of its rivals by spending heavily on research and development.

The company's triple-digit percentage year-over-year growth is expected to continue into 2024. Despite its prospects, Nvidia remains remarkably cheap, with a price/earnings-to-growth ratio (PEG ratio) of less than 1 -- the standard for an undervalued stock.
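For readers unfamiliar with the metric, the PEG ratio simply divides a price-to-earnings multiple by the expected earnings growth rate. The sketch below uses made-up numbers purely to show the arithmetic; they are not Nvidia's actual figures.

```python
# Hypothetical numbers to illustrate how a PEG ratio is computed;
# these are not Nvidia's actual figures.
forward_pe = 30.0           # share price divided by estimated next-year EPS
expected_growth_pct = 40.0  # expected annual earnings growth, in percent

peg_ratio = forward_pe / expected_growth_pct
print(f"PEG ratio: {peg_ratio:.2f}")  # 0.75 -- below 1, conventionally read as undervalued
```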

Microsoft (MSFT -0.41%) helped jump-start the AI boom when it invested $13 billion in ChatGPT creator OpenAI, shining a spotlight on generative AI. The company's tech peers jumped on the bandwagon, and the AI gold rush began. Microsoft seized the advantage, integrating OpenAI's technology into its Bing search and a broad cross-section of its cloud-based offerings.

Its productivity-enhancing AI assistant, Copilot, could generate as much as $100 billion in incremental revenue by 2027, according to some analysts, though estimates vary. This and other AI tools already caused Azure Cloud's growth to outpace rivals in Q3, and Microsoft attributed 3 percentage points of that growth to AI.

The stock is selling for 35 times forward earnings, a slight premium to the price-to-earnings ratio of 26 for the S&P 500. Even so, that looks attractive given Microsoft's growth potential.

Alphabet (GOOGL -0.20%) (GOOG -0.10%) has long used AI to improve its search results and the relevance of its digital advertising. The company was quick to recognize the potential of generative AI, imbuing many of its Google and Android products with increased functionality and announcing plans to add new AI tools to its search product. Furthermore, as the world's third-largest cloud infrastructure provider, Google Cloud is suited to offer AI systems to its customers.

A collaboration between Google and Alphabet's AI research lab, DeepMind, gave birth to Gemini, which the company bills as its "largest and most capable AI model." Google Cloud's Vertex AI offers 130 foundational models that help users build and deploy generative AI apps quickly.

Add to that the ongoing rebound in its digital advertising business, and Alphabet's valuation of 27 times earnings seems like a steal.

There's a popular narrative that Amazon (AMZN -0.45%) was late to recognize the opportunities in AI, but the company's history tells a different story. Amazon continues to deploy AI to surface relevant products to shoppers, recommend viewing choices on Prime Video, schedule e-commerce deliveries, and predict inventory levels, among other uses. Most recently, Amazon began testing an AI tool designed to answer shoppers' questions about products.

Amazon Web Services (AWS) makes the most popular generative AI models available to its cloud customers through Bedrock, and is also deploying its purpose-built Inferentia and Trainium chips to accelerate AI workloads on its infrastructure.

Now that inflation has slowed markedly, more consumers and businesses are patronizing Amazon, and AI will help boost its fortunes.

Meta Platforms (META -0.38%) also has a long and distinguished history of using AI to its advantage. From identifying and tagging people in photos to surfacing relevant content on its social media platforms, Meta has never been shy about deploying AI systems.

Unlike some of its big tech rivals, Meta doesn't have a cloud infrastructure service to peddle its AI wares, but it quickly developed a workaround. After developing its open-source Llama AI model, Meta made it available on all the major cloud services -- for a price. Furthermore, Meta offers a suite of free AI-powered tools to help advertisers succeed.

Improving economic conditions will no doubt boost its digital advertising business. And with the stock trading at just 22 times forward earnings, Meta is inexpensive relative to its opportunity.

Palantir Technologies (PLTR 5.07%) has two decades of experience building AI-powered data analytics, and was ready to meet the challenge when AI went mainstream. In just months, the company added generative AI models to its portfolio, layering these atop its data analytics tools. The launch of the Palantir Artificial Intelligence Platform (AIP) has generated a lot of excitement. "Demand for AIP is unlike anything we have seen in the past 20 years," said management.

When fears of a downturn were higher, businesses scaled back on most nonessential spending, including data analytics and AI services, but now, demand for those services is rebounding, particularly in relation to generative AI.

Looking ahead one year, Palantir sports a PEG ratio of less than 1, which helps illustrate how cheap the stock really is.

Tesla (TSLA -1.61%) made a splash by bringing electric vehicles (EVs) into the mainstream. In 2023, its Model Y topped the list of the world's best-selling cars by a comfortable margin, the first EV to do so. However, the magnitude of its future prosperity will likely be linked to AI. The company's "full self-drive" system has yet to live up to its name, but success on that front would be a boon to shareholders.

In Ark Investment Management's Big Ideas 2023 report, the firm estimates that robotaxis could generate $4 trillion in revenue in 2027. With an estimated 2.7 million vehicles on the road collecting data, Tesla could hold an insurmountable technological edge, if it cracks the code on autonomous driving. Some analysts estimate the software is already worth tens of billions of dollars.

Finally, 6 times forward sales is a pretty reasonable valuation for an industry leader with a treasure trove of data.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena has positions in Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, and Tesla. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, and Tesla. The Motley Fool has a disclosure policy.

Read more here:
History Suggests the Nasdaq Will Surge in 2024: My Top 7 Artificial Intelligence (AI) Growth Stocks to Buy Before It Does - The Motley Fool

How is Artificial Intelligence Impacting the Global Economy? – Analytics Insight

The idea of artificial intelligence (AI) is no longer confined to science fiction's visions of the future. It has permeated our daily lives, from personalized shopping recommendations to facial recognition unlocking our phones. But the true impact of AI is unfolding on a grander scale, fundamentally reshaping the global economy.

One of the most hotly debated aspects of AI is its impact on jobs. On the one hand, AI presents an opportunity for increased productivity and automation, leading to economic growth and the creation of new jobs in fields like AI development and data analysis. On the other hand, concerns lurk about job displacement, particularly in sectors with routine tasks susceptible to automation.

Studies estimate that between 40% and 60% of existing jobs are at risk of automation to some degree. Manufacturing, transportation, and administrative tasks are at the forefront, while jobs requiring creativity, empathy, and critical thinking remain less vulnerable. This could widen the existing skill gap and exacerbate income inequality if adequate reskilling and education programs aren't implemented.

However, experts emphasize that AI-driven job losses will likely be a gradual process, allowing time for workforce adaptation. Additionally, AI is expected to create new opportunities in sectors like healthcare, environmental management, and personalized education. The key lies in proactively preparing the workforce for these shifts and ensuring equitable access to new training and education.

Beyond potential job displacement, AI offers immense potential for economic growth by enhancing productivity and sparking innovation. In manufacturing, AI-powered robots can optimize production lines, reduce waste, and improve product quality. In finance, AI algorithms can analyze vast amounts of data to detect fraud, optimize investments, and personalize financial services.

Healthcare benefits from AI-powered systems that analyze medical images to detect diseases with greater accuracy, develop personalized treatment plans, and even assist with robotic surgery. AI also accelerates scientific research by analyzing large datasets and predicting potential breakthroughs in fields like drug discovery and materials science.

These productivity gains and innovations translate into economic growth, increased global trade, and potentially improved living standards. However, harnessing the full potential of AI requires significant investments in infrastructure, data security, and ethical considerations to ensure the responsible development and deployment of these technologies.

The development and adoption of AI is not evenly distributed across the globe. Developed nations like the United States, China, and the European Union currently hold the lead in AI research, development, and implementation. This creates a risk of widening the already existing economic gap between developed and developing nations.

Developing countries may struggle to acquire the necessary infrastructure, resources, and talent to compete in the AI race. This could lead to further economic dependence on developed nations and exacerbate global inequality. To bridge this gap, international cooperation and knowledge sharing are crucial, allowing developing countries to leverage AI for their own economic development and social progress.

As AI becomes increasingly integrated into the economy, ethical considerations take center stage. Issues like data privacy, algorithmic bias, and the potential misuse of AI for surveillance and warfare demand careful attention.

Transparency and accountability in AI development are crucial to ensure that algorithms are free from bias and do not discriminate against individuals or groups. Additionally, discussions regarding the "human in the loop" and the role of human oversight in AI-driven decision-making are essential to prevent unforeseen consequences.

The future of work in the age of AI will likely be characterized by collaboration between humans and machines. Humans will focus on tasks requiring creativity, critical thinking, and emotional intelligence, while AI handles routine tasks and provides data-driven insights. This necessitates significant changes in education and training to equip future generations with the skills needed to thrive in this evolving landscape.

The impact of AI on the global economy is complex and multifaceted. While it presents challenges like job displacement and ethical concerns, it also offers immense opportunities for economic growth, innovation, and improved living standards. By proactively addressing the challenges and harnessing the potential of AI responsibly, we can navigate this technological revolution towards a more inclusive and prosperous future for all.

Originally posted here:
How is Artificial Intelligence Impacting the Global Economy? - Analytics Insight

Building trust in artificial intelligence: lessons from the EU AI Act | The Strategist – The Strategist

Artificial intelligence will radically transform our societies and economies in the next few years. The world's democracies, together, have a duty to minimise the risks this new technology poses through smart regulation, without standing in the way of the many benefits it will bring to people's lives.

There is strong momentum for AI regulation in Australia, following its adoption of a government strategy and a national set of AI ethics. Just as Australia begins to define its regulatory approach, the European Union has reached political agreement on the EU AI Act, the world's first and most comprehensive legal framework on AI. That provides Australia with an opportunity to reap the benefits of the EU's experience.

The EU embraces the idea that AI will bring many positive changes. It will improve the quality and cost-efficiency of our healthcare sector, allowing treatments that are tailored to individual needs. It can make our roads safer and prevent millions of casualties from traffic accidents. It can significantly improve the quality of our harvests, reducing the use of pesticides and fertiliser, and so help feed the world. Last but not least, it can help fight climate change, reducing waste and making our energy systems more sustainable.

But the use of AI isn't without risks, including risks arising from the opacity and complexity of AI systems and from intentional manipulation. Bad actors are eager to get their hands on AI tools to launch sophisticated disinformation campaigns, unleash cyberattacks and step up their fraudulent activities.

Surveys, including some conducted in Australia, show that many people don't fully trust AI. How do we ensure that the AI systems entering our markets are trustworthy?

The EU doesn't believe that it can leave responsible AI wholly to the market. It also rejects the other extreme, the autocratic approach in countries like China of banning AI models that don't endorse government policies. The EU's answer is to protect users and bring trust and predictability to the market through targeted product-safety regulation, focusing primarily on the high-risk applications of AI technologies and powerful general-purpose AI models.

The EU's experience with its legislative process offers five key lessons for approaching AI governance.

First, any regulatory measures must focus on ensuring that AI systems are safe and human-centric before they can be used. To generate the necessary trust, AI systems must be checked for core principles such as non-discrimination, transparency and explainability. AI developers must train their systems on adequate datasets, maintain risk-management systems and provide for technical measures for human oversight. Automated decisions must be explainable; arbitrary "black box" decisions are unacceptable. Deployers must also be transparent and inform users when an AI system generates content such as deepfakes.

Second, rules should focus not on the AI technology itself, which develops at lightning speed, but on governing its use. Focusing on use cases (for example, in health care, finance, recruitment or the justice system) ensures that regulations are future-proof and don't lag behind rapidly evolving AI technologies.

The third lesson is to follow a risk-based approach. Think of AI regulation as a pyramid, with different levels of risk. In most cases, the use of AI poses no or only minimal risks, for example when receiving music recommendations or relying on navigation apps. For such uses, no or soft rules should apply.

However, in a limited number of situations where AI is used, decisions can have material effects on people's lives, for example when AI makes recruitment decisions or decides on mortgage qualifications. In these cases, stricter requirements should apply, and AI systems must be checked for safety before they can be used, as well as monitored after they're deployed. Some uses that pose unacceptable risks to democratic values, such as social scoring systems, should be banned completely.

Specific attention should be given to general-purpose AI models, such as GPT-4, Claude and Gemini. Given their potential for downstream use for a wide variety of tasks, these models should be subject to transparency requirements. Under the EU AI Act, general-purpose AI models will be subject to a tiered approach. All models will be required to provide technical documentation and information on the data used to train them. The most advanced models, which can pose systemic risks to society, will be subject to stricter requirements, including model evaluations (red-teaming), risk identification and mitigation measures, adverse event reporting and adequate cybersecurity protection.

Fourth, enforcement should be effective but not burdensome. The act aligns with the EU's longstanding product-safety approach: certain risky systems need to be assessed before being put on the market, to protect the public. The act classifies AI systems into the high-risk category if they are used in products covered by existing product-safety legislation, and when they are used in certain critical areas, including employment and education. Providers of these systems must ensure that their systems and governance practices conform to regulatory requirements. Designated authorities will oversee providers' conformity assessments and take action on non-compliant providers. For the most advanced general-purpose AI models, the new regulation establishes an EU AI Office to ensure efficient, centralised oversight of the models posing systemic risks to society.

Lastly, developers of AI systems should be held to account when those systems cause harm. The EU is currently updating its liability rules to make it easier for those who have suffered damages from AI systems to bring claims and obtain relief, surely prompting developers to exercise even greater due diligence before putting AI onto the market.

The EU believes an approach built around these five key tenets is balanced and effective. However, while the EU may be the first democracy to establish a comprehensive framework, we need a global approach to be truly effective. For this reason, the EU is also active in international forums, contributing to the progress made, for example, in the G7 and the OECD. To ensure effective compliance, though, we need binding rules. Working closely together as like-minded countries will enable us to shape an international approach to AI that is consistent with, and based on, our shared democratic values.

The EU supports Australia's promising efforts to put in place a robust regulatory framework. Together, Australia and the EU can promote a global standard for AI governance: a standard that boosts innovation, builds public trust and safeguards fundamental rights.

Read this article:
Building trust in artificial intelligence: lessons from the EU AI Act | The Strategist - The Strategist

Sheryl Crow Questions Artificial Intelligence on Anthemic Single Evolution – Rolling Stone

Sheryl Crow has released a new single, "Evolution." The song is the title track off the musician's forthcoming 11th studio LP, out March 29 via The Valory Music Group, and grapples with the future impact of artificial intelligence on humanity and the planet.

"Stephen Hawking worried that A.I. would replace humans," Crow said in a statement. "As a mom, I want to leave a better world for my children, a healthier planet. Is A.I. going to be a benevolent partner in these goals or not? It's unsettling, and this song deals with those anxieties."

Crow recorded the song with Mike Elizondo, who produced the album. It features a memorable guitar solo from Rage Against The Machine guitarist Tom Morello.

"I wrote the song with just me on guitar and vocals, sent it to Mike Elizondo and said, this is bigger than me, can you take a crack at it?" Crow explained. "To me, Tom's playing comes from some other planet. It's a cool bit of kismet that we were inducted into the Rock and Roll Hall of Fame in the same year, and his solo on 'Evolution' just ejects you into space."

Crow previously teased her album with an upbeat single, "Alarm Clock," which was co-written by Crow with Elizondo and Emily Weisband. The single's music video arrived earlier this month. The album announcement in November came as a surprise since Crow publicly said she would not release another full-length album after her 2018 effort Threads.

"Everything is more song oriented now with streaming, and making an album is a huge endeavor," Crow explained in a statement. "I started sending just a couple of demos to Mike, but the songs just kept flowing out of me and it was pretty obvious this was going to be an album."

The musician was inducted into the Rock and Roll Hall of Fame late last year. During the ceremony, Crow performed her 1996 hit "If It Makes You Happy" with Olivia Rodrigo and was joined by Stevie Nicks for "Strong Enough."

Read the original:
Sheryl Crow Questions Artificial Intelligence on Anthemic Single Evolution - Rolling Stone

Pioneering AI artist says the technology is ultimately ‘limiting’, left her ‘burnt out’ – HT Tech

An artist who shook up the cultural world with a haunting female portrait created by artificial intelligence (AI) has decided she's had enough of the new technology for now. Working with AI to create art is ultimately "very frustrating and very limiting," Swedish-based artist and writer Supercomposite told AFP. For the moment, she has stopped working with AI and is writing a screenplay instead, saying her experience with AI art left her "burned out".

"It creates this dopamine path in your brain. It's very addictive to keep pushing that button and getting these results," she said.

Supercomposite created the red-cheeked, hollow-eyed woman called "Loab" in 2022 when she was testing out the new artistic possibilities offered by AI.

Her posts on social media of Loab and about the process to create her went viral, with commentators describing the images as "disturbing" and saying they had "sparked some lengthy ethical conversations around visual aesthetics, art and technology".

Tools like Midjourney, Stable Diffusion and DALL-E have made it possible to generate images from written prompts.

Supercomposite -- whose real name is Steph Maj Swanson and is originally from the United States -- had been looking at so-called "negative prompts", designed to exclude certain elements from an image.

- 'That was the spookiest' -

She typed in the negative prompt "Brando::-1", asking one tool to come up with something as far as possible from the late American actor Marlon Brando.

What appeared at first was a black logo with green lettering that spelt out "DIGITA PNTICS", the 32-year-old told AFP in an interview at the Chaos Communication Congress, which brings the hacker scene together every year in late December in Hamburg.

But when the artist requested the opposite of this again with the query "DIGITA PNTICS skyline logo::-1", the image of "this really sad, haunting looking woman with long hair and red cheeks" appeared for the first time, she said.

The text "Loab" appeared in truncated letters on one of the images -- giving a name to the creature that looked like it sprang from a horror movie.

Swanson then sought to get AI to modify Loab with another request. And to that new generated image, she made another different request, and another. But a strange trend surfaced.

"Sometimes she would reappear, after vanishing for a few generations of the lineage. That was the spookiest," she said.

More disturbingly, Loab appeared regularly alongside children, "sometimes dismembered", and always in a "macabre" and "bloody" world, she said.

Of the hundreds of images including Loab that were generated, Swanson decided not to show those she deemed the most shocking.

- 'My life changed' -

Loab's existence was first revealed in September 2022 in a series of posts on Twitter, since renamed X.

"It became viral, my life changed," she said, explaining how she became "so obsessed" with Loab.

"I wanted to explore who she was, the different scenarios in which she would appear and her limits, to see how far I could push the model."

The reasons for the character's recurring appearance are unclear. Experts have noted it is impossible to know how generative AI interprets abstract requests.

Swanson has not revealed which tool she used to create Loab, wanting to avoid "shifting the focus away from art and onto the makers of the model" and being accused of "marketing," she said.

But her refusal to name Loab's creator has led to doubts over how she was created, with some internet users suspecting Swanson of re-touching the images to create a so-called "creepypasta" -- a kind of digital horror theme cooked up to haunt social networks.

Swanson denied she'd dreamt up or manually altered Loab, saying she took the claims as a compliment: "It meant people were interacting with it."

But it has been over a year since Swanson touched Loab; she says the whole affair left her exhausted and burned out. She has stopped creating AI images and is devoting herself to a screenplay instead.

She summed up her current sentiment about such tools with a quote from South Korean-born video art pioneer Nam June Paik: "I use technology in order to hate it properly".

Original post:
Pioneering AI artist says the technology is ultimately 'limiting', left her 'burnt out' - HT Tech

The Tech Doomers Are Wrong about Artificial Intelligence – National Review

Today, American policy-makers must choose with respect to AI: freedom or technocracy, prosperity or economic insignificance.

Typifying the unseriousness with which many in Washington treat AI, President Joe Biden's inspirations to regulate the technology reportedly include a viewing of Mission: Impossible Dead Reckoning Part One (you know, the famous documentary). Let's hope a rewatch of Ghostbusters doesn't persuade him to prevent urban rampage by nationalizing the marshmallow industry.

Unfortunately, many in government, the media, and private industry share Biden's overwrought fears. Technologists such as Elon Musk often speak of AI primarily as a mortal threat. Softer variations include theories that AI, if not micromanaged by regulators, inevitably will cause mass unemployment or widespread discrimination. One commentator on X (formerly Twitter) recently voiced the maximalist version of this perspective, advocating that we "kill the demon robots before they kill us."

The catastrophist perspective has manifested itself in legislative proposals such as Senators Josh Hawley (R., Mo.) and Richard Blumenthal's (D., Conn.) bill to strip generative AI products of Section 230 protections. In it, the senators aim to create legal carve-outs and special liabilities that categorically disadvantage AI products, not just the nascent supercomputer overlords.

Section 230 protects online content-hosting platforms from civil liability for third-party speech. Without it, those who host websites that allow third-party posts, from micro-bloggers to the largest social-media companies, would face potentially crippling liability for third-party user-generated posts. Removing its protections from AI-generated content would disincentivize investment and innovation in AI without regard for any specific product's potential benefits or risks.

This effort smacks of culture-war-driven technophobia, not clear-eyed policy-making. These irrational fears would retard American economic growth and technological innovation. According to a recent Goldman Sachs report, generative AI could drive a 7% (or almost $7 trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period. Enacting law to discourage American innovation in the sector would hamstring U.S. firms that compete with foreign firms (e.g., Chinese firms). This not only would make American consumers poorer, but it likely would end decades of America's global technological dominance, which Washington has thus far promoted through light-touch regulation.

Largely free from technocratic strangulation, the U.S. digital economy has generated tremendous wealth. In 2021, it accounted for $3.70 trillion of gross output, $2.41 trillion of value added (translating to 10.3 percent of U.S. gross domestic product (GDP)), $1.24 trillion of compensation, and 8.0 million jobs, according to the Bureau of Economic Analysis. Should Washington impose significant new burdens (related to AI or otherwise), the tech sectors productivity would atrophy accordingly.

By framing AI policy in apocalyptic terms, policy-makers ignore the fact that most AI-enabled products have more to do with mundane activities such as shipping logistics, data analysis, and spell-check than with supercomputers trying to take over the world. These common tools, which never star in movies, help individuals complete ordinary daily tasks or businesses increase operational efficiencies.

For example, the aforementioned Hawley-Blumenthal bill would impact many common tools including Grammarly, Vimeo, and smartphone cameras, as the R Street Institute's Shoshana Weissmann explains. "Because it's impossible to know if content will be used in illegal ways, it's unclear how these companies could comply with the law without removing all AI features from their products," Weissmann writes. The resulting deluge of lawsuits could bring AI development in the United States to a grinding halt.

Government certainly should monitor advanced systems that could (if abused) threaten national security. Regulatory regimes must grow from realistic assessments of risk rather than Hollywood plotlines. Moreover, they must promote permissionless innovation and, in turn, economic and technological dynamism.

Prosperity occurs where government opts against erecting barriers to private citizens innovating, collaborating, trading, and pursuing their own goals. This dynamic has asserted itself throughout history, from Ancient Egypt to post-communist Europe and China. It has caused America's relatively free tech sector to dominate, and Europe's heavily regulated tech sector to stagnate.

Today, American policy-makers must choose with respect to AI: freedom or technocracy, prosperity or economic insignificance.

Continue reading here:
The Tech Doomers Are Wrong about Artificial Intelligence - National Review

2 Stock-Spilt Stocks Crushing It in Artificial Intelligence That Could Soar in 2024 – The Motley Fool

Data from Grand View Research shows that the artificial intelligence (AI) market is projected to expand at a compound annual growth rate of 37% through 2030, which would see it exceed a value of $1 trillion before the decade's end. So it's not surprising that countless tech firms have restructured their businesses to prioritize AI, thus creating multiple ways to invest in the budding industry.

Despite a surge in AI stocks last year, the market's immense potential indicates it's not too late for new investors to see major gains from the market. Meanwhile, companies that have recently split their shares are attractive options, as the move is often followed by significant growth.

Here are two stock-split stocks crushing it in AI that could soar in 2024.

Nvidia's (NVDA) business has exploded in recent years, with its shares soaring more than 1,300% since 2019. Stellar growth led management to trigger a 4-to-1 stock split in July 2021, its fifth split since 2000. And the company appears to be just getting started.

Over the last 12 months, Nvidia emerged as one of the biggest names in artificial intelligence, achieving an estimated 90% market share in AI chips. The company's years of dominance in graphics processing units (GPUs) allowed it to get a head start, while rivals like AMD and Intel have yet to catch up.

Increased demand for AI GPUs has seen Nvidia's earnings soar. In the third quarter of fiscal 2024 (ended October 2023), Nvidia posted revenue growth of 206%, with operating income up more than 1,600% thanks to a spike in chip sales in its data center segment.

[Chart: Nvidia earnings-per-share estimates through fiscal 2026. Data by YCharts.]

This chart shows Nvidia's earnings could hit $24 per share by fiscal 2026. That figure, multiplied by its forward price-to-earnings ratio of 45, implies a potential stock price of $1,080, projecting growth of 97% over the next two fiscal years.
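For readers who want to retrace that arithmetic, here is a minimal Python sketch of the same back-of-the-envelope valuation; the implied starting price is simply backed out of the article's own figures rather than taken from market data.

    # Reproducing the article's back-of-the-envelope price target.
    eps_fy2026 = 24.0     # estimated earnings per share by fiscal 2026 (from the article)
    forward_pe = 45.0     # forward price-to-earnings multiple cited in the article
    implied_price = eps_fy2026 * forward_pe             # 24 * 45 = $1,080
    implied_growth = 0.97                                # ~97% upside cited in the article
    implied_current_price = implied_price / (1 + implied_growth)  # derived, not quoted
    print(f"Implied price target: ${implied_price:,.0f}")
    print(f"Starting price consistent with 97% upside: ~${implied_current_price:,.0f}")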

As a leading chipmaker, Nvidia has a lucrative role in AI and tech in general. The company will need to contend with increased competition this year as other companies release new chips. However, its dominance will be challenging to overcome.

Meanwhile, the market's growth potential suggests there's enough room for Nvidia to retain its lead and welcome newcomers. As a result, this stock-split stock is too good to pass up in the new year.

As the home of potent brands like Google, Android, and YouTube, Alphabet (GOOG) (GOOGL) plays an undeniably powerful role in tech. Its stock has risen 402% over the last decade, with its most recent split, a 20-to-1, occurring in July 2022.

Much of the company's success over the years stems from the billions of users its services attract. Alphabet has used its massive user base to build a lucrative digital advertising business, responsible for about 25% of the $740 billion digital ad market. Popular platforms like Google Search and YouTube present almost endless advertising opportunities for the company and have helped its earnings soar in recent years.

Since 2019, Alphabet's annual revenue has risen 75%, with operating income up 108%. Meanwhile, the company's free cash flow has climbed 200% in the last five years to $78 billion, indicating that Alphabet has the funds to invest heavily in its research and development and venture into burgeoning areas of tech -- such as AI.

In December, the tech giant unveiled its highly anticipated AI model, Gemini, which is expected to compete with OpenAI's GPT-4. The new model could open the door to countless growth opportunities in AI for Alphabet.

Gemini and the popularity of platforms like Google Search, Cloud, and YouTube could be a powerful combination. The company could have an advantage in AI with the ability to create a Search experience closer to ChatGPT, add new AI tools on Google Cloud, offer more efficient advertising, and better track viewing trends on YouTube.

[Charts: forward P/E and price-to-free-cash-flow ratios for Alphabet, Microsoft, and Amazon. Data by YCharts.]

These charts show Alphabet's stock is also significantly cheaper than that of its biggest competitors in AI, fellow cloud giants Microsoft and Amazon. The company has lower figures in two key valuation metrics: forward P/E and price-to-free cash flow (P/FCF) ratios. Forward P/E is calculated by dividing a company's current share price by its estimated future earnings per share. Meanwhile, P/FCF divides its market cap by free cash flow. For both metrics, the lower the figure, the better the value.

Forward P/E and P/FCF are useful ways to gauge the value of a company's shares because they weigh its financial performance against its stock price. In this case, Alphabet is a far bigger bargain than Microsoft or Amazon.
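As a minimal Python sketch of those two ratios: the inputs below are placeholders except for the $78 billion free-cash-flow figure quoted above; the share price, earnings estimate, and market cap are hypothetical.

    # Minimal sketch of the two valuation ratios described above.
    # Inputs are hypothetical placeholders, not the YCharts values shown in the charts.
    def forward_pe(share_price: float, est_future_eps: float) -> float:
        """Forward P/E: current share price / estimated future earnings per share."""
        return share_price / est_future_eps

    def price_to_fcf(market_cap: float, free_cash_flow: float) -> float:
        """P/FCF: market capitalization / trailing free cash flow."""
        return market_cap / free_cash_flow

    print(forward_pe(share_price=140.0, est_future_eps=6.5))       # placeholder inputs
    print(price_to_fcf(market_cap=1.75e12, free_cash_flow=78e9))   # FCF figure from the article

For both ratios, a lower result means more earnings or cash flow per dollar of share price, which is why the article treats lower readings as better value.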

With a solid outlook in AI and consistent financial growth, Alphabet looks like a screaming buy in 2024.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Dani Cook has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Microsoft, and Nvidia. The Motley Fool recommends Intel and recommends the following options: long January 2023 $57.50 calls on Intel, long January 2025 $45 calls on Intel, and short February 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.

Link:
2 Stock-Split Stocks Crushing It in Artificial Intelligence That Could Soar in 2024 - The Motley Fool

1 Artificial Intelligence (AI) Growth Stock to Buy Hand Over Fist in 2024 – The Motley Fool

Artificial intelligence (AI) could pave the way for dramatic productivity improvements and even help cure life-threatening diseases. This incredible tech movement is still just starting to affect the world.

At the same time, AI is also being used by bad actors to carry out cyberattacks that have the potential to be hugely destabilizing. The scale and capabilities of these AI-powered cyberattacks will continue to evolve. Thankfully, top cybersecurity companies have scale and resource advantages that should help them combat the rising tide of threats. And the increasing importance of shutting down such threats suggests that investors have opportunities to score big long-term wins.

If you're looking to capitalize on surging AI and cybersecurity trends, CrowdStrike (CRWD) looks like a great stock to consider right now.

CrowdStrike's core service is software that prevents hardware devices from being compromised and used to attack networks. The company's cloud-based Falcon platform uses AI and machine-learning technologies to fend off threats and adapt to new forms of attack. Falcon offers best-in-class protection, and demand for its capabilities is surging.

Not only is the cybersecurity specialist continuing to attract new customers, but it's also seeing increased spending from existing ones. Right now, 63% of customers use five or more of the company's more than two dozen modules. Meanwhile, 42% of customers use at least six modules, and 26% of its client base uses at least seven.

For comparison, at the end of last year's third quarter, 60% of customers were using at least five modules, 36% at least six, and 21% at least seven. Expanded spending from Falcon customers suggests that the platform is delivering high performance and value.

Thanks to customer additions and strong net revenue retention, CrowdStrike's sales climbed 35% year over year to reach $786 million in the third quarter. Meanwhile, non-GAAP (adjusted) net income more than doubled to hit $199.2 million.

But while CrowdStrike has been using efficiency initiatives to help boost its earnings, the company hasn't been skimping on research and development (R&D). R&D spending jumped 38% year over year across its last three reported quarters, reaching $410 million.

CrowdStrike continues to invest heavily to spur initiatives capable of generating long-term growth and fortifying its competitive positioning in the cybersecurity space.

CrowdStrike has generated $655 million in free cash flow across the first three quarters of its current fiscal year, up 40% year over year. That works out to roughly 30% of revenue for the period, a very impressive performance.
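Restating that free-cash-flow arithmetic as a quick Python check: the only inputs are the figures quoted above, and the revenue number is derived from them, not reported by the company.

    # The article's free-cash-flow margin arithmetic, restated.
    fcf_first_three_quarters = 655e6   # free cash flow cited in the article (USD)
    fcf_margin = 0.30                  # ~30% of revenue, per the article
    implied_revenue = fcf_first_three_quarters / fcf_margin
    print(f"Implied revenue over the period: ~${implied_revenue / 1e9:.2f} billion")
    # Roughly $2.2 billion, consistent with the $786 million of Q3 revenue cited above.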

Even better, the company sees room for continued margin expansion over the long term. With time, company management believes that CrowdStrike can consistently hit a subscription gross margin between 82% and 85%. Meanwhile, it expects to reach an operating income margin between 28% and 32% and a free cash flow margin between 34% and 38%.

[Chart: CrowdStrike (CRWD) forward P/E ratio. Data by YCharts.]

Even with the stock trading at 93 times this year's expected earnings, CrowdStrike looks attractively valued. The business is posting impressive growth, and excellent margins combined with a favorable long-term demand outlook suggest that the stock can continue to serve up wins for long-term investors.

The need for high-performance cybersecurity services will only continue to grow. CrowdStrike's leadership position in AI-enhanced protections puts the company in a good position to benefit as demand continues to increase and industry consolidation trends funnel sales to top players in the space.

For long-term investors interested in capitalizing on the intersection of powerful AI and cybersecurity demand tailwinds, CrowdStrike stock has the makings of a massive long-term winner.

Go here to see the original:
1 Artificial Intelligence (AI) Growth Stock to Buy Hand Over Fist in 2024 - The Motley Fool