Category Archives: AI
New Class of Antibiotics Discovered Using AI – Scientific American
December 20, 2023
4 min read
A deep-learning algorithm helped identify new compounds that are effective against antibiotic-resistant infections in mice, opening the door to AI-guided drug discovery
By Tanya Lewis
Antibiotic resistance is among the biggest global threats to human health. It was directly responsible for an estimated 1.27 million deaths in 2019 and contributed to nearly five million more. The problem only got worse during the COVID pandemic. And no new classes of antibiotics have been developed for decades.
Now researchers report that they have used artificial intelligence to discover a new class of antibiotic candidates. A team at the laboratory of James Collins of the Broad Institute of the Massachusetts Institute of Technology and Harvard University used a type of AI known as deep learning to screen millions of compounds for antibiotic activity. They then tested 283 promising compounds in mice and found several that were effective against methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci, some of the most stubbornly hard-to-kill pathogens. Unlike a typical AI model, which operates as an inscrutable black box, it was possible to follow this model's reasoning and understand the biochemistry behind it.
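The study itself used deep neural networks trained on molecular structures, but the screen-and-rank workflow it describes can be illustrated with a much simpler model. Below is a minimal Python sketch, assuming RDKit and scikit-learn are installed; the SMILES strings, activity labels, and compound library are placeholder stand-ins, and the random-forest-on-fingerprints classifier is a deliberately simplified substitute for the paper's deep-learning model.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles, n_bits=2048):
    """Convert a SMILES string to a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits))

# Placeholder training data: (compound, 1 = inhibits bacterial growth, 0 = inactive).
train = [("CCO", 0), ("c1ccccc1O", 1),
         ("CC(=O)Oc1ccccc1C(=O)O", 0), ("Nc1ccc(S(N)(=O)=O)cc1", 1)]
X = np.array([fingerprint(s) for s, _ in train])
y = np.array([label for _, label in train])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# "Screening": score a compound library (tiny here, millions-strong in the study)
# and keep the highest-ranked candidates for lab testing.
library = ["CCN", "c1ccc2ccccc2c1", "CC(C)Cc1ccc(C(C)C(=O)O)cc1"]
scores = model.predict_proba(np.array([fingerprint(s) for s in library]))[:, 1]
for smiles, score in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {smiles}")
```

The ranking step is the key economy: the model is cheap to evaluate, so it can triage an enormous library down to the few hundred compounds, like the 283 in the study, that are worth testing in the lab.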
The development builds on previous research by this group and others, including work by César de la Fuente, an assistant professor in the department of psychiatry at the University of Pennsylvania's Perelman School of Medicine, and his colleagues. Scientific American spoke with de la Fuente about the significance of the new study for using AI to help guide the development of new antibiotics.
[An edited transcript of the interview follows.]
How significant is this finding of a new class of antibiotics using AI?
I'm very excited about this new work at the Collins Lab. I think this is a great next breakthrough. It's an area of research that was not even a field until five years ago. It's an extremely exciting and very emerging area of work, where the main goal is to use AI for antibiotic discovery and antibiotic design. My own laboratory has been working toward this for the past half-decade. In this study, the researchers used deep learning to try to discover a new type of antibiotic. They also implemented notions of explainable AI, which is interesting, because when we think about machine learning and deep learning, we think of them as black boxes. So I think it's interesting to start incorporating explainability into some of the models we're building that apply AI to biology and chemistry. The authors were able to find a couple of compounds that seemed to reduce infection in mouse models, so that's always exciting.
What advantage does AI have over humans in being able to screen and identify new antibiotic compounds?
AI and machines in general can systematically and very rapidly mine structures or any sort of dataset that you give them. If you think about the traditional antibiotic discovery pipeline, it takes around 12 years to discover a new antibiotic, and it takes between three and six years to discover any clinical candidates. Then you have to transition them to phase I, phase II and phase III clinical trials. Now, with machines, we've been able to accelerate that. In my and my colleagues' own work, for example, we can discover in a matter of hours thousands or hundreds of thousands of preclinical candidates instead of having to wait three to six years. I think AI in general has enabled that. And I think another example of that is this work by the Collins Lab, where, by using deep learning in this case, the team has been able to sort through millions of chemical compounds to identify a couple that seemed promising. That would be very hard to do manually.
What are the next steps needed in order to translate this new class of antibiotics into a clinical drug?
There's still a gap there. You will need systematic toxicity studies and then pre-IND [investigational new drug] studies. The U.S. Food and Drug Administration requires you to do these studies to assess whether your potentially exciting drug could transition into phase I clinical trials, which is the first stage in any clinical trial. So those different steps still need to take place. But again, I think this is another very exciting advance in this really emerging area of using AI in the field of microbiology and antibiotics. The dream we have is that hopefully someday AI will create antibiotics that can save lives.
The compounds identified in this new study were effective at killing microbes such as MRSA in mice, right?
Yes, they showed that in two mouse models, which is interesting. Whenever you have mouse infection data, that's always a lot more exciting; it shows those compounds were actually able to reduce infection in realistic mouse models.
As another example of using AI, we recently mined the genomes and proteomes of extinct organisms in my own lab, and we were able to identify a number of clinical antibiotic candidates.
Why is it important that the AI model is explainable?
I think it's important if we are to think about AI as an engineering discipline someday. In engineering, you're always able to take apart the different pieces that constitute some sort of structure, and you understand what each piece is doing. But in the case of AI, and particularly deep learning, because it's a black box, we don't know what happens in the middle. It's very difficult to re-create what happened in order to give us compound X or Y or solution X or Y. So beginning to dig into the black box to see what's actually happening in each of those steps is a critical step for us to be able to turn AI into an engineering discipline. A first step in the right direction is to use explainable AI in order to try to comprehend what the machine is actually doing. It becomes less of a black box, perhaps a gray box.
3 Up-and-Coming Artificial Intelligence (AI) Stocks to Buy in 2024 – The Motley Fool
Artificial intelligence was a hot field in 2023, leading to soaring stock prices for big-name tech companies like Nvidia (thanks to its advanced chips) and Microsoft (thanks to its partnership with ChatGPT creator OpenAI). Investors who didn't buy these stocks before the AI frenzy drove up share prices may feel they've missed out.
Fortunately, plenty of up-and-coming tech firms provide new opportunities to benefit from the advent of AI, and 2024 is a good time to scoop up shares of some of these rising stars. Here is a trio of young tech companies well-positioned to deliver robust returns in the new year.
The transformative power of AI is particularly evident in Symbotic (SYM 1.42%). The company specializes in providing warehouses with robotic workers managed by AI. These robots can process freight quickly, accurately, and safely alongside humans. And Symbotic's AI can continuously analyze and refine the work performed by the robots, routinely improving their efficiency.
The company's customers include Walmart, which owns a stake in Symbotic, and Southern Glazer's Wine and Spirits, the largest distributor of alcoholic beverages in the U.S.
But Symbotic is just getting started. In its 2023 fiscal year, ended September 30, Symbotic had installed 12 systems for customers, a substantial jump from 2022's seven. This growth translated into fiscal 2023 revenue of $1.2 billion, nearly double the sales generated in the prior year.
More revenue growth lies ahead for the company. Symbotic was in the process of installing 35 robotic systems at the end of fiscal 2023, more than double the 17 systems that were in process the previous year. As a result, the company anticipates fiscal Q1 revenue of at least $350 million, up from the prior year's $206.3 million.
UiPath (PATH 0.63%) provides clients with an AI platform that can analyze their business workflows, identify areas for improvement, and then automate those tasks. Organizations are flocking to UiPath's AI solution, particularly in industries such as finance, healthcare, and government, since these sectors involve a ton of administrative tasks that AI can handle.
UiPath's success is seen in its strong sales growth. The company's revenue of $325.9 million in its fiscal third quarter, ended October 31, represented a 24% year-over-year increase. The company expects more revenue growth in Q4, forecasting at least $381 million versus the prior year's $308.5 million.
Like the other high-growth tech companies on this list, UiPath is not yet profitable despite its strong sales. But UiPath made a concerted effort over the past year to rein in costs, so its fiscal Q3 net loss of $31.5 million was a substantial improvement on the prior year's loss of $57.7 million. This is a positive sign of the company's improving financial health.
Another positive is its improvement in free cash flow (FCF). UiPath's Q3 adjusted FCF was $44 million, up from negative FCF of $24.1 million in the prior year.
IonQ (IONQ -1.09%) operates in the emerging field of quantum computing. Quantum computers offer the potential for AI to evolve exponentially, because once quantum technology progresses far enough, these machines will be able to perform complex calculations beyond the abilities of the world's most powerful supercomputers.
Quantum machines are potent since they use quantum physics to perform multiple computing tasks simultaneously, rather than processing them sequentially like today's computers. IonQ developed quantum computers in 2023 that achieved 29 algorithmic qubits.
This milestone signals IonQ could reach 35 algorithmic qubits in 2024. Algorithmic qubits are a benchmark measuring a system's ability to run quantum workloads. The higher the number, the more computing work the quantum machine can successfully complete.
At 35 algorithmic qubits, IonQ's system will be on the verge of exceeding the abilities of today's conventional computers, and the emergence of quantum-powered AI can begin.
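Algorithmic qubits are IonQ's own application-level benchmark rather than a raw hardware count, but a quick back-of-the-envelope calculation shows why each additional qubit matters: an n-qubit register spans 2^n amplitudes, so the memory needed to simulate it on a classical machine doubles with every qubit. The short Python sketch below makes that concrete, assuming one complex128 amplitude (16 bytes) per basis state.

```python
# Memory needed to hold a full n-qubit state vector on a classical machine,
# assuming one complex128 amplitude (16 bytes) per basis state.
BYTES_PER_AMPLITUDE = 16

for n in (29, 35):
    states = 2 ** n
    gib = states * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} qubits -> {states:,} basis states, ~{gib:,.0f} GiB to simulate")

# 29 qubits fits in ~8 GiB (a laptop-class machine); 35 qubits needs ~512 GiB,
# which is why that range sits near the edge of conventional computers' reach.
```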
IonQ generates revenue by charging for access to its quantum technology, and that revenue is rising quickly. The company's Q3 sales zoomed up 122% year over year to $6.1 million. Through three quarters, IonQ's 2023 revenue stood at $15.9 million, more than double 2022's $7.3 million.
As its sales success shows, IonQ's technology is attracting customers. In September the company signed a deal with the U.S. Air Force worth $25.5 million to provide it with a quantum system.
Because IonQ, UiPath, and Symbotic are all nascent businesses successfully capturing customers in their respective fields, they possess the potential for years of sales growth ahead, making them worthwhile buys for 2024 -- or at least worthy of going on your watchlist. And given how fast their revenue is rising, they're great stocks for growth investors.
This Blue Chip Artificial Intelligence (AI) Stock Is a Buy for 2024 – The Motley Fool
The rise of artificial intelligence (AI) in 2023 sent many tech stocks soaring. As a result, a plethora of businesses touted AI capabilities. Sifting through them to figure out which are worthwhile long-term investments can prove challenging.
But one blue-chip stock possesses so many compelling qualities, it makes sense to pick up shares and hold on to them through 2024 and beyond. That stock is tech stalwart International Business Machines (IBM 0.85%).
It may be a good time to buy IBM stock, and not because a new year is upon us. At the time of this writing, Big Blue's share price has retreated a bit from its 52-week high of $166.34, reached on December 12. And now, consider these other factors that make IBM a good long-term investment.
Before Arvind Krishna, who used to oversee IBM's cloud computing and AI division, rose to the CEO spot in 2020, Big Blue was struggling under the weight of a vast organization with too many irons in the fire. Mr. Krishna focused the company on AI and cloud computing, while divesting businesses that no longer made sense for the company.
Today's IBM is leaner, and now on a growth trajectory thanks to these moves. The company's third-quarter revenue jumped 5% year over year to $14.8 billion as a number of areas across its businesses experienced growth.
IBM's data and AI division saw revenue rise 6% year over year, while its Red Hat cloud computing solution increased by 9% as organizations continue to migrate IT operations to the cloud.
IBM also possesses a substantial consulting business, which grew revenue 6% year over year to $5 billion. IBM's clients are looking for help integrating AI capabilities into their businesses, which led to growth in Big Blue's consulting division. As more businesses seek to capitalize on the advent of AI, IBM's consulting capabilities are likely to prosper.
IBM's work with AI technology stretches back to the 1950s. Its latest AI platform, watsonx, debuted in July. This platform is helping IBM clients achieve business improvements such as automating mundane operational tasks, improving customer service, and modernizing the software code used in their organizations. AI clients include Samsung Electronics and NASA.
Big Blue is continuously enhancing its AI platform. For example, on December 18, IBM announced its acquisition of two companies from Software AG, which will help watsonx integrate with a customer's systems and ingest the mountains of data needed for accurate AI decision-making.
The company is also working in the emerging field of quantum computing, which offers key technology in AI's evolution. These machines use quantum physics to perform calculations multi-dimensionally rather than with the sequential approach used by today's computers.
This allows quantum machines to perform calculations too complex for even the most powerful supercomputers on the planet, and that kind of potency can substantially advance AI's capabilities. In fact, customers today can use watsonx to perform quantum code programming. Customers using IBM's quantum computing technology include the U.S. government and Harvard University.
Although IBM competes against other well-known tech firms, such as Microsoft, in the AI and cloud computing industries, these markets are large enough to support multiple players. Moreover, IBM's revenue growth shows it is successfully capturing its share of customers.
And in contemplating an investment in IBM, consider Big Blue's stock valuation versus rival Microsoft's. IBM's price-to-earnings ratio (P/E ratio) over the trailing 12 months is just under 22, whereas Microsoft's P/E multiple of 36 is significantly higher, suggesting IBM is the better value.
And its value to investors doesn't stop there. IBM offers a robust dividend, currently yielding over 4%, which can provide you with years of passive income. The company has paid dividends since 1916 and boasts an impressive streak of dividend increases spanning 28 consecutive years.
IBM's growing business, driven by its ever-evolving AI and cloud computing technologies, its attractive dividend, and its reasonable valuation combine to make this blue-chip stock a solid investment for 2024 and beyond.
Robert Izquierdo has positions in International Business Machines and Microsoft. The Motley Fool has positions in and recommends Microsoft. The Motley Fool recommends International Business Machines. The Motley Fool has a disclosure policy.
Donald Trump said an ad used AI to make him look bad. The clips are real. – Tampa Bay Times
Published Dec. 22
Former President Donald Trump has a few gripes with the Lincoln Project, a political advocacy group composed of Republicans who oppose Trump's leadership. A recent complaint: that the group is showing altered footage of him committing gaffes.
"The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden," Trump posted Dec. 4 on Truth Social.
In the Lincoln Project's Dec. 4 video, titled "Feeble," a narrator addresses Trump directly with a taunt. "Hey, Donald," the female voice says. "We notice something. More and more people are saying it. You're weak. You seem unsteady. You need help getting around." The video flashes through scenes showing Trump tripping over his words, gesturing, misspeaking and climbing steps to a plane with something white stuck to his shoe.
Are these clips the work of AI? We reviewed them and found the Trump clips are legitimate and not generated using AI. We reached out to the Trump campaign but did not hear back.
The Lincoln Project posted on X, formerly Twitter, that its "Feeble" ad was not AI-generated. We also looked at two other ads the group published in the days preceding Trump's post and found no evidence they included AI-generated content, either.
We identified the origin of all but one of the 31 photos and videos used in the "Feeble" ad, 21 of them featuring Trump. We've corroborated them with footage from C-SPAN, news outlets, and/or government archives. In some of the clips, Trump is trying to publicly mock President Joe Biden, which the Lincoln Project ad does not make clear.
For good measure, we also checked the clips in the video that didn't feature Trump. These included clips and photos of Biden and stock videos. None of them were AI-generated, either.
We were unable to find the source for a 1-second video of Biden smiling at the 0:45 timestamp in the ad.
But of the 21 Trump-related images and clips in the ad, we found no evidence they were created or altered using AI.
The Lincoln Project also uploaded two other ads near the time of Trump's post that appeared to attack Trump. One, called "Christian Trump," was also published on YouTube on Dec. 4. Another, titled "Welcome to the clown show," was uploaded Dec. 3.
We checked those, too, and found no evidence that AI was used to alter Trump's appearance or make him seem to say something he didn't.
At the 1:09 timestamp of "Christian Trump," the Lincoln Project included a photo of Bibles stacked in a bathroom, which appears to have been altered. The original photo shows a bathroom in Trump's Mar-a-Lago estate in Palm Beach, which an indictment said was used to store boxes of records; it did not include a stack of Bibles.
In "Welcome to the clown show," we were unable to identify the source for a clip of a person talking about his preferred leader at the 0:58 timestamp. We were also unable to identify the source of the audio at the end of "Christian Trump," which sounds like Trump saying "Jesus Christ."
But there were no AI-generated clips of Trump's likeness.
We rate Trump's claim that the Lincoln Project is using AI in its television commercials about Trump False.
PolitiFact Researcher Caryn Baird contributed to this report.
AI image-generators are being trained on explicit photos of children, a study shows – The Associated Press
Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.
Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.
Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they've learned from two separate buckets of online images: adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. It said roughly 1,000 of the images it found were externally validated.
The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatorys report, LAION told The Associated Press it was temporarily removing its datasets.
LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, said in a statement that it "has a zero tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them."
While the images account for just a fraction of LAION's index of some 5.8 billion images, the Stanford group says it is likely influencing the ability of AI tools to generate harmful outputs and reinforcing the prior abuse of real victims who appear multiple times.
It's not an easy problem to fix, and it traces back to many generative AI projects being effectively rushed to market and made widely accessible because the field is so competitive, said Stanford Internet Observatory's chief technologist David Thiel, who authored the report.
"Taking an entire internet-wide scrape and making that dataset to train models is something that should have been confined to a research operation, if anything, and is not something that should have been open-sourced without a lot more rigorous attention," Thiel said in an interview.
A prominent LAION user that helped shape the dataset's development is London-based startup Stability AI, maker of the Stable Diffusion text-to-image models. New versions of Stable Diffusion have made it much harder to create harmful content, but an older version introduced last year, which Stability AI says it didn't release, is still baked into other applications and tools and remains the most popular model for generating explicit imagery, according to the Stanford report.
"We can't take that back. That model is in the hands of many people on their local machines," said Lloyd Richardson, director of information technology at the Canadian Centre for Child Protection, which runs Canada's hotline for reporting online sexual exploitation.
Stability AI on Wednesday said it only hosts filtered versions of Stable Diffusion and that "since taking over the exclusive development of Stable Diffusion, Stability AI has taken proactive steps to mitigate the risk of misuse."
"Those filters remove unsafe content from reaching the models," the company said in a prepared statement. "By removing that content before it ever reaches the model, we can help to prevent the model from generating unsafe content."
LAION was the brainchild of a German researcher and teacher, Christoph Schuhmann, who told the AP earlier this year that part of the reason to make such a huge visual database publicly accessible was to ensure that the future of AI development isn't controlled by a handful of powerful companies.
"It will be much safer and much more fair if we can democratize it so that the whole research community and the whole general public can benefit from it," he said.
About the use of AI image-generators to produce illicit images
The problem: Schools and law enforcement have been alarmed at the use of AI tools -- some more accessible than others -- to produce realistic and explicit deepfake images of children. In a growing number of cases, teens have been using the tools to transform real photos of their fully clothed peers into nudes.
How it happens: Without proper safeguards, some AI systems have been able to generate child sexual abuse imagery when prompted to do so because they're able to produce novel images based on what they've learned from the patterns of a huge trove of real images pulled from across the internet, including adult pornography and benign photos of kids. Some systems have also been trained on actual child sexual abuse imagery, including more than 3,200 images found in the giant AI database LAION, according to a report from the Stanford Internet Observatory.
Solutions: The Stanford Internet Observatory and other organizations combating child abuse are urging AI researchers and tech companies to do a better job excluding harmful material from the training datasets that are the foundations for building AI tools. It's hard to put open-source AI models back in the box when they're already widely accessible, so they're also urging companies to do what they can to take down tools that lack strong filters and are known to be favored by abusers.
Much of LAION's data comes from another source, Common Crawl, a repository of data constantly trawled from the open internet, but Common Crawl's executive director, Rich Skrenta, said it was incumbent on LAION to scan and filter what it took before making use of it.
LAION said this week it developed "rigorous filters" to detect and remove illegal content before releasing its datasets and is still working to improve those filters. The Stanford report acknowledged LAION's developers made some attempts to filter out underage explicit content but might have done a better job had they consulted earlier with child safety experts.
Many text-to-image generators are derived in some way from the LAION database, though it's not always clear which ones. OpenAI, maker of DALL-E and ChatGPT, said it doesn't use LAION and has fine-tuned its models to refuse requests for sexual content involving minors.
Google built its text-to-image Imagen model based on a LAION dataset but decided against making it public in 2022 after an audit of the database uncovered a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes.
Trying to clean up the data retroactively is difficult, so the Stanford Internet Observatory is calling for more drastic measures. One is for anyone who's built training sets off of LAION-5B, named for the more than 5 billion image-text pairs it contains, to delete them or work with intermediaries to clean the material. Another is to effectively make an older version of Stable Diffusion disappear from all but the darkest corners of the internet.
"Legitimate platforms can stop offering versions of it for download, particularly if they are frequently used to generate abusive images and have no safeguards to block them," Thiel said.
As an example, Thiel called out CivitAI, a platform that's favored by people making AI-generated pornography but which he said lacks safety measures to weigh it against making images of children. The report also calls on AI company Hugging Face, which distributes the training data for models, to implement better methods to report and remove links to abusive material.
Hugging Face said it is regularly working with regulators and child safety groups to identify and remove abusive material. Meanwhile, CivitAI said it has strict policies on the generation of images depicting children and has rolled out updates to provide more safeguards. The company also said it is working to ensure its policies are adapting and growing as the technology evolves.
The Stanford report also questions whether any photos of children, even the most benign, should be fed into AI systems without their family's consent due to protections in the federal Children's Online Privacy Protection Act.
Rebecca Portnoff, the director of data science at the anti-child sexual abuse organization Thorn, said her organization has conducted research that shows the prevalence of AI-generated images among abusers is small, but growing consistently.
Developers can mitigate these harms by making sure the datasets they use to develop AI models are clean of abuse materials. Portnoff said there are also opportunities to mitigate harmful uses down the line after models are already in circulation.
Tech companies and child safety groups currently assign videos and images a hash, a unique digital signature, to track and take down child abuse materials. According to Portnoff, the same concept can be applied to AI models that are being misused.
"It's not currently happening," she said. "But it's something that in my opinion can and should be done."
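A minimal sketch of the hash-matching idea follows. Production systems rely on perceptual hashes such as PhotoDNA, which survive resizing and re-encoding; the exact-match SHA-256 version below is a simplification to illustrate the mechanism, and the known-hash set is a placeholder, not a real industry list.

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute a file's SHA-256 digest, reading in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder set standing in for an industry-shared list of known-bad hashes.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_uploads(directory: str) -> list[Path]:
    """Return any files in the directory whose digest matches the shared list."""
    return [p for p in Path(directory).iterdir()
            if p.is_file() and sha256_of_file(p) in KNOWN_BAD_HASHES]
```

Portnoff's suggestion amounts to extending the same registry idea from individual files to the AI models known to be misused.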
Forget Nvidia: Buy This Magnificent Artificial Intelligence (AI) Stock Instead – The Motley Fool
Excitement over artificial intelligence (AI) has created many millionaires this year, as chip stocks like Nvidia (NVDA -0.33%) have skyrocketed 230% since Jan. 1. The company has significantly profited from increased demand for graphics processing units (GPUs), which are crucial for training AI models.
Nvidia's business exploded this year. However, it is worth looking at companies at slightly earlier stages in their AI journeys, as they could have more room to run in the coming years.
Intel (INTC 1.95%) is an exciting option, with years of experience in the chip market. The company also plans to launch a new AI GPU in 2024.
So, forget Nvidia. Here is why Intel is a magnificent AI stock to buy instead.
It hasn't been easy to be an investor in Intel over the last few years. The company was responsible for more than 80% of the central processing unit (CPU) market for at least a decade, and was the primary chip supplier for Apple's MacBook lineup for years. However, Intel's dominance saw it grow complacent, leaving it vulnerable to more innovative competitors.
As a result, Advanced Micro Devices started gradually eating away at Intel's CPU market share in 2017, with Intel's share now down to 69%. Then, in 2020, Apple cut ties with Intel in favor of far more powerful in-house hardware. Intel's stock subsequently dipped 4% over the last three years. Meanwhile, annual revenue tumbled 19%, with operating income down 90%.
However, the fall from grace has seemingly lit a fire under Intel again. According to Mercury Research, from the second quarter of 2022 to Q2 2023, Intel regained 3% of its CPU market share from AMD.
Moreover, Intel has pivoted its business to the $137 billion AI market, with plans to challenge Nvidia's dominance in 2024. The sector is projected to expand at a compound annual growth rate of 37% through 2030, which would see it rise to more than $1 trillion before the end of the decade.
As a result, even if Intel can't dethrone Nvidia, projections show there will be plenty of opportunities for Intel to snap up market share and profit significantly from the industry's development.
Earlier this month, Intel unveiled Gaudi3, a generative AI chip meant to compete directly with Nvidia's H100. The GPU will begin shipping in 2024 alongside Core Ultra and Xeon chips that include neural processing units, making them capable of running AI programs faster.
Shares in Intel have soared more than 70% in 2023, almost entirely thanks to its prospects in AI. While that is nowhere near Nvidia's stock growth in the period, it could mean Intel has more to offer new investors in the coming years.
Data by YCharts
The charts show Intel's earnings could hit nearly $3 per share over the next two fiscal years, while Nvidia's are expected to reach $24 per share. Therefore, on the surface, Nvidia might look like a no-brainer. However, multiplying these figures by the companies' forward price-to-earnings ratios yields a stock price of $130 for Intel and $939 for Nvidia.
Looking at their current positions, the figures project Intel's stock will rise 184% and Nvidia's 95% within the next two fiscal years. While both boast impressive growth, Intel is forecast to deliver far more significant gains.
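The arithmetic behind those projections is simple: multiply each company's projected earnings per share by its forward price-to-earnings multiple to get an implied price, then compare that with the current price. A quick sketch follows; the forward P/E values shown are backed out from the article's $130 and $939 targets, and the current share prices are approximate late-2023 values inferred from the stated gains rather than quoted figures.

```python
def implied_price(projected_eps: float, forward_pe: float) -> float:
    """Implied future share price: projected EPS times the forward P/E multiple."""
    return projected_eps * forward_pe

def implied_gain_pct(current_price: float, target_price: float) -> float:
    """Percentage upside from the current price to the implied price."""
    return (target_price / current_price - 1) * 100

# EPS and targets from the article; P/Es and current prices are approximations.
for name, eps, pe, current in [("Intel", 3.0, 43.3, 45.80),
                               ("Nvidia", 24.0, 39.1, 481.50)]:
    target = implied_price(eps, pe)
    print(f"{name}: implied price ~${target:.0f}, "
          f"upside ~{implied_gain_pct(current, target):.0f}%")
```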
The figures align with Nvidia's meteoric rise this year compared to Intel's more gradual expansion. Intel is just getting started in AI and could be in for a lucrative 2024. So if you're looking for an AI stock to add before the new year, Intel is a screaming buy right now instead of Nvidia.
Dani Cook has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Apple, and Nvidia. The Motley Fool recommends Intel and recommends the following options: long January 2023 $57.50 calls on Intel, long January 2025 $45 calls on Intel, and short February 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.
Generative AI: the Shortcut to Digital Modernisation – CIO
THE BOOM OF GENERATIVE AI
Digital transformation is the bleeding edge of business resilience. For years, it was underpinned by the adoption of cloud and the modernisation of the IT platform. As transformation is an ongoing process, enterprises look to innovations and cutting-edge technologies to fuel further growth and open more opportunities. Notably, organisations are now turning to Generative AI to navigate the rapidly evolving tech landscape.
Though it has emerged only recently, the potential applications of GenAI for businesses are significant and wide-ranging. Businesses are rapidly implementing AI-driven tools into their daily workflows to save valuable time. A recent McKinsey study estimated that automation integrated with Generative AI could accelerate 29.5 percent of the working hours in the US economy. Generative AI can help businesses achieve faster development in two main areas: low/no-code application development and mainframe modernisation.
As Generative AI and low-code technology increasingly merge, businesses can unlock numerous opportunities by using them in tandem.
Generative AI also plays a role in assisting organisations with the transformation and modernisation of their mainframes, which continue to be in wide use in key sectors such as retail, banking, and aviation.
Research from IBM found that 93 percent of companies still use mainframes for financial management, 73 percent for customer transaction systems, and more than 70 percent of Fortune 500 companies run business-critical applications on mainframes.
However, mainframes are a challenging prospect for transformation because the applications they run are highly complex and difficult to change. Over time, these applications become outdated, the associated cost becomes higher, and operational disruption can occur due to maintaining and updating the system.
Organisations are shifting workloads to hybrid cloud environments while modernising mainframe systems to serve the most critical applications. However, this migration process may involve data transfer vulnerabilities and potential mishandling of sensitive information and outdated programming languages. A poorly structured approach to application modernisation also potentially leads to data breaches.
Hence, organisations are turning to Generative AI to mitigate these risks, bolstering reliability and efficiency in the areas where human error might create vulnerabilities.
By leveraging AI, engineers can quickly generate the code they need for an application migration exercise, ensure its quality, and create the necessary documentation. Even after migration, AI can help generate test cases, maintain and add more features to existing legacy systems, as well as evaluate the similarity between mainframe functions and migrated functions.
Given the scarcity of experts in legacy languages like Cobol, on which many mainframe applications are built, Generative AI also provides the bridge that allows a broader range of engineers and coding experts to tackle modernisation and migration projects. It equips developers with the necessary knowledge, improving developer efficiency, rapidly resolving issues, and easily maintaining and modernising enterprise systems across industries.
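As a concrete illustration of the test-generation use case mentioned above, the sketch below assembles a prompt that pairs a legacy Cobol routine with instructions to emit unit tests for its migrated counterpart. Everything here is hypothetical: `call_llm` is a stub for whatever model endpoint a team actually uses, and the Cobol fragment is an invented example.

```python
LEGACY_SNIPPET = """
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-INTEREST.
       PROCEDURE DIVISION.
           COMPUTE NEW-BALANCE = BALANCE * (1 + RATE / 100).
"""

PROMPT_TEMPLATE = (
    "You are assisting a mainframe migration. Given this Cobol routine:\n"
    "{code}\n"
    "Write unit tests (in Java, using JUnit 5) that pin down its behavior, "
    "including rounding and boundary cases, so the migrated version can be "
    "verified against the original."
)

def call_llm(prompt: str) -> str:
    """Stub: replace with a call to the model endpoint your team uses."""
    raise NotImplementedError("wire this to an actual LLM API")

def generate_tests(cobol_source: str) -> str:
    return call_llm(PROMPT_TEMPLATE.format(code=cobol_source))

print(PROMPT_TEMPLATE.format(code=LEGACY_SNIPPET))  # inspect the assembled prompt
```

The pinning-tests-first pattern is what lets teams evaluate, as the article puts it, the similarity between mainframe functions and migrated functions.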
For instance, FPT Software has recently introduced the development of Masterful AI Assistant or Maia, a special Generative AI concept of an agent specifically assisting with highly complex processes. Its vision is to be the co-pilot and co-worker for developers and engineers, boosting productivity and making the development process more enjoyable and fulfilling.
Through its conversational interface, Maia will deliver guidance and domain know-how along with automating code documentation and co-programming. Maia is also expected to analyse the complexities of legacy systems to ensure accuracy, generate missing documents and suggest suitable modern architecture during the assessment phase, and generate test cases during the testing phase.
While the benefits of embracing AI are significant, maximising those opportunities requires extensive expertise. There are three key factors that companies need to weigh when strategically collaborating with an AI partner.
To this end, the IT service provider FPT Software is currently adopting an ecosystem and partnership approach, covering various areas from research and solutions development to responsible AI, to propel innovation and the practical application of AI.
Particularly, FPT Software, in collaboration with Mila, a Canadian research institute specialising in machine learning, has formed an AI Residency program in which resident researchers work directly with leading academics while participating in real-world projects, assisting organisations to build a suite of products backed by a strong R&D base.
Both organisations have successfully promoted Responsible AI to support sustainable growth, human development, and social progress. This agenda is further strengthened on a global scale with FPT Software joining the recently established AI Alliance, a pivotal initiative formed by leading organisations like IBM and Meta.
The IT firm also works with visionary partners to develop impactful solutions. A few highlights include its collaboration with Landing AI to develop a computer vision quality inspection solution with visual prompting, shortening labeling time from months to minutes, and its partnership with Silicon Valley's Aitomatic to expand the provision of advanced industrial AI solutions, integrating Open Source Small Specialist Agent (OpenSSA) technology.
Generative AI helps companies accelerate their digital transformation and empowers their entire workforce to engage with technology while reducing the risk of human error.
To successfully harness the power of AI, a partner-led approach is highly critical in navigating potential AI challenges. With the right partner, the results of this next wave of transformation will be remarkable.
Explore how FPT Software's AI solutions can accelerate your digital transformation.
FBI fears China is stealing AI technology to ramp up spying and steal personal information to build terrifying – Daily Mail
China is feared to be stealing artificial intelligence technology to carry out massive cyberattacks on the US and elsewhere.
The FBI is increasingly concerned about the dictatorship's frequent high-profile data thefts from American corporations and government agencies.
Sophisticated AI would allow China to boost the scale and effectiveness of what it could collect and, crucially, analyze, sources told the Wall Street Journal.
The FBI is so worried about this escalation that it and other Western intelligence agencies met with industry leaders in October to discuss the threat.
The US and China are locked in an arms race over the rapidly developing technology that has the capacity to reshape their rivalry and how wars are fought.
China's quest for dominance includes corporate espionage efforts to steal AI technology from the firms developing it.
Former Apple worker Xiaolang Zhang was arrested in July 2018 as he tried to board a flight to Beijing with stolen self-driving vehicle trade secrets.
He pleaded guilty to stealing trade secrets and will be sentenced in February.
Then last year, Applied Materials sued Chinese-owned rival Mattson Technology, claiming a defecting engineer stole trade secrets.
Rather than AI algorithms, the company makes computer chips powerful enough to run high-end AI programs.
Federal prosecutors got involved but no charges were filed, and Mattson said there was no evidence it ever used anything allegedly stolen from Applied in its products.
The FBI was in recent years more interested in thefts from firms like Applied as even if China got its hands on the latest AI programs, they would be obsolete within months.
China was linked to huge data breaches at Marriott, where millions of guest records were stolen, health insurer Elevance Health, and credit agency Equifax.
The Office of Personnel Management also had 20 million personnel files of government workers and their families stolen in 2015.
Then in 2021, tens of thousands of servers running Microsoft Exchange Server, which underpins Outlook, were hit - and experts fear previously stolen personal data was used to target the attack.
Earlier this month analysts revealed Beijing's military burrowed into more than 20 major suppliers in the last year alone, including a water utility in Hawaii, a major West Coast port and at least one oil and gas pipeline.
They bypassed elaborate cyber security systems by intercepting passwords and log-ins unguarded by junior employees, leaving China 'sitting on a stockpile of strategic' vulnerabilities.
Hackers were in August spotted trying to penetrate systems run by the Public Utility Commission of Texas and the Electric Reliability Council of Texas which provide the state's power.
Codenamed Volt Typhoon, the project has coincided with growing tension over Taiwan and could unplug US efforts to protect its interests in the South China Sea.
Communications, manufacturing, utility, transportation, construction, maritime, government, information technology, and education organizations were targeted by Volt Typhoon.
The Director of National Intelligence warned in February that China is already 'almost certainly capable' of launching cyberattacks to disable oil and gas pipelines and rail systems.
'If Beijing feared that a major conflict with the United States were imminent, it almost certainly would consider undertaking aggressive cyber operations against U.S. homeland critical infrastructure and military assets worldwide,' the annual assessment reported.
China was so good at hacking into US companies and government databases that it likely collected more data than it could process and make useful.
But AI technology, combined with its army of hackers, would allow it to comb through billions of records and extract useful information with ease.
Intelligence operatives could use data gleaned from multiple sources to build dossiers on millions of specific people.
This could include fingerprints, financial and health records, passport information, and personal contacts.
China could use them to identify and track spies and monitor the travel of government officials, and figure out who has a security clearance worth targeting.
'China can harness AI to build a dossier on virtually every American, with details ranging from their health records to credit cards and from passport numbers to the names and addresses of their parents and children,' Glenn Gerstell, a former general counsel at the National Security Agency, told the Wall Street Journal.
'Take those dossiers and add a few hundred thousand hackers working for the Chinese government, and we've got a scary potential national security threat.'
Such escalating threats from China meant developing AI technology to counter them was increasingly important.
Industry experts believed AI would be better on defense than offense, and be able to identify and counter attacks from China and elsewhere.
The Big Questions About AI in 2024 – The Atlantic
Let us be thankful for the AI industry. Its leaders may be nudging humans closer to extinction, but this year, they provided us with a gloriously messy spectacle of progress. When I say year, I mean the long year that began late last November, when OpenAI released ChatGPT and, in doing so, launched generative AI into the cultural mainstream. In the months that followed, politicians, teachers, Hollywood screenwriters, and just about everyone else tried to understand what this means for their future. Cash fire-hosed into AI companies, and their executives, now glowed up into international celebrities, fell into Succession-style infighting. The year to come could be just as tumultuous, as the technology continues to evolve and its implications become clearer. Here are five of the most important questions about AI that might be answered in 2024.
Is the corporate drama over?
OpenAI's Greg Brockman is the president of the world's most celebrated AI company and the golden-retriever boyfriend of tech executives. Since last month, when Sam Altman was fired from his position as CEO and then reinstated shortly thereafter, Brockman has appeared to play a dual role, part cheerleader, part glue guy, for the company. As of this writing, he has posted no fewer than five group selfies from the OpenAI office to show how happy and nonmutinous the staffers are. (I leave it to you to judge whether and to what degree these smiles are forced.) He described this year's holiday party as the company's best ever. He keeps saying how focused, how energized, how united everyone is. Reading his posts is like going to dinner with a couple after an infidelity has been revealed: No, seriously, we're closer than ever. Maybe it's true. The rank and file at OpenAI are an ambitious and mission-oriented lot. They were almost unanimous in calling for Altman's return (although some have since reportedly said that they felt pressured to do so). And they may have trauma-bonded during the whole ordeal. But will it last? And what does all of this drama mean for the company's approach to safety in the year ahead?
An independent review of the circumstances of Altman's ouster is ongoing, and some relationships within the company are clearly strained. Brockman has posted a picture of himself with Ilya Sutskever, OpenAI's safety-obsessed chief scientist, adorned with a heart emoji, but Altman's feelings toward the latter have been harder to read. In his post-return statement, Altman noted that the company was discussing how Sutskever, who had played a central role in Altman's ouster, can continue his work at OpenAI. (The implication: Maybe he can't.) If Sutskever is forced out of the company or otherwise stripped of his authority, that may change how OpenAI weighs danger against speed of progress.
Is OpenAI sitting on another breakthrough?
During a panel discussion just days before Altman lost his job as CEO, he told a tantalizing story about the current state of the company's AI research. A couple of weeks earlier, he had been in the room when members of his technical staff had pushed the frontier of discovery forward, he said. Altman declined to offer more details, unless you count additional metaphors, but he did mention that only four times since the company's founding had he witnessed an advance of such magnitude.
During the feverish weekend of speculation that followed Altman's firing, it was natural to wonder whether this discovery had spooked OpenAI's safety-minded board members. We do know that in the weeks preceding Altman's firing, company researchers raised concerns about a new Q* algorithm. Had the AI spontaneously figured out quantum gravity? Not exactly. According to reports, it had only solved simple mathematical problems, but it may have accomplished this by reasoning from first principles. OpenAI hasn't yet released any official information about this discovery, if it is even right to think of it as a discovery. "As you can imagine, I can't really talk about that," Altman told me recently when I asked him about Q*. Perhaps the company will have more to say, or show, in the new year.
Does Google have an ace in the hole?
When OpenAI released its large-language-model chatbot in November 2022, Google was caught flat-footed. The company had invented the transformer architecture that makes LLMs possible, but its engineers had clearly fallen behind. Bard, Google's answer to ChatGPT, was second-rate.
Many expected OpenAI's leapfrog to be temporary. Google has a war chest that is surpassed only by Apple's and Microsoft's, world-class computing infrastructure, and storehouses of potential training data. It also has DeepMind, a London-based AI lab that the company acquired in 2014. The lab developed the AIs that bested world champions at chess and Go and intuited protein-folding secrets that nature had previously concealed from scientists. Its researchers recently claimed that another AI they developed is suggesting novel solutions to long-standing problems of mathematical theory. Google had at first allowed DeepMind to operate relatively independently, but earlier this year, it merged the lab with Google Brain, its homegrown AI group. People expected big things.
Then months and months went by without Google so much as announcing a release date for its next-generation LLM, Gemini. The delays could be taken as a sign that the company's culture of innovation has stagnated. Or maybe Google's slowness is a sign of its ambition? The latter possibility seems less likely now that Gemini has finally been released and does not appear to be revolutionary. Barring a surprise breakthrough in 2024, doubts about the company, and the LLM paradigm, will continue.
Are large language models already topping out?
Some of the novelty has worn off LLM-powered software in the mold of ChatGPT. That's partly because of our own psychology. "We adapt quite quickly," OpenAI's Sutskever once told me. He asked me to think about how rapidly the field has changed. "If you go back four or five or six years, the things we are doing right now are utterly unimaginable," he said. Maybe he's right. A decade ago, many of us dreaded our every interaction with Siri, with its halting, interruptive style. Now we have bots that converse fluidly about almost any subject, and we struggle to remain impressed.
AI researchers have told us that these tools will only get smarter; they've evangelized about the raw power of scale. They've said that as we pump more data into LLMs, fresh wonders will emerge from them, unbidden. We were told to prepare to worship a new sand god, so named because its cognition would run on silicon, which is made of melted-down sand.
ChatGPT has certainly improved since it was first released. It can talk now, and analyze images. Its answers are sharper, and its user interface feels more organic. But it's not improving at a rate that suggests that it will morph into a deity. Altman has said that OpenAI has begun developing its GPT-5 model. That may not come out in 2024, but if it does, we should have a better sense of how much more intelligent language models can become.
How will AI affect the 2024 election?
Our political culture hasn't yet fully sorted AI issues into neatly polarized categories. A majority of adults profess to worry about AI's impact on their daily life, but those worries aren't coded red or blue. That's not to say the generative-AI moment has been entirely innocent of American politics. Earlier this year, executives from companies that make chatbots and image generators testified before Congress and participated in tedious White House roundtables. Many AI products are also now subject to an expansive executive order.
But we haven't had a big national election since these technologies went mainstream, much less one involving Donald Trump. Many blamed the spread of lies through social media for enabling Trump's victory in 2016, and for helping him gin up a conspiratorial insurrection following his 2020 defeat. But the tools of misinformation that were used in those elections were crude compared with those that will be available next year.
A shady campaign operative could, for instance, quickly and easily conjure a convincing picture of a rival candidate sharing a laugh with Jeffrey Epstein. If that doesn't do the trick, they could whip up images of poll workers stuffing ballot boxes on Election Night, perhaps from an angle that obscures their glitchy, six-fingered hands. There are reasons to believe that these technologies won't have a material effect on the election. Earlier this year, my colleague Charlie Warzel argued that people may be fooled by low-stakes AI images, the pope in a puffer coat, for example, but they tend to be more skeptical of highly sensitive political images. Let's hope he's right.
Soundfakes, too, could be in the mix. A politician's voice can now be cloned by AI and used to generate offensive clips. President Joe Biden and former President Trump have been public figures for so long, and voters' perceptions of them are so fixed, that they may be resistant to such an attack. But a lesser-known candidate could be vulnerable to a fake audio recording. Imagine if during Barack Obama's first run for the presidency, cloned audio of him criticizing white people in colorful language had emerged just days before the vote. Until bad actors experiment with these image and audio generators in the heat of a hotly contested election, we won't know exactly how they'll be misused, and whether their misuses will be effective. A year from now, we'll have our answer.
Apple engages in talks with leading News outlets for AI advancements: Report – Mint
In the past few weeks, Apple has initiated talks with prominent news and publishing entities, aiming to secure approval for utilizing their content in the company's advancement of generative artificial intelligence systems, as reported by the New York Times on Friday.
The California-based tech giant has proposed multiyear agreements, valued at a minimum of $50 million, to obtain licenses for the archives of news articles, as indicated by sources familiar with the negotiations, as reported in the article.
Apple has reached out to news entities such as Condé Nast, the publisher of Vogue and the New Yorker, along with NBC News and IAC, the owner of People, the Daily Beast, and Better Homes and Gardens, as reported by the New York Times.
According to the report, certain publishers approached by Apple showed a tepid response to the outreach. Meanwhile, Apple has also reportedly developed an internal service akin to ChatGPT, intended to assist employees in testing new features, summarizing text, and answering questions based on accumulated knowledge.
In July, Mark Gurman suggested that Apple was in the process of creating its own AI model, with the central focus on a new framework named Ajax. The framework has the potential to offer various capabilities, with a ChatGPT-like application, unofficially dubbed "Apple GPT," being just one of the many possibilities. Recent indications from an Apple research paper suggest that Large Language Models (LLMs) may run on Apple devices, including iPhones and iPads.
This research paper, initially discovered by VentureBeat, is titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory." It addresses a critical issue related to on-device deployment of Large Language Models (LLMs), particularly on devices with constrained DRAM capacity.
Keivan Alizadeh, a Machine Learning Engineer at Apple and the primary author of the paper, explained, "Our approach entails developing an inference cost model that aligns with the characteristics of flash memory, directing us to enhance optimization in two crucial aspects: minimizing the amount of data transferred from flash and reading data in larger, more cohesive segments."
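The point of that cost model, making fewer, larger, more contiguous reads from slow storage, can be illustrated with a toy benchmark. The sketch below is a generic demonstration of scattered versus contiguous reads from a memory-mapped file, not the paper's actual windowing or row-column bundling techniques; note that on a warm OS cache the gap will be far smaller than on real flash.

```python
import os
import tempfile
import time

import numpy as np

# A dummy weight matrix standing in for model parameters stored in flash.
rows, cols = 8192, 4096
path = os.path.join(tempfile.gettempdir(), "fake_weights.bin")
np.random.rand(rows, cols).astype(np.float32).tofile(path)

w = np.memmap(path, dtype=np.float32, mode="r", shape=(rows, cols))

# Scattered access: 1,024 small reads of individual rows in random order.
idx = np.random.permutation(rows)[:1024]
t0 = time.perf_counter()
scattered = [np.array(w[i]) for i in idx]
t1 = time.perf_counter()

# Cohesive access: one contiguous read covering the same volume of data.
block = np.array(w[:1024])
t2 = time.perf_counter()

print(f"scattered rows: {t1 - t0:.4f}s  contiguous block: {t2 - t1:.4f}s")
```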
(With inputs from Reuters)
Published: 25 Dec 2023, 11:59 AM IST