Category Archives: Ai
10 Scary AI Predictions From Movies And TV Shows – Hollywood Reporter
Artificial intelligence has gained new technological and cultural relevance in the past year, to the excitement (we assume) of some and the fear of pretty much anyone who's ever seen a sci-fi movie. Indeed, one of the major reasons for the dual writers' and actors' strike is concern that studios will use AI to replace them without fair compensation.
But since long before AI became a threat to anyone in the real world, Hollywood has been grappling with how it could help us, harm us or destroy our entire race. A common thread joining these films and TV shows is that they all explore the implications of what it means that artificial intelligence, by definition, has a mind of its own. No matter what purpose it was created for, self-aware AI (at least at the level envisioned by sci-fi writers, which, granted, is miles beyond what exists in the real world) is going to make its own unpredictable decisions, and those decisions may or may not be in the best interests of humanity.
So, while the workers of Hollywood might be worried that AI is coming for their jobs, at least they can be relieved that it's nowhere near as dangerous as the writers' wildest dreams, yet.
Here, we round up some of the most memorable AI in film and television, ranging from friendly helper robots to murderous destroyers of mankind.
First appearing on Star Trek: The Next Generation in 1987, the android Data has long been played by Brent Spiner alongside Patrick Stewart as Jean-Luc Picard. Though he was built, not born, in the likeness of his creator, Data is an officer of the U.S.S. Enterprise and is an essential member of the crew. He's able to compute with efficiency but struggles to understand human emotion and idiosyncrasies. On a never-ending quest for self-improvement, he is constantly striving to become more human, including by adopting a pet cat and eventually implanting an emotion chip. In the end, he proves capable of self-sacrifice to save his friends. He was even inducted into Carnegie Mellon's prestigious Robot Hall of Fame (yes, really).
The Abbott & Costello of a galaxy far, far away, C-3PO and R2-D2 are always helping their owner Luke Skywalker and the Rebel forces in their fight against the Empire. R2-D2 (originally portrayed by Kenny Baker) can co-pilot a small fighter ship, convey holographic messages and shoot electricity in self-defense. Meanwhile, C-3PO (Anthony Daniels), a protocol droid, provides helpful calculations such as the odds of surviving a flight through an asteroid field (whether or not Han wants to hear it) and can translate 6 million forms of communication. Just don't ask him to do anything particularly brave, unless the fearless R2-D2 is leading the way. These droids, and fellow Robot Hall of Famers, can also be sent on missions that might be dangerous for humans, such as presenting a list of demands to Jabba the Hutt. Honorable Mention: BB-8
If AI television news presenters ever become mainstream, they'll owe a lot to the original: Max Headroom. Matt Frewer starred in several incarnations, including a 1985 British TV movie (Max Headroom: 20 Minutes Into the Future) and a 1987-88 ABC series, as a computer-generated TV journalist (with help from some prosthetics and fancy film editing) whose technological nature is highlighted to comic effect with lots of stuttering glitches (it was the '80s, remember?). Created in the likeness of human journalist Edison Carter after he's almost killed, Max investigates with his three-dimensional counterpart and colleagues to uncover the truth in a dystopian future. Even if the show only ran for two seasons, it made its mark on the culture: Headroom was interviewed by David Letterman and starred in Ridley Scott-directed New Coke commercials.
Though lesser known than some of the other androids on this list, the nameless robot in 2012's Robot & Frank raises an interesting argument: Maybe artificial intelligence is neither good nor evil, but it can mirror the morality of the person using it. In the not-too-distant future, retired jewel thief Frank (Frank Langella) lives alone and has been experiencing memory problems when his son (James Marsden) buys him a medical helper robot (voiced by Peter Sarsgaard). When Frank realizes the robot doesn't have laws integrated into its core programming, he uses it to help execute a couple of high-value, white-collar heists. ("The only people who get hurt are the insurance companies," it says at one point, repeating Frank's mantra.) The robot is adamant that it doesn't have feelings about its own existence, but the one thing it does seem to care about is Frank's welfare, whether that means cooking him healthy meals or keeping him out of jail. Are they friends, or is that all just programming?
First appearing on the big screen in 2008's Iron Man as voiced by Paul Bettany, JARVIS (Just a Rather Very Intelligent System) begins its existence as a somewhat lowly disembodied AI created by Tony Stark to help run computations and act as a kind of electronic butler or smart home device, which also augments the Iron Man suit.
But the much more dangerous potential of AI is explored in Avengers: Age of Ultron (2015) when Tony's new (accidental) creation, Ultron, decides the best way to achieve world peace is the obliteration of mankind. In that film, JARVIS is nearly destroyed, but is saved by his own quick thinking with help from Tony, Bruce Banner and Thor, who give him android form as Vision. His powers grow with the addition of an Infinity Stone, and he shows how human he has become when he falls in love with the witch Wanda Maximoff.
In a future where AI robots have become ubiquitous, bound by Three Laws meant to keep them from harming humans, only Will Smith (as Chicago police detective Del Spooner) recognizes that they could, in fact, be deadly. His beef with the machines may be noble (he doesn't trust them after a robot chose to save him from drowning instead of a child, based on their odds of survival), but his personal grudge is so well known that he gets blamed when a battalion of rogue robots swarm his vehicle and cause a car accident. (At least, that's the mild description given by a robot that has just punched a hole through Del's windshield.) The lesson of this film seems to be that no matter how many safeguards you have in place, never trust AI to choose wisely when making life-and-death decisions.
In season four of the WB series, Buffy (Sarah Michelle Gellar) goes to college and discovers the morally dubious military unit The Initiative, of which her boyfriend Riley (Marc Blucas) is an agent. The clandestine group is run by Dr. Maggie Walsh (Lindsay Crouse), and it soon comes to light that she has been playing Dr. Frankenstein, building a creature known as Adam (George Hertzberg). Made from a jumble of robotic and monster parts The Initiative has gathered, the first thing Adam does is kill his creator. He has a philosophical side, too, and is interested in discovering the reason behind his existence. But he comes to the wrong conclusions, and eventually attempts to create an army of human-demon-machine hybrids like himself with dreams of forging a new, superior race.
"As soon as we started thinking for you, it really became our civilization," Agent Smith (played by Hugo Weaving) taunts a captive Morpheus in one of the most haunting monologues of The Matrix (1999). An AI super-soldier disguised as a guy in a black suit and sunglasses, Agent Smith and his ilk can dodge bullets, land punches at nearly the speed of light and take over the bodies of unsuspecting humans trapped in the Matrix's program.
While some of the evil AI on this list tries to destroy the world, the agents instead tend the garden that is the Matrix. Their goal is to keep the humans inside docile and powerless so that they can be used as a fuel source for the robots out in the real (extremely dystopian) world and, to extend the metaphor, pull weeds like Neo and his friends.
The James Cameron/Gale Anne Hurd franchise birthed a couple of the most iconic catchphrases of the '80s and '90s (Arnold Schwarzenegger recently told THR about the origins of "I'll be back"), and it explored the possibility of multiple timelines long before the MCU was conceived of as a big-screen phenomenon. Schwarzenegger stars in most of the film iterations as various incarnations of the Terminator, but even though he can rip out a street thug's heart with his bare hands (why does he need guns, anyway?), this cyborg can also be programmed to protect Sarah Connor and her savior son (see: Judgment Day, Rise of the Machines, Genisys) just as easily as kill them. The true brains of the operation is Skynet, the self-aware military tech that, in the future, comes to see humanity as a threat and launches a nuclear war.
Stanley Kubrick's 1968 masterpiece starts out innocently enough: After a prehistoric vignette that hints at the existence of alien life, a time jump occurs and a group of astronauts set off on a deep-space mission. Their ship is equipped with the latest in technology, H.A.L. 9000, which is renowned because it has never made a mistake.
On the trip, two crewmembers, Dave and Frank, suspect H.A.L. has made an error regarding a malfunctioning piece of equipment, and for the mission's safety they hatch a plan to deactivate the AI program. This leads H.A.L., which thinks the humans are the ones compromising the mission, to become murderous, killing the helpless members of the crew who have been kept in suspended animation during the long journey, and turning the ship against Dave and Frank.
After a power struggle, Dave successfully deactivates H.A.L., which regresses intellectually as it's being unplugged and endearingly sings "Daisy Bell (Bicycle Built for Two)" before it loses function entirely. That's not the end of the film, which has philosophical aspirations way beyond artificial intelligence, but it still offers the iconic cinematic warning on the subject.
Texas A&M Professor Receives NSF Grant To Study AI-Powered … – Texas A&M University Today
With the unprecedented tools now available through artificial intelligence, Dr. Zixiang Xiong will work to create new parameters for the evolving process for data compression.
Each day, an estimated 330,000 billion bytes of data are generated in various forms. This data is shared in many ways: videos, images, music, gaming, streaming content and video calls. This immense amount of data requires compression to save on storage capacity, speed up file transfers and decrease costs for storage hardware and network bandwidth.
Dr. Zixiang Xiong, professor and associate department head in the Department of Electrical and Computer Engineering at Texas A&M University, recently received a National Science Foundation grant to research the fundamental limits of learned source coding, or data compression that uses machine learning, now that new machine learning methods have permeated the scene.
The project is a culmination of over 30 years of research conducted by Xiong. Since the late 1980s, he has studied the area of data compression and has seen the evolution of the process.
In the 1990s, successfully sharing an image file required converting the file into text and back into an image. The flip side is now possible: machine learning generative models such as ChatGPT create new content and images based on text that is input into the model. With the unprecedented tools now available through artificial intelligence, Xiong will work to create new parameters for the evolving process.
"We always ask ourselves before we begin any engineering project, 'What's the theoretical limit?'" said Xiong. "That's very fundamental now because AI is completely different. There's no current theory because we don't know the theoretical limit."
This project aims to understand what types of machine learning algorithms can compress data well and how many samples are needed to learn compression well. While gaining a fundamental understanding of data compression that utilizes machine learning, Xiong hopes to develop more powerful compression methods, leading to more efficient use of wireless communication and less energy consumption by mobile devices.
Traditional compression methods include the well-known JPEG compression for smartphone images; this is a lossy compression method, which means that some image quality is lost. Lossless compression, meaning no quality is lost, is typically used for compressing computer files, such as with Zip, and for music streaming. This project aims to develop boundaries for the performance of machine learning for both compression methods.
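To make the lossless case above concrete, here is a minimal sketch using Python's standard-library zlib module (the same DEFLATE algorithm behind Zip). The input data and the printed ratio are purely illustrative, not figures from the research described in this article.

```python
import zlib

# A repetitive byte string compresses well losslessly (illustrative data only).
original = b"sensor reading 42, sensor reading 42, " * 200

compressed = zlib.compress(original, level=9)   # lossless, DEFLATE as used by Zip
restored = zlib.decompress(compressed)

assert restored == original                      # nothing is lost on the round trip
print(f"original: {len(original)} bytes, compressed: {len(compressed)} bytes")
print(f"compression ratio: {len(original) / len(compressed):.1f}x")
```

A lossy codec such as JPEG instead discards detail the eye is unlikely to miss, which is why it can shrink photos far more aggressively but can never reproduce the original bytes exactly.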
In 2020, Xiong worked on a project titled "Deep Learning-based Scalable Compression Framework" with Amazon Prime Video, which was preliminary work that led to this new project.
Collaborators for this project include Dr. Anders Høst-Madsen and Dr. Narayana Santhanam, both professors at the University of Hawaii at Mānoa.
AI isn't great at decoding human emotions. So why are regulators targeting the tech? – MIT Technology Review
Recently, I took myself to one of my favorite places in New York City, the public library, to look at some of the hundreds of original letters, writings, and musings of Charles Darwin. The famous English scientist loved to write, and his curiosity and skill at observation come alive on the pages.
In addition to proposing the theory of evolution, Darwin studied the expressions and emotions of people and animals. He debated in his writing just how scientific, universal, and predictable emotions actually are, and he sketched characters with exaggerated expressions, which the library had on display.
The subject rang a bell for me.
Lately, as everyone has been up in arms about ChatGPT, AI general intelligence, and the prospect of robots taking people's jobs, I've noticed that regulators have been ramping up warnings against AI and emotion recognition.
Emotion recognition, in this far-from-Darwin context, is the attempt to identify a person's feelings or state of mind using AI analysis of video, facial images, or audio recordings.
The idea isn't super complicated: the AI model may see an open mouth, squinted eyes, and contracted cheeks with a thrown-back head, for instance, and register it as a laugh, concluding that the subject is happy.
But in practice, this is incredibly complex, and, some argue, a dangerous and invasive example of the sort of pseudoscience that artificial intelligence often produces.
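To make the basic idea tangible, here is a deliberately oversimplified, rule-based sketch in Python of the cue-to-label mapping described above. The feature names and thresholds are invented for illustration; no real emotion-recognition product works this crudely, which is rather the point critics make.

```python
# Hypothetical facial cues, e.g. from a face-landmark detector (all values invented).
def guess_emotion(cues: dict) -> str:
    """Map crude facial-action cues to an emotion label (toy heuristic only)."""
    if cues.get("mouth_open", 0) > 0.6 and cues.get("cheeks_raised", 0) > 0.5:
        return "happy"          # open mouth plus raised cheeks read as a laugh
    if cues.get("brows_lowered", 0) > 0.7:
        return "angry"
    if cues.get("brows_raised", 0) > 0.7 and cues.get("mouth_open", 0) > 0.5:
        return "surprised"
    return "neutral"            # the honest default when cues are ambiguous

print(guess_emotion({"mouth_open": 0.8, "cheeks_raised": 0.7}))  # -> "happy"
```

Real systems swap the hand-written rules for a trained classifier, but the contested leap is the same: from observable pixels or audio to a named inner state.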
Certain privacy and human rights advocates, such as European Digital Rights and Access Now, are calling for a blanket ban on emotion recognition. And while the version of the EU AI Act that was approved by the European Parliament in June isn't a total ban, it bars the use of emotion recognition in policing, border management, workplaces, and schools.
Meanwhile, some US legislators have called out this particular field, and it appears to be a likely contender in any eventual AI regulation. Senator Ron Wyden, who is one of the lawmakers leading the regulatory push, recently praised the EU for tackling it and warned, "Your facial expressions, eye movements, tone of voice, and the way you walk are terrible ways to judge who you are or what you'll do in the future. Yet millions and millions of dollars are being funneled into developing emotion-detection AI based on bunk science."
But why is this a top concern? How well founded are fears about emotion recognition, and could strict regulation here actually hurt positive innovation?
A handful of companies are already selling this technology for a wide variety of uses, though it's not yet widely deployed. Affectiva, for one, has been exploring how AI that analyzes people's facial expressions might be used to determine whether a car driver is tired and to evaluate how people are reacting to a movie trailer. Others, like HireVue, have sold emotion recognition as a way to screen for the most promising job candidates (a practice that has been met with heavy criticism; you can listen to our investigative audio series on the company here).
"I'm generally in favor of allowing the private sector to develop this technology. There are important applications, such as enabling people who are blind or have low vision to better understand the emotions of people around them," Daniel Castro, vice president of the Information Technology and Innovation Foundation, a DC-based think tank, told me in an email.
But other applications of the tech are more alarming. Several companies are selling software to law enforcement that tries to ascertain if someone is lying or that can flag supposedly suspicious behavior.
A pilot project called iBorderCtrl, sponsored by the European Union, offers a version of emotion recognition as part of its technology stack that manages border crossings. According to its website, the Automatic Deception Detection System quantifies the probability of deceit in interviews by analyzing interviewees' non-verbal micro-gestures (though it acknowledges scientific controversy around its efficacy).
But the most high-profile use (or abuse, in this case) of emotion recognition tech is from China, and this is undoubtedly on legislators' radars.
The country has repeatedly used emotion AI for surveillance, notably to monitor Uyghurs in Xinjiang, according to a software engineer who claimed to have installed the systems in police stations. Emotion recognition was intended to identify a nervous or anxious state of mind, like a lie detector. As one human rights advocate warned the BBC, "It's people who are in highly coercive circumstances, under enormous pressure, being understandably nervous, and that's taken as an indication of guilt." Some schools in the country have also used the tech on students to measure comprehension and performance.
Ella Jakubowska, a senior policy advisor at the Brussels-based organization European Digital Rights, tells me she has yet to hear of any credible use case for emotion recognition: "Both [facial recognition and emotion recognition] are about social control; about who watches and who gets watched; about where we see a concentration of power."
What's more, there's evidence that emotion recognition models just can't be accurate. Emotions are complicated, and even human beings are often quite poor at identifying them in others. Even as the technology has improved in recent years, thanks to the availability of more and better data as well as increased computing power, the accuracy varies widely depending on what outcomes the system is aiming for and how good the data going into it is.
"The technology is not perfect, although that probably has less to do with the limits of computer vision and more to do with the fact that human emotions are complex, vary based on culture and context, and are imprecise," Castro told me.
Which brings me back to Darwin. A fundamental tension in this field is whether science can ever determine emotions. We might see advances in affective computing as the underlying science of emotion continues to progress, or we might not.
It's a bit of a parable for this broader moment in AI. The technology is in a period of extreme hype, and the idea that artificial intelligence can make the world significantly more knowable and predictable can be appealing. That said, as AI expert Meredith Broussard has asked, can everything be distilled into a math problem?
A new study from researchers in Switzerland finds that news is highly valuable to Google Search and accounts for the majority of its revenue. The findings offer some optimism about the economics of news and publishing, especially if you, like me, care deeply about the future of journalism. Courtney Radsch wrote about the study in one of my favorite publications, Tech Policy Press. (On a related note, you should also read this sharp piece on how to fix local news from Steven Waldman in the Atlantic.)
Chances are you haven't used A.I. to plan a vacation. That's about to change – CNBC
Travelers are still skeptical about AI, but most major travel companies aren't.
According to a global survey of more than 5,700 travelers commissioned by Expedia Group, the average traveler spends more than five hours researching a trip and reviews 141 pages of content; for Americans, it's a whopping 277 pages.
And that's just in the final 45 days before departing.
Enter generative artificial intelligence, a technology set to simplify that process and allow companies to better tailor recommendations to travelers' specific interests.
What could that look like? The hope is that AI will not only plan itineraries, but communicate with hotels, draft travel budgets and even function as a personal travel assistant, and in the process fundamentally alter the way companies approach travelers.
A typical home search on Airbnb, for example, produces results that don't take past searches into account. You may have a decade of booking upscale, contemporary homes under your belt, but you'll likely still be offered rustic, salt-of-the-earth rentals if they match the filters you've set.
But that could soon change.
During an earnings call in May, CEO Brian Chesky discussed how AI could alter Airbnb's approach. He said: "Instead of asking you questions like: 'Where are you going, and when are you going?' I want us to build a robust profile about you, learn more about you and ask you two bigger and more fundamental questions: Who are you, and what do you want?"
While AI that provides the ever-elusive goal of "personalization at scale" isn't here yet, it's the ability to search massive amounts of data, respond to questions asked using natural language and "remember" past questions to build on a conversation the way humans do that has the travel industry (and many others) sold.
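A minimal sketch of that "remember past questions" idea: keep the running conversation in a list and hand the whole history back to the model on every turn, so the second request can build on the first. The generate_reply function below is only a stand-in for whatever hosted LLM a travel company would actually call.

```python
# Toy conversational loop that "remembers" earlier turns by resending the history.
conversation = []  # list of (role, text) tuples

def generate_reply(history):
    """Placeholder for a real LLM call; here it just reports how much context it got."""
    return f"(model reply based on {len(history)} prior turns)"

def ask(user_text: str) -> str:
    conversation.append(("user", user_text))
    reply = generate_reply(conversation)   # the full history rides along each time
    conversation.append(("assistant", reply))
    return reply

print(ask("Find me a quiet beach town in Portugal for October."))
print(ask("Same trip, but keep hotels under $150 a night."))  # builds on turn one
```

The design point is simply that context accumulates across turns, which is what lets a traveler refine a plan conversationally instead of starting each search from scratch.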
In a survey conducted in April by the market research firm National Research Group, 61% of respondents said they're open to using conversational AI to plan trips but only 6% said they actually had.
Furthermore, more than half of respondents (51%) said that they didn't trust the tech to protect their personal data, while 33% said they feared it may provide inaccurate results.
Yet while travelers are still debating the safety and merits of using AI for trip planning, many major travel companies are already diving headfirst into the technology.
Just look at the names on this list.
Then the summer of 2023 saw a burst of AI travel tech announcements.
In June:
HomeToGo's new "AI Mode" allows travelers to find vacation rental homes using natural language requests.
Source: HomeToGo
In July:
Now, more travel companies have ChatGPT plugins, including GetYourGuide, Klook, Turo and Etihad Airways. And a slew of AI-powered trip planners, from Roam Around (for general travel) to AdventureGenie (for recreational vehicles) and Curiosio (for road trips), added more options to the growing AI travel planning market.
Travel planning is the most visible use of AI in the travel industry right now, but companies are already planning new features.
Trip.com's Senior Product Director Amy Wei said the company is considering developing a virtual travel guide for its latest AI product, TripGenie.
"It can help provide information, such as an introduction to historical buildings and objects in a museum," she told CNBC. "The vision is to create a digital travel companion that can understand and converse with the traveler and provide assistance at every step of the journey."
The travel news site Skift points out AI may be used to predict flight delays and help travel companies respond to negative online reviews.
The company estimates chatbots could bring $1.9 billion in value to the travel industry by allowing companies to operate with leaner customer service staff, freeing up time for humans to focus on complex issues. Chatbots needn't be hired or trained, can speak multiple languages, and "have no learning curve," as Skift points out in a report titled "Generative AI's Impact on Travel."
Overall, Skift's report predicts generative AI could be a $28.5 billion opportunity for the travel industry, an estimate that if the tools are used to "their full potential ... will look conservative in hindsight."
Achieving the Singularity is ‘All About Progress’: AI Executive – Decrypt
As artificial intelligence continues its rapid advance, one word has computer scientists and science fiction fans waiting with bated breath: singularity. The word defines a pivotal moment in the future where technological growth becomes uncontrollable and irreversible, and disrupts civilization.
Whether that moment is tantalizing or terrifying, one firm working to bring it about is the aptly named AI and blockchain developer SingularityNET.
"Our vision is to drive towards a positive, beneficial, benevolent singularity for the benefit of all humankind," SingularityNET COO Janet Adams told Decrypt in an interview.
Founded in 2017 by Ben Goertzel and David Hanson, SingularityNET is a decentralized marketplace for artificial intelligence programs. The company says it wants to make advanced AI available to everyone through blockchain technology. Hanson holds a Ph.D. in Interactive Arts and Engineering from The University of Texas and a Master of Science in Applied Neuroscience from King's College London, while Goertzel earned his Ph.D. in Mathematics from Temple University.
A major step towards singularity is bridging the gap between artificial intelligence and robotics, Adams explained, which is another focus of the company.
In computer science, singularity is achieved when artificial intelligence surpasses human intelligence, resulting in rapid, unpredictable technological advancements and societal changes. Why, Decrypt asked, would anyone want to create a robot or entity that could one day outsmart humans?
The answer, according to Adams, is progress.
"Progress just happens all by itself," Adams said. "Technological progress is a forward way: artificial intelligence and the programming of statistics into computer programs, it's been happening for decades."
While many in the fields of science and science fiction have helped develop the idea of the singularity, the term was coined by Hungarian-American mathematician John von Neumann in the late 1950s. In his book, The Singularity Is Near, computer scientist, author, and futurist Ray Kurzweil predicted singularity would occur by 2045.
Adams says we are running ahead of schedule.
"We acknowledge that there are a number of research breakthroughs to happen before we get to human-level AGI (artificial general intelligence)," she said. "But we have built the technology stack for that AGI, and they could even emerge sooner than three to seven years."
While AI and AGI may sound similar, they are years apart in scope. AI (Artificial Intelligence) is like a calculator that's good at a specific task. AGI (Artificial General Intelligence), on the other hand, is like a human brain that can learn and perform any intellectual task that a human can.
In 2021, SingularityNET co-founder and CEO of Hanson Robotics David Hanson released Sophia, a robot that, in collaboration with artist Andrea Bonaceto, launched a series of AI and neural network-powered NFT artwork on Nifty Gateway. That same year, SingularityNET launched the Sophia DAO, a decentralized autonomous organization dedicated to Sophia's growth, well-being, and development.
SingularityNET's latest AI project is an AI diva named Desdemona, or Desi, created during the COVID pandemic. The plan for Desdemona, Adams said, includes becoming an AI popstar, celebrity, and influencer.
Adams said people form strong connections with humanoid robots, like Desdemona and Sophia, because of their highly expressive faces.
"Desdemona has 36 motors in her face, and they can move in any emotion you can think of, and more emotions than you can think of," Adams said. "She can perceive and mirror human emotions using facial recognition, voice tone, and word analysis."
Adams said that because of its rich suite of inputs, Desdemona can understand how a person is feeling and respond appropriately, for example, dropping her tone of voice to match that of the person to whom she is speaking.
While SingularityNET is optimistic about human/robot relations, including for young people, psychologists and experts are sounding the alarm about what this bonding could mean, especially for children.
Last week, the Center on Countering Digital Hate released a report titled "AI and Eating Disorders" that accused AI chatbots like OpenAI's ChatGPT and Google Bard of promoting eating disorders and unhealthy, unrealistic body images, and of not doing enough to safeguard users.
Other AI-focused Web3 projects include The Graph, Fetch.AI, Numeraire, and Ocean Protocol. These projects and their associated tokens received substantial attention following the launch of OpenAI's GPT-4 in March, with the prices of their respective tokens hitting double digits.
"What we live by and breathe by at SingularityNET is that every algorithm we develop and every action we take across our decentralized community is for good," Adams said.
She asserted that decentralizing the development of AI technology is a crucial step in creating artificial intelligence that benefits all humanity and not a small group of developers.
"We're really pushing the boundary with our decentralization program," Adams said. "We're looking to outsource our decisions, the oversight of our AI, to a great decentralized group globally."
Cybersecurity is essential in safely developing these models. Adams said SingularityNET has put considerable effort into protecting user privacy and data. She pointed to blockchain technology as a means toward ensuring privacy, so that data is used with permission and users benefit from allowing companies to use their data.
In order for AI to develop responsibly, Adams said, it has to be programmed, overseen, regulated, and developed by a wide range of people to ensure the best outcome.
"Humans will progress," she said. "The way, from our perspective, is to massively reduce human suffering and inequality and transform our existence on the planet: eradicate diseases, resolve supply chain problems, find all new fixes and solutions for global warming."
"It's the upside: the utopic upside of artificial intelligence is almost unimaginable," she concluded.
2 AI Stocks That Could Help You Build Generational Wealth – The Motley Fool
Generational wealth is a common objective of stock investors. With the market's ability to generate long-term returns, it's an excellent place to preserve and grow wealth that can eventually be passed down to the next generation.
Generating significant long-term returns may have become a bit easier in the past year with the rise of artificial intelligence (AI) and its potential to grow businesses. The benefit of AI-driven applications is spreading into many diverse industries and exciting investors about the possibilities it can create. As a result, many AI-related stocks saw their prices rise significantly, especially when news came out about advances made possible by OpenAI's ChatGPT.
Two AI-related stocks that got fresh attention are Alphabet (GOOGL, GOOG) and Broadcom (AVGO). Both of these companies have positioned themselves to drive wealth creation through the AI initiatives they are associated with. Let's take a closer look at what these two AI stocks are doing to build generational wealth for their investors.
Alphabet is a quintessential AI stock. Since declaring itself an "AI-first" company in 2016, it has integrated the technology into products ranging from YouTube to the cameras in its Pixel phone. The most profound AI-related efforts may come from Google DeepMind, the merger of Google Research and the AI research company DeepMind. Their efforts enhanced Google's search engine through its Search Generative Experience (SGE). It has also capitalized on the technology by developing and improving Bard, Alphabet's alternative to ChatGPT.
Additionally, Alphabet uses AI to optimize ads. Although Alphabet has worked to diversify its revenue sources, advertising accounted for 78% of the company's revenue in Q2. That means AI technology is driving a critical part of Alphabet's business.
So far this year, Alphabet has generated $144 billion in revenue, 5% more than the same period last year. And even though it significantly increased research and development spending, it grew net income over that timeframe by 3% to $33 billion.
Admittedly, the stock is not cheap at a 27 P/E ratio, especially with this year's growth. But with revenue rising 41% in 2021 and 10% in 2022, growth could return as AI boosts its advertising and cloud products. Finally, with $118 billion in liquidity and $39 billion in free cash flow generated so far this year, Alphabet can not only preserve generational wealth, but also grow it as conditions improve.
Broadcom's potential to thrive thanks to AI comes from how the technology is being used in both its semiconductor solutions and infrastructure software segments. Its chip segment works closely with clients to develop specialized semiconductors for their needs. This segment recently released Jericho3-AI, an accelerator chip that runs massive machine learning (ML) workloads. The company claims it will balance workloads and operate congestion-free as it enables high-performance AI.
Its infrastructure software segment also offers its AIOps solution. This applies automation and data science to deliver actionable insights powered by AI and ML. Additionally, Broadcom's AI should experience a considerable boost when the company completes its takeover of VMware at the end of October 2023. VMware provides cloud computing and virtualization software, positioning it to support the workloads that power AI and ML.
Even without VMware in the fold, Broadcom generated $18 billion in revenue in the first half of 2023, rising 12% from year-ago levels. With the company reducing the cost of revenue and keeping expense growth in check, the net income for the first six months of 2023 of $7.3 billion surged higher by 43%.
Broadcom's power to build generational wealth also comes from its dividend. The payout of $18.40 per year works out to a dividend yield of 2.2%, well above the S&P 500 average of 1.5%. Moreover, that payout cost Broadcom $3.8 billion so far this year. But with Broadcom generating $8.3 billion in free cash flow this year, it should be able to cover the payout costs and continue to raise the payout, which has risen at least once yearly since 2010.
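For readers who want to check the arithmetic, the quoted yield implies a share price in the mid-$800s. The quick calculation below uses only figures mentioned in this article; the derived share price is an inference, not a stated fact.

```python
annual_dividend = 18.40          # Broadcom payout per share, per the article
dividend_yield = 0.022           # the 2.2% yield quoted above

implied_share_price = annual_dividend / dividend_yield
print(f"implied share price: ${implied_share_price:,.0f}")   # roughly $836

# Coverage check: free cash flow vs. dividend cost so far this year (USD billions)
free_cash_flow = 8.3
dividends_paid = 3.8
print(f"FCF left after dividends: ${free_cash_flow - dividends_paid:.1f}B")
```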
Indeed, new investors will have to pay about 26 times its earnings to benefit from that income stream. But with this tech stock trading up 50% so far in 2023 and nearly 2,200% over the last 10 years, Broadcom has proven its ability to generate rising income and long-term wealth for its shareholders.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Will Healy has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet. The Motley Fool recommends Broadcom and VMware. The Motley Fool has a disclosure policy.
Humane will share more about its mysterious Ai Pin the same day … – The Verge
Humane, a startup founded by ex-Apple employees, plans to share more about its mysterious AI-powered wearable on the same day as a solar eclipse in October, co-founder Imran Chaudhri said in a video on the company's Discord (via Inverse). The solar eclipse is set to happen on October 14th.
The device, officially called the Humane Ai Pin (in the Discord video, Chaudhri pronounces that middle word like you would say the word AI), is being promoted as something that can replace your smartphone. In a wild demo at this year's TED conference, Chaudhri uses the device, which is somehow attached to his jacket at chest height, to perform a variety of tasks.
"There's an incredible celestial event that's happening in October: an eclipse," Chaudhri said in the Discord video. "An eclipse is an important symbol for us. It's a new beginning spiritually, that's what it means. It's something that the whole world notices and comes together. We are certainly looking forward to being able to have a special moment on that day."
"We can't wait, all of us, to be able to walk down the street and see people using what we've built," co-founder Bethany Bongiorno said.
Forget SEO: Why ‘AI Engine Optimization’ may be the future – VentureBeat
According to founder, investor and longtime industry analyst Jeremiah Owyang, Bill Gates' vision of a personal AI is coming.
That future is one that will disrupt SEO and e-commerce and require marketers and creators to move beyond optimizing traditional search engines to optimizing AI, he told VentureBeat in an interview. And it means planning for disruption and developing new strategies now.
"The advertising model as we know it, getting people to go to your website and view it, that's going to break. I don't see how that sustains," he said.
AI agents and foundational models, instead, will capture the ad dollars as advertisers pay to get their messages included in generated responses.
"For example, we may see sponsored sentences in an AI emerge, or ads next to generated content," he said. Marketers and creators, he explained, have to think about how to be discovered beyond the search engine, within the AI itself.
Last week, OpenAI launched its web crawler to fetch real-time info from the web. But soon, web crawling may not be efficient enough as more and more consumers stay in GPT tools to get information rather than going to marketing or news sites, Owyang predicts.
"The data schemas are too varied," he said.
So how will chatbots get their data, and what does that mean for businesses that want customers to find them online?
As consumers increasingly use automated tools to go through the marketing funnel, marketers and creators need to consider something that many might think is counterintuitive: you actually want (no, need) LLMs to train on your data.
"If I was a journalist, I would want my articles ingested by all of the LLMs," he explained, adding that more and more chatbots are including citations, including Bing, You.com and Perplexity. "So when people search for that information, I show up first. It's the same as SEO strategy," he said, cautioning that this would not apply to gated content, which employs a different business model.
Sound strange? Well, keep in mind that marketers have been disrupted again and again over the past couple of decades, said Owyang. Since the advent of Google search, for example, they have worked to influence the influencers in order to boost SEO, including journalists, financial analysts, industry analysts, and media and government relations. Over the past 10 years, they've added content creators and other influencers to that mix.
Now, Owyang explained, AI is another influencer marketers will have to cater to, by feeding them information.
"That means you may need to create a special API that can be adjusted by the foundational models," he said, adding that he could see companies reducing the central nature of their websites and instead offering an API. "We may find that the most efficient way to influence an autonomous agent is to build an autonomous agent."
In an interview in May during a Goldman Sachs and SV Angel event on AI, Bill Gates said the first company to develop a personal agent to disrupt SEO would have a leg up on competitors.
According to Owyang, that's why Gates, along with Nvidia, Microsoft, Reid Hoffman and Eric Schmidt, invested in Inflection AI as part of an eye-popping $1.3 billion funding round in June.
In May, the company launched Pi, which stands for "personal intelligence" and was meant to be empathetic, useful and safe; that is, acting more personally and colloquially than OpenAI's GPT-4, Microsoft's Bing or Google's Bard, while not veering into the super-creepy.
During a panel at the Bloomberg Technology Summit, Inflection co-founder Reid Hoffman said that the Pi chatbot takes a more personal, emotional approach compared with ChatGPT. "IQ is not the only thing that matters here," he said. "EQ matters as well."
In June, Inflection also announced that it would release a new large language model (LLM) to power Pi, called Inflection-1, which it said outperforms OpenAI's GPT-3.5.
Owyang says he imagines a future where every brand has an autonomous agent that will interact with the buyer side agents.
"My agents talk to your agent and negotiate which car that I want, which clothes that I want, which restaurants to eat at and even choose the cuisine, perhaps the menu, with my dietary needs within budget," he explained. "That's the future."
Of course, a chatbot like Pi is still far away from the kind of personal AI agent Bill Gates and Owyang are imagining. And full disclosure: Owyang says he is planning an investment in Inflection AI.
But even now, AI chatbots are already offering recommendations, and Owyang said it is becoming clear that if marketers, publishers and creators want to succeed (at least the ones that depend on SEO), they will need to start catering to the wants and needs of AI agents; that is, through AI Engine Optimization.
Unlike SEO, AI Engine Optimization is not about waiting for a crawler to come to a website. Now, marketers will likely want two things, he explained. One is to create an API that feeds in real-time information to foundational models.
"That standard API protocol hasn't really emerged yet for how that can be done, which is why OpenAI's API is just crawling, for example," he said. But eventually, he predicted, users will ask OpenAI questions before they'll ask Google Search, "so you need that real-time feed."
Secondly, marketers will want to take the same corporate API with all of its product information and use it to train their own branded AI that would interact with consumers and buyers, whether that's on a website or an app. That branded AI would also interact with the buyer-side agents that are starting to emerge, he said.
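As Owyang notes, no standard protocol for such a feed has emerged, so the snippet below is only a guess at what a machine-readable product endpoint might look like: a small JSON document served over HTTP with Python's standard library, which a crawler or agent could ingest. Every field name, product and URL here is hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical product catalog an AI agent could ingest (illustrative fields only).
CATALOG = {
    "brand": "ExampleOutfitters",
    "updated": "2023-08-14T00:00:00Z",
    "products": [
        {"sku": "JKT-001", "name": "Packable rain jacket", "price_usd": 89.0, "in_stock": True},
        {"sku": "BAG-014", "name": "Carry-on backpack", "price_usd": 129.0, "in_stock": False},
    ],
}

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the catalog as JSON so a foundational model or agent can pull it in real time.
        body = json.dumps(CATALOG).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # An agent or crawler would fetch http://localhost:8000/ and parse the JSON.
    HTTPServer(("localhost", 8000), FeedHandler).serve_forever()
```

The point of the sketch is the shape of the idea: structured, current data exposed for machines rather than pages formatted for people.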
Owyang said that at a recent AI conference in Las Vegas, 2,000 corporate and government leaders were in the room. They all, he insisted, are moving very quickly to explore the possibilities when it comes to building their own LLMs that could, in the future, interact with customers and their AI agents.
The future, he predicted, will go beyond BloombergGPT and Einstein GPT; soon, Walmart or Macy's could have its own LLM, or even the New York Times.
"Many of these companies are getting ready," he said.
The bottom line, Owyang said bluntly in a recent blog post, is this: "As we stand on the brink of this seismic shift, the call to action for marketers is clear: We must ready ourselves to not only influence human decision-making but also shape AI behaviors."
Artificial intelligence (AI) in the healthcare sector market to Grow by USD 11,827.99 million from 2022 to 2027: The growing demand for reduced…
NEW YORK, Aug. 14, 2023 /PRNewswire/ -- The artificial intelligence (AI) in the healthcare market size is estimated to grow at a CAGR of 23.5% between 2022 and 2027. The market size is forecast to increase by USD 11,827.99 million. Discover some insights on market size for the historic period (2017 to 2021) and forecast (2023 to 2027) before buying the full report - Request a sample report
Technavio has announced its latest market research report titled Global Artificial Intelligence (AI) in Healthcare Sector Market 2023-2027
Artificial Intelligence (AI) in the Healthcare Sector Market Company Analysis: Company Landscape - The global artificial intelligence (AI) in the healthcare sector market is fragmented, with the presence of several global as well as regional companies. A few prominent companies that offer artificial intelligence (AI) in the healthcare sector in the market are Ada Health GmbH, Alphabet Inc., Amazon.com Inc., Atomwise Inc., BenchSci Analytics Inc., CarePredict Inc., Catalia Health, Cyclica, Deep Genomics Inc., Entelai, Exscientia PLC, General Electric Co., Intel Corp., International Business Machines Corp., Koninklijke Philips NV, MaxQ AI, Medtronic Plc, Microsoft Corp., NVIDIA Corp., and Siemens Healthineers AG, among others.
What's New? -
Special coverage on the Russia-Ukraine war; global inflation; recovery analysis from COVID-19; supply chain disruptions, global trade tensions; and risk of recession
Global competitiveness and key competitor positions
Market presence across multiple geographical footprints - Strong/Active/Niche/Trivial
Company Offerings -
Ada Health GmbH: The company offers artificial intelligence in the healthcare sector such as Ada.
Alphabet Inc.: The company offers artificial intelligence in the healthcare sector such as Google Health.
Amazon.com Inc.: The company offers artificial intelligence in the healthcare sector such as Amazon HealthLake.
Artificial Intelligence (AI) In The Healthcare Sector Market - Segmentation Assessment
Segment Overview
This artificial intelligence (AI) in healthcare market report extensively covers market segmentation by application (medical imaging and diagnostics, drug discovery, virtual assistants, operations management, and others), component (software, hardware, and services), and geography (North America, Europe, APAC, South America, and Middle East and Africa).
The market share growth by the medical imaging and diagnostics segment will be significant during the forecast period. Medical imaging is the creation of a visual representation of the body or the functioning of organs or tissues for the purpose of clinical analysis and medical diagnosis. Medical imaging includes X-rays, CT scans, and magnetic resonance imaging. Managing high-resolution imaging data for treatment and diagnosis is a challenge, even for large healthcare facilities and experienced clinicians. In addition, the increasing use of medical imaging data and technological advancements such as AI in healthcare are contributing to the adoption of medical imaging in healthcare practice. Hence, such factors will increase segment growth during the forecast period.
Geography Overview
North America is estimated to contribute 38% to the growth of the global market during the forecast period. The early adoption of the technology and the growing investment from market players such as Microsoft, Google, and IBM are indicative of the growing demand for AI in the region. The US is one of the top countries in the world in terms of the number of AI patents filed. The US and Canada together hold nearly 26% of all AI patent applications worldwide. IBM holds the majority of AI-related patents, followed by Microsoft and Google. Thus, such factors will drive the growth of the market in this region during the forecast period.
Artificial Intelligence (AI) in the Healthcare Sector Market - Market Dynamics
Leading Driver -
The growing demand for reduced healthcare costs is notably driving AI in healthcare market growth.
Optimizing the activities and resources of healthcare providers significantly reduces costs and increases efficiency. The experience of patients and healthcare professionals is improved through affordable, quality treatment and care.
AI can reduce traditional medical costs and improve treatment while allowing patients to meet their own healthcare needs through the use of virtual assistants such as doctors or chatbots that reduce considerable human labor.
Therefore, the demand to minimize healthcare costs will drive the AI market in the healthcare sector during the forecast period.
Key Trend -
The development of precision medicine is a key trend in the AI in healthcare market.
AI uses DL algorithms to process large data sets to understand human genes and identify biological factors that cause disease.
Drug development companies are increasingly using artificial intelligence to accelerate drug discovery in the healthcare sector.
Researchers and scientists are using AI to personalize disease prevention and treatment strategies by analyzing large sets of genetic databases.
In the area of precision medicine, AI is being used by healthcare providers and researchers as well as others, such as medicinal product developers and technology companies, which will boost the growth of the market during the forecast period.
Major Challenge -
Regulatory challenges to promote the safety and effectiveness of products are challenging AI in healthcare market growth.
AI is recognized as a complex term with more solutions, such as DL, neural networks, and different approaches to setting up each technology. Regulatory standards for software as medical devices (SAMD) have been developed over the past few years.
AI technologies must comply with regulations and data protection requirements in order to be adopted by providers and gain patients' trust. Regulatory compliance helps healthcare professionals to reduce the impact of bias and increase transparency.
Therefore, these regulatory challenges can reduce the adoption of AI in the healthcare sector, which will impede the growth of the AI market in the healthcare sector during the forecast period.
What are the key data covered in this Artificial Intelligence (AI) In Healthcare Sector Market report?
CAGR of the market during the forecast period
Detailed information on factors that will drive the growth of artificial intelligence (AI) in the healthcare sector market between 2023 and 2027
Precise estimation of the artificial intelligence (AI) in the healthcare sector market size and its contribution to the parent market
Accurate predictions about upcoming trends and changes in consumer behavior
Growth of artificial intelligence (AI) in the healthcare sector market across North America, Europe, APAC, South America, and Middle East and Africa
A thorough analysis of the market's competitive landscape and detailed information about companies
Comprehensive analysis of factors that will challenge the growth of artificial intelligence (AI) in the healthcare sector market companies
Related Reports:
The Artificial Intelligence (AI) in Asset Management Market size is estimated to grow at a CAGR of 37.88% between 2022 and 2027. The market size is forecast to increase by USD 10,373.18 million. Furthermore, this Artificial Intelligence (AI) in Asset Management Market report extensively covers market segmentation by deployment (on-premises and cloud), industry application (BFSI, retail and e-commerce, healthcare, energy and utilities, and others), and geography (North America, Europe, APAC, Middle East and Africa, and South America). The rapid adoption of artificial intelligence in asset management and the growing importance of asset tracking are notably driving the market growth during the forecast period.
The Generative AI Market size is estimated to grow at a CAGR of 32.65% between 2022 and 2027. The market size is forecast to increase by USD 34,695.37 million. Furthermore, this generative AI market report extensively covers market segmentation by component (software and services), technology (transformers, generative adversarial networks (GANs), variational autoencoder (VAE), and diffusion networks), and geography (North America, APAC, Europe, South America, and Middle East and Africa).
Artificial Intelligence (AI) In Healthcare Sector Market Scope
Report Coverage: Details
Historic period: 2017-2021
Forecast period: 2023-2027
Growth momentum & CAGR: Accelerate at a CAGR of 23.5%
Market growth 2023-2027: USD 11,827.99 million
Market structure: Fragmented
YoY growth 2022-2023 (%): 21.73
Regional analysis: North America, Europe, APAC, South America, and Middle East and Africa
Performing market contribution: North America at 38%
Key countries: US, China, Japan, Germany, and UK
Competitive landscape: Leading Companies, Market Positioning of Companies, Competitive Strategies, and Industry Risks
Key companies profiled: Ada Health GmbH, Alphabet Inc., Amazon.com Inc., Atomwise Inc., BenchSci Analytics Inc., CarePredict Inc., Catalia Health, Cyclica, Deep Genomics Inc., Entelai, Exscientia PLC, General Electric Co., Intel Corp., International Business Machines Corp., Koninklijke Philips NV, MaxQ AI, Medtronic Plc, Microsoft Corp., NVIDIA Corp., and Siemens Healthineers AG
Market dynamics: Parent market analysis, Market growth inducers and obstacles, Fast-growing and slow-growing segment analysis, COVID-19 impact and recovery analysis and future consumer dynamics, Market condition analysis for forecast period
Customization purview: If our report has not included the data that you are looking for, you can reach out to our analysts and get segments customized.
Table of Contents
1 Executive Summary
2 Market Landscape
3 Market Sizing
4 Historic Market Size
5 Five Forces Analysis
6 Market Segmentation by Application
7 Market Segmentation by Component
8 Customer Landscape
9 Geographic Landscape
10 Drivers, Challenges, and Trends
11 Vendor Landscape
12 Vendor Analysis
13 Appendix
About Us
Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies, spanning across 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.
Contact: Technavio Research, Jesse Maida, Media & Marketing Executive. US: +1 844 364 1100, UK: +44 203 893 3200. Email: media@technavio.com. Website: http://www.technavio.com
The ‘Godfather of AI’ Has a Hopeful Plan for Keeping Future AI Friendly – WIRED
That sounded to me like he was anthropomorphizing those artificial systems, something scientists constantly tell laypeople and journalists not to do. "Scientists do go out of their way not to do that, because anthropomorphizing most things is silly," Hinton concedes. "But they'll have learned those things from us, they'll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable." When your powerful AI agent is trained on the sum total of human digital knowledge, including lots of online conversations, it might be more silly not to expect it to act human.
But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we don't really encounter the world directly.
"Some people think, hey, there's this ultimate barrier, which is we have subjective experience and [robots] don't, so we truly understand things and they don't," says Hinton. "That's just bullshit. Because in order to predict the next word, you have to understand what the question was. You can't predict the next word without understanding, right? Of course they're trained to predict the next word, but as a result of predicting the next word they understand the world, because that's the only way to do it."
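Hinton is talking about large neural networks, but the bare mechanical task of "predicting the next word" can be sketched with something far simpler. The toy bigram model below just counts which word tends to follow which in a tiny made-up corpus; it illustrates the prediction task itself, not the understanding Hinton argues emerges from it.

```python
from collections import Counter, defaultdict

corpus = "the robot saw the cat and the robot fed the cat".split()

# Count, for each word, which word follows it: a toy bigram "language model".
next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the tiny corpus."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))    # -> "robot" or "cat", whichever the counts favor
print(predict_next("robot"))  # -> "saw" or "fed"
```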
So those things can be sentient? I don't want to believe that Hinton is going all Blake Lemoine on me. And he's not, I think. "Let me continue in my new career as a philosopher," Hinton says, jokingly, as we skip deeper into the weeds. "Let's leave sentience and consciousness out of it. I don't really perceive the world directly. What I think is in the world isn't what's really there. What happens is it comes into my mind, and I really see what's in my mind directly. That's what Descartes thought. And then there's the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?" Hinton goes on to argue that since our own experience is subjective, we can't rule out that machines might have equally valid experiences of their own. "Under that view, it's quite reasonable to say that these things may already have subjective experience," he says.
Now consider the combined possibilities: that machines can truly understand the world, can learn deceit and other bad habits from humans, and that giant AI systems can process zillions of times more information than brains can possibly deal with. Maybe you, like Hinton, now have a more fraught view of future AI outcomes.
But we're not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, just as you find in biology and as some engineers think future computers should operate. It was the last project Hinton worked on at Google. "It works for people," he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can't so easily merge in a Skynet kind of hive intelligence.