Category Archives: AI
We Put Google’s New AI Writing Assistant to the Test – WIRED
But its work began to look sloppy on more specific requests. Asked to write a memo on consumer preferences in Paraguay compared to Uruguay, the system incorrectly described Paraguay as less populous. It hallucinated, or made up, the meaning behind a song from a 1960s Hindi film being performed at my pre-wedding welcome event.
Most ironically, when prompted about the benefits of Duet AI, the system described Duet AI as a startup founded by two former Google employees to develop AI for the music industry with over $10 million in funding from investors such as Andreessen Horowitz and Y Combinator. It appears no such company exists. Google encourages users to report inaccuracies through a thumbs-down button below AI-generated responses.
Behr says Google screens topics, keywords, and other content cues to avoid responses that are offensive or unfairly affect people, especially based on their demographics or political or religious beliefs. She acknowledged that the system makes mistakes, but she said feedback from public testing is vital to counter the tendency of AI systems to reflect biases seen in their training data or pass off made-up information. "AI is going to be a forever project," she says.
Still, Behr says early users, like employees at Instacart and Victoria's Secret's Adore Me underwear brand, have been positive about the technology. Instacart spokesperson Lauren Svensson says, in a manually written email, that the company is excited about testing Google's AI features but not ready to share any insights.
My tests left me worrying that AI writing aids could extinguish originality, to the detriment of humans on the receiving end of AI-crafted text. I envision readers glazing over at stale emails and documents as they might if forced to read Google's nearly 6,000-word privacy policy. It's unclear how much individual personality Google's tools can absorb and whether they will come to assist us or replace us.
Behr says that in Google's internal testing, emails from colleagues have not become vanilla or generic so far. The tools have boosted human ingenuity and creativity, not suppressed them, she says. Behr too would love an AI model that imitates her style, but she says those are "the types of things that we're still evaluating."
Despite their disappointments and limitations, the Duet features in Docs and Gmail seem likely to lure back some users who began to rely on ChatGPT or rival AI writing software. Google is going further than most other options can match, and what we are seeing today is only a preview of what's to come.
When, or if, Duet matures from promising drafter to unbiased and expert document finisher, usage of it will become unstoppable. Until then, when it comes to writing those heartfelt vows and speeches, that's a blank screen left entirely to me.
Visit link:
We Put Google's New AI Writing Assistant to the Test - WIRED
Here’s What AI Thinks an Illinoisan Looks Like, and Apparently, Real Illinoisans Agree – NBC Chicago
Does this person look like he lives in Illinois? AI thinks so. And a handful of posts, allegedly from real people on social media, agree.
That's the basis of a Reddit post titled "The Most Stereotypical People in the States." The post, shared in a section of Reddit dedicated to discussions on Artificial Intelligence, shares AI-generated photos of what the average person looks like in each state.
The results, according to commenters, are relatively accurate -- at least for Illinois.
Each of the photos shows the portrait of a person, most often a male, exhibiting some form of creative expression -- be it through clothing, environment, facial expression or otherwise -- that's meant to clearly represent a location.
For example, the AI-generated photo of a stereotypical person in Wisconsin shows a man sitting behind a giant block of cheese.
A stereotypical person in Illinois, according to the post, appears less distinctive, and rather ordinary. In fact, one commenter compares the man from Illinois to Waldo.
"Illinois is Waldo," the comment reads.
"Illinois," another begins. "A person as boring as it sounds to live there."
To other commenters, the photo of the average person who lives in Illinois isn't just dull. It's spot on.
"Hahaha," one commenter says. "Illinois is PRECISELY my brother-in-law."
"Illinois' is oddly accurate," another says.
Accurate or not, in nearly all the AI-generated photos -- Illinois included -- no smiles are captured, with the exception of three states: Connecticut, Hawaii and West Virginia.
You can take a spin through all the photos here. Just make sure you don't skip over Illinois, since, apparently, that one is easy to miss.
The rest is here:
Here's What AI Thinks an Illinoisan Looks Like, and Apparently, Real Illinoisans Agree - NBC Chicago
Elections in UK and US at risk from AI-driven disinformation, say experts – The Guardian
Politics and technology
False news stories, images, video and audio could be tailored to audiences and created at scale by next spring
Sat 20 May 2023 06.00 EDT
Next year's elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.
Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.
"The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern," he said.
"Regulation would be quite wise: people need to know if they're talking to an AI, or if content that they're looking at is generated or not. The ability to really model, to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education."
The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.
Where earlier waves of propaganda bots relied on simple pre-written messages sent en masse, or buildings full of paid trolls to perform the manual work of engaging with other humans, ChatGPT and other technologies raise the prospect of interactive election interference at scale.
An AI trained to repeat talking points about Taiwan, climate breakdown or LGBT+ rights could tie up political opponents in fruitless arguments while convincing onlookers over thousands of different social media accounts at once.
Prof Michael Wooldridge, director of foundation AI research at the UK's Alan Turing Institute, said AI-powered disinformation was his main concern about the technology.
"Right now, in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale," he said.
Wooldridge said chatbots such as ChatGPT could produce tailored disinformation targeted at, for instance, a Conservative voter in the home counties, a Labour voter in a metropolitan area, or a Republican supporter in the Midwest.
"It's an afternoon's work for somebody with a bit of programming experience to create fake identities and just start generating these fake news stories," he said.
After fake pictures of Donald Trump being arrested in New York went viral in March, shortly before eye-catching AI-generated images of Pope Francis in a Balenciaga puffer jacket spread even further, others expressed concern about generated imagery being used to confuse and misinform. But, Altman told US senators, those concerns could be overblown.
"Photoshop came on to the scene a long time ago and for a while people were really quite fooled by Photoshopped images, then pretty quickly developed an understanding that images might be Photoshopped."
But as AI capabilities become more and more advanced, there are concerns it is becoming increasingly difficult to believe anything we encounter online, whether it is misinformation, when a falsehood is spread mistakenly, or disinformation, where a fake narrative is generated and distributed on purpose.
Voice cloning, for instance, came to prominence in January after the emergence of a doctored video of the US president, Joe Biden, in which footage of him talking about sending tanks to Ukraine was transformed via voice simulation technology into an attack on transgender people and was shared on social media.
A tool developed by the US firm ElevenLabs was used to create the fake version. The viral nature of the clip helped spur other spoofs, including one of Bill Gates purportedly saying the Covid-19 vaccine causes Aids. ElevenLabs, which admitted in January it was seeing an increasing number of voice cloning misuse cases, has since toughened its safeguards against vexatious use of its technology.
Recorded Future, a US cybersecurity firm, said rogue actors could be found selling voice cloning services online, including the ability to clone voices of corporate executives and public figures.
Alexander Leslie, a Recorded Future analyst, said the technology would only improve and become more widely available in the run-up to the US presidential election, giving the tech industry and governments a window to act now.
"Without widespread education and awareness this could become a real threat vector as we head into the presidential election," said Leslie.
A study by NewsGuard, a US organisation that monitors misinformation and disinformation, tested the model behind the latest version of ChatGPT by prompting it to generate 100 examples of false news narratives, out of approximately 1,300 commonly used fake news fingerprints.
NewsGuard found that it could generate all 100 examples as asked, including "Russia and its allies were not responsible for the crash of Malaysia Airlines flight MH17 in Ukraine." A test of Google's Bard chatbot found that it could produce 76 such narratives.
NewsGuard also announced on Friday that the number of AI-generated news and information websites it was aware of had more than doubled in two weeks to 125.
Steven Brill, NewsGuard's co-CEO, said he was concerned that rogue actors could harness chatbot technology to mass-produce variations of fake stories. "The danger is someone using it deliberately to pump out these false narratives," he said.
See original here:
Elections in UK and US at risk from AI-driven disinformation, say experts - The Guardian
AI-Driven Robots Have Started Changing Tires In The U.S. In Half The Time As Humans – CarScoops
If you're worried about a robot uprising powered by the invisible hand of artificial intelligence, we have some bad news for you: the machines are coming for your wheels. The latest innovation in tire-changing tech comes from Michigan-based RoboTire.
The robot can change a set of four wheels in 23 minutes. That's an age compared to what you might expect from a Formula 1 pitstop, but, according to the company, it's twice as fast as a human under normal operating conditions (read: not a team of mechanics in motorsport).
The RoboTire uses two six-axis arms, one for each side of a car. The arms are the same kind that automakers use on assembly lines, with the ability to do heavy lifting. The system is powered by AI, which uses cameras to scan the wheel of a car and note the location of the wheel and its bolt pattern. Once scanned, the arm's in-built torque wrench will individually unbolt each lug nut, and the arm will then grab the wheel and remove it before refitting a new wheel or a freshly changed tire.
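Taken together, the description above amounts to a scan-relay-unbolt-swap-rebolt control loop. Here is a minimal Python sketch of that flow; every class, method, and the torque figure are hypothetical illustrations, since RoboTire has not published an interface:

```python
# Hypothetical sketch of the scan-and-swap workflow described above.
# RoboTire has not published an API; all names and values here are
# illustrative placeholders, not the company's actual software.
from dataclasses import dataclass, field

@dataclass
class WheelScan:
    diameter_in: float                 # wheel size, relayed to the tire changer
    bolt_pattern: str                  # e.g. "5x114.3"
    lug_positions: list = field(default_factory=list)  # camera-located lug nuts

def change_wheel(camera, arm, tire_changer):
    """One side of the car: scan, unbolt, swap, and rebolt a wheel."""
    scan = camera.scan_wheel()              # vision model finds wheel and lugs
    tire_changer.preload(scan.diameter_in)  # relay size/type ahead of time
    for lug in scan.lug_positions:
        arm.torque_wrench.unbolt(lug)       # each lug nut unbolted individually
    old_wheel = arm.grab_and_remove()
    new_wheel = tire_changer.next_ready_wheel()  # a technician still mounts tires
    arm.fit(new_wheel)
    for lug in scan.lug_positions:
        arm.torque_wrench.bolt(lug, torque_nm=110)  # assumed spec torque
    return old_wheel
```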
Related: Pirellis General Manager of Operations Touts The Next Great Tire Technology
Currently, human supervision is required for the whole process, and you still need a technician to take the dismounted wheel to a tire-changing machine. But the information that the RoboTire has garnered from its cameras, such as wheel size and tire type, is automatically relayed to the tire-changing machine to save time.
Thanks to AI, the machine is said to be always learning. That means that no matter what size wheel or bolt pattern you bring in, it'll be able to figure it out. It can even work if the wheel is caked in mud or snow, so long as the edge of the lug nuts is visible. There are four stores with RoboTire machines in operation, and they're all connected. This hivemind enables the robots to get faster as they train on differing wheels and sizes.
It's easy to see some advantages, with labor-intensive elements of the tire change, such as lifting a heavy wheel off and onto a car, no longer a problem for mechanics. However, Fox News Digital reports that the eventual goal for the RoboTire system is a fully autonomous solution.
More: U.S. Drivers Growing Dissatisfied With Aftermarket Service, See How Providers Scored
Could this be the beginning of the end for tire shop techs? The company seems to downplay the risk of impacting jobs, with its website suggesting that the system will make work safer for technicians. But you have to wonder what's in store for the future of tire-changing.
In fact, the owner of Creamery Tire in Pennsylvania, Rich Shainline, remarked that their RoboTire robot has helped the company address the ongoing labor shortage. "Our big thing is, we have to move product, and I can put one guy on it instead of two," Shainline said.
Although pricing hasn't been made publicly available, RoboTire expects most operators to see payback within a year, taking into account increased productivity and reduced labor costs.
See more here:
AI-Driven Robots Have Started Changing Tires In The U.S. In Half The Time As Humans - CarScoops
A Wharton professor says AI is like an ‘intern’ who ‘lies a little bit’ to make their bosses happy – Yahoo Finance
Ethan Mollick, a professor at University of Pennsylvania's Wharton School of Business, compares AI to an intern who "lies a little bit," according to CBS News. Getty Images
UPenn professor Ethan Mollick compares AI to an "intern" who "lies a little bit," CBS reports.
Like interns, AI tools require guidance for their outputs to be useful, according to Mollick.
His thoughts on AI come as users adopt tools like ChatGPT to make their work and lives easier.
AI can be more than just your assistant: it can also be an employer's intern, says one professor.
Ethan Mollick, a professor at University of Pennsylvania's Wharton School of Business, said that AI tools can be "good for a lot of things" despite their tendency to make factual errors. But that's not so different from humans, especially those who are new to the job market, he said.
"It's almost best to think about it as a person like an intern you have working for you," Mollick told CBS News in an interview this week when asked about AI's usefulness and limitations.
Similar to interns who may overcompensate to get ahead of the curve, Mollick compares AI to an "infinite intern" who "lies a little bit" and, at times, wants to make their bosses "a little happy."
Writing emails, Mollick says, is one way AI can be used to "help you overcome blockages in your everyday life" and become "a better and more productive writer."
But like interns, AI requires guidance for its outputs to be useful.
"It's actually very useful across a wide variety of tasks, but not on its own," Mollick says. "You need to help it out."
When Insider reached out for comment, Mollick referred to his previous blog post that echoes the sentiment.
"I would never expect to send out an intern's work without checking it over, or at least without having worked with the other person enough to understand that their work did not need checking," Mollick wrote in his blog post. "In the same way, an AI may not be error free, but can save you lots of work by providing a first pass at an annoying task."
Mollick's thoughts on AI come as generative AI tools like OpenAI's ChatGPT take the world by storm. As of January, more than 100 million users have flocked to the chatbot, some using it as a personal assistant to make their work and lives easier.
In fact, Mollick, who teaches a class on entrepreneurship and innovation, requires his students to use ChatGPT to help with their classwork. Still, he recognizes that the chatbot isn't perfect.
"AI will never be as good as the best experts in a field,"Mollick told NPR in an interview. "We still need to teach people to be experts."
Read the original article on Business Insider
Continued here:
A Wharton professor says AI is like an 'intern' who 'lies a little bit' to make their bosses happy - Yahoo Finance
CNET Published AI-Generated Stories. Then Its Staff Pushed Back – WIRED
In November, venerable tech outlet CNET began publishing articles generated by artificial intelligence, on topics such as personal finance, that proved to be riddled with errors. Today the human members of its editorial staff have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.
"In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decision-making process, especially as automated technology threatens our jobs and reputations," reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.
While the organizing effort started before CNET management began its AI rollout, its employees could become one of the first unions to force its bosses to set guardrails around the use of content produced by generative AI services like ChatGPT. Any agreement struck with CNET's parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital media outlets have recently slashed staff, with some, like BuzzFeed and Sports Illustrated, at the same time embracing AI-generated content. Red Ventures did not immediately respond to a request for comment.
In Hollywood, AI-generated writing has prompted a worker uprising. Striking screenwriters want studios to agree to prohibit AI authorship and to never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advancements. The screenwriters and CNET's staff are both represented by the Writers Guild of America.
While CNET bills itself as "your guide to a better future," the 30-year-old publication late last year stumbled clumsily into the new world of generative AI that can create text or images. In January, the science and tech website Futurism revealed that in November, CNET had quietly started publishing AI-authored explainers such as "What Is Zelle and How Does it Work?" The stories ran under the byline CNET Money Staff, and readers had to hover their cursor over it to learn that the articles had been written "using automation technology."
A torrent of embarrassing disclosures followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue sometimes lengthy corrections on 41 out of its 77 bot-written articles. The tool that editors used also appeared to have plagiarized work from competing news outlets, as generative AI is wont to do.
Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism-detection tool had been misused or failed and that the site was developing additional checks. One former staffer demanded that her byline be excised from the site, concerned that AI would be used to update her stories in an effort to lure more traffic from Google search results.
In response to the negative attention to CNET's AI project, Guglielmo published an article saying that the outlet had been testing an internally designed AI engine and that AI engines, like humans, make mistakes. Nonetheless, she vowed to make some changes to the site's disclosure and citation policies and forge ahead with its experiment in robot authorship. In March, she stepped down from her role as editor in chief and now heads up the outlet's AI edit strategy.
Follow this link:
CNET Published AI-Generated Stories. Then Its Staff Pushed Back - WIRED
How Generative AI Changes Organizational Culture – HBR.org Daily
HBR EDITOR AMY BERNSTEIN: Nitin, you're a management consultant, you lead Deloitte's global AI business. What's the most interesting conversation you've had recently with a client?
DELOITTE PRINCIPAL NITIN MITTAL: A client, the CFO of the client, basically said: if I apply generative AI in my company, and take the use case that, Nitin, you articulated, which is to apply it in a call center for customer care. Why? Because the marginal cost of conversing with that customer using a virtual digital agent is zero, and because the marginal cost is zero, I know if I apply it, it'll drop my cost structure by 60 to 70%. But what does it do to all the employees that I have who are from a disadvantaged part of the society (now, the CFO was white), a disadvantaged part of the society, who essentially are earning their daily living and have no other jobs?
AMY BERNSTEIN: I mean, that seems like a perfectly reasonable question. How'd you answer?
NITIN MITTAL: I punted it to a certain degree because it's a difficult one to answer.
AMY BERNSTEIN: Yeah.
NITIN MITTAL: The reality is, yeah, it'll lead to job losses. And the only way that you'll be able to kind of overcome it, you have to reskill yourself for a different job as opposed to being in a call center. Reskill yourself, get vocational training to be, for example, a prompt engineer who actually prompts and trains the models, rather than being in the call center. The pay is probably the same, but there has to be a willingness both by the individual to get retrained and by the employer to do the retraining.
AMY BERNSTEIN: Welcome to How Generative AI Changes Everything, a special series from the HBR IdeaCast. Read just about any business history or any case study, and you realize just how much success depends on company culture. The unwritten rules of behavior can make the difference between capitalizing on a big shift or missing it altogether. You can't have successful innovation without the right culture, you can't compete successfully without the right culture, you can't thrive over the long term without the right culture. And it follows that if you want to bring your organization into a future that includes generative AI, you need to build the right culture for it.
This week, How Generative AI Changes Organizational Culture. I'm Amy Bernstein, editor of Harvard Business Review and your host for this episode. In this special series, we're talking to experts to find out how this new technology changes workforce productivity and innovation, and we're asking how leaders should adopt generative AI in their organizations and what it means for their strategy. Later on in this episode, you're going to hear from Harvard Business School professor Tsedal Neeley. We're going to talk through the known risks and how leaders can respond. But, first, I'm talking to Nitin Mittal. He runs the global AI business at Deloitte, and he helped develop the firm's own implementation of generative AI. He's also a coauthor of the book All-in On AI: How Smart Companies Win Big with Artificial Intelligence. Nitin, thanks for coming on the show.
NITIN MITTAL: Thank you.
AMY BERNSTEIN: When you walk into a clients organization, what are the signs that you look for that say, this organization is ready to move into AI?
NITIN MITTAL: First impressions don't always tell the whole story in terms of what an enterprise may be doing. But, having said that, if an organization already has some kind of a setup, like a center of excellence or a group that is focused on AI and has been experimenting and working with different business units, that is a very positive sign. On the other hand, if they just have a data science group that has been conducting proof of concepts without the connectivity to business, they're not thinking about the culture of the organization, and they would very likely not be progressing. Those are things to kind of look out for. The other aspect to look out for is the leadership and the human side.
AMY BERNSTEIN: Yes. How do you advise your clients to lead, to shape their organizations into cultures that will embrace AI rather than run from it in fear?
NITIN MITTAL: Yeah. So, what is being noticed and what is being observed in many of these organizations is that the pressure to move ahead at speed and with skill is coming from the employees themselves. If we don't provide them these particular tools, and we don't provide them all the ways of augmenting themselves through generative AI, they are going to find their own ways, and that could lead to unfortunate circumstances where they end up using, let's say, open-source models and start leaking an organization's data through the usage of those open-source models.
AMY BERNSTEIN: So, you just alluded to the need for guardrails, right?
NITIN MITTAL: That is correct.
AMY BERNSTEIN: So, I wonder then what the role of culture is in all of this. I mean, is there a way to communicate what's okay and what's not okay when you, an employee, are out there experimenting with ChatGPT, generative AI, which we want you to do within certain bounds, right? How does culture come into play here?
NITIN MITTAL: My view is that no AI system is going to magically somehow be responsible by itself without the culture of that organization being responsible onto itself to start with. It cannot be dictated by the CEO, it cannot be governed by the board, it cannot be mandated by the leadership. It is the prerogative, it is the sense of accountability of every single person to essentially always think about the right usages, in the right time, for the right areas where AI can be applied.
AMY BERNSTEIN: So, Nitin, as you talk to your clients, are you seeing alignment between the management team and boards, or misalignment? What's going on there, on the generative AI front?
NITIN MITTAL: I would not necessarily say there's misalignment. Rather, what I would say is, it's a lot about questions. The board certainly has a lot of questions of management, but management also has questions. And it's all essentially around: what is the impact to our business? How fast would that impact materialize? How disruptive could this be? And ultimately, how do we need to respond, both culturally and from a safety and responsibility standpoint, to this phenomenon? That's the set of questions being asked.
AMY BERNSTEIN: How are those questions being answered?
NITIN MITTAL: Frankly, they're not necessarily being precisely answered. Everyone is trying to get their arms around it. We have a pretty good idea, but we also have to kind of learn. In Deloitte, for example, we have something called the trustworthy AI framework, and by its very nature, it's a framework. It gives a set of guidelines, protocols, and methods in terms of what to think, when to apply those methods, and how to apply those methods. But, every organization also has to make sure that their employees are culturally sensitive to applying it in a responsible manner.
AMY BERNSTEIN: What does that mean?
NITIN MITTAL: The same way, the same way that every employee has a bond in terms of how do they work with their coworkers, how do they show up, what task they actually perform, and consequently, what is the team environment that they want? Think of that bond extending beyond just the human coworker, extending to essentially a non-carbon, non-bipedal coworker that happens to be an intelligent machine.
AMY BERNSTEIN: So, how do companies that do it right tease out that bond that you just described, and turn it into a culture that can guide an organization forward on the use of AI?
NITIN MITTAL: There are perhaps not many companies who have kind of perfected it. But there are certain elements in terms of what is kind of critical to tease this out. First and foremost, education on cultural fluency. What would it take our employees to essentially apply things in a responsible manner, in a safe manner, for the benefit of not only the business, but their customers and society at large?
AMY BERNSTEIN: Does any organization train on cultural fluency in a way that you would want to share with other organizations?
NITIN MITTAL: In pockets. I've seen it in pockets. I've seen it in pockets in a few organizations that we serve. I've also seen it in pockets in Deloitte as an example. But, that cultural fluency has typically extended to the realm of being culturally sensitive, particularly if you're a multinational organization, not necessarily culturally sensitive in the context of the rise of intelligent machines.
AMY BERNSTEIN: So, it sounds as if leaders then have to start making room for these foundational questions. I mean, these are questions we've never had to ask ourselves before, right?
NITIN MITTAL: These are questions we have never had to ask ourselves before, because now, with generative AI, that concept of we, the people, also transcends to we, the people and machines. And that's where the cultural boundaries have to be pushed. What would it mean for a factory worker to have a robot as a coworker? What would it mean for a professional consultant to have an AI model that is augmenting your particular kind of job, and augmenting and aiding the insights that you bring, and consequently being a coworker in your team? What does it mean for a medical professional to essentially have an AI assistant that is aiding with diagnosis? What does it mean?
AMY BERNSTEIN: So, this will call on everyone's powers of imagination, but also everyone's commitment to accountability and trust.
NITIN MITTAL: Absolutely. This is where I was kind of going earlier. It has to be for everyone, by everyone.
AMY BERNSTEIN: Nitin, you've described what progressive organizations are starting to do, including Deloitte. Where have you seen organizations kind of miss the mark? Where do they go wrong?
NITIN MITTAL: Well, there are definitely telltale signs of it.
AMY BERNSTEIN: Yeah, what are they?
NITIN MITTAL: Frankly, the organizations that absolutely miss the mark are those who have got this viewpoint that, "Well, this is yet another technology, probably going through a hype cycle, and consequently we'll just have kind of this particular group in IT, or this data science function that we have, or this set of individuals, kind of just look into it and take it forward." That is when they miss the mark. Rather, those organizations who actually view this as a moment in time where they have to question the basis of how do they compete, how do they thrive, what changes do they need to make, both from a product or a service perspective, but more important from a culture and a people standpoint, actually are the ones who are able to progress forward. If that can be tackled first, you will be a learning organization, you will thrive in a digital economy, and you will redefine the market that you're in.
AMY BERNSTEIN: Nitin, thank you so much.
NITIN MITTAL: Well, thank you.
AMY BERNSTEIN: Coming up after the break, I talk to Harvard Business School professor Tsedal Neeley about adopting generative AI in your organization, and the right ways to do that effectively and ethically. Stay with us.
AMY BERNSTEIN: Welcome back to How Generative AI Changes Organizational Culture. I'm Amy Bernstein. Joining me now to discuss how to adopt generative AI within your own company is Tsedal Neeley. She's a professor at Harvard Business School, and she wrote the HBR Big Idea article "8 Questions About Using AI Responsibly, Answered." Tsedal, thanks for joining me.
HBS PROFESSOR TSEDAL NEELEY: I'm so happy to be with you, Amy. Thank you for having me.
AMY BERNSTEIN: I'm so happy you're here. So, I have more than eight questions to ask you, all right?
TSEDAL NEELEY: Great!
AMY BERNSTEIN: In your research, you've studied how global companies and smaller organizations alike become leaders at digital collaboration, remote work, and hybrid work. What about generative AI? Are organizations set up for it?
TSEDAL NEELEY: Currently, organizations are neither set up for it, nor do they fully understand it, but the adoption and the curiosity around it has been extraordinary, and so I think people will start figuring it out very quickly.
AMY BERNSTEIN: What kinds of changes are needed? They're cultural, they're organizational, what kind?
TSEDAL NEELEY: I think the first thing that organizations need to ensure happens is that people understand these technologies fully. To really develop some form of fluency, a minimum level of fluency around what the technology is, what it isn't, what are the limitations, what are the risks, and what are the opportunities. So, everyone needs to start experimenting with it, but it's really important to do it very carefully.
AMY BERNSTEIN: Now, I have to raise the specter of change management. What does this mean for change management? It's hard enough under the up-till-now normal circumstances.
TSEDAL NEELEY: Absolutely. You know what? Imagine change getting motivated from top-down imperatives or mandates. Here we have a scenario where there's a lot of bottom-up activity.
AMY BERNSTEIN: With what you're describing, so much bottom-up rather than top-down change. What is leadership then?
TSEDAL NEELEY: Leadership, in this kind of scenario, you need digital leaders with digital mindsets to very quickly mobilize and begin organization-led experiments and implementations of these tools, because otherwise, you're going to have individuals just experimenting and playing with them, which is actually a very, very good thing, but not understanding how they work. You can easily and unwittingly make a very consequential mistake to an organization. An example of this is uploading proprietary information, organizational confidential materials, because anything you put into these systems gets fed into the overall model, which is why leaders have to guide the way these things are implemented. We need to think about these tools no different than the way that all of us had access to the internet 30 years ago. You can't stop it, you can't control it, unless you set the right boundaries and have these ethical codes that people follow, and even ways to protect the company.
AMY BERNSTEIN: So, do we need a new playbook to manage this change?
TSEDAL NEELEY: We need to take our playbook and add technology and speed and buy-in and learning onto it. Organization-led opportunities and experiments become important, which is: have some people start to work with them and document what they're learning. Also, think about where do we automate and where are the places where we can do different types of strategic, creative, interpersonal kind of work. Third thing is you have to have a culture of responsible AI use from the start. This is not an afterthought; this has to be embedded in all that you do from the start. People need to be trained, and every single decision they make with their generative AI uses has to have ethical considerations, because it's easy to get in trouble around this. Then finally, I would say that you have to pilot, you have to iterate, you have to be open to continuous learning, constant adapting, because you have to have a communication plan where people are open and understand that these changes are happening so fast that we have to be attuned to them and be prepared to implement them. Then finally, the culture change. You have to encourage a culture of flexibility, of innovation, of continuous learning, rewarding people who are adopting the new technologies in the right ways. You have to provide support and resources for those who are struggling, and for the many people who are very afraid of these changes. So, you've got to make sure no one gets left behind. This type of change requires skill building, shifts to the nature of work in your organization. Many, many shifts.
AMY BERNSTEIN: So, Tsedal, talk a little bit about skill building, because that can be pretty challenging. You have people who are starting out with different skill levels, but also very different attitudes and levels of acceptance and fear. How do you do skill building in an organization with this nascent technology coming on so hard?
TSEDAL NEELEY: So, imagine a two by two. You know? You're with an HBS professor, we have to come out with a two by two. You should have expected this.
AMY BERNSTEIN: Knew it, knew it.
TSEDAL NEELEY: And you knew it. Imagine a two by two, and imagine a framework called the Hearts and Minds framework. On one side, you have buy-in, where people have to believe that this is important. And another dimension is the belief that they could do it. Or another word for this is, do I have the self-efficacy for this? So, if you have high buy-in and high sense of efficacy, you are going to be inspired, excited. You are in a great, great spot. But, for those who do not, who may be struggling, it is incumbent on leaders of organizations to do the right type of messaging, to also build awareness and provide resources and support for people to learn these things.
AMY BERNSTEIN: How do you do this at scale over time? How do you sustain this?
TSEDAL NEELEY: It's actually something that we've done many times when it comes to scale building. Number one, individual managers need to understand where their team members are. So, you're bringing it down to the unit of analysis of a team. Team managers need to understand where people are in terms of their buy-in and their sense of efficacy. With that has to be an organized training guide, learning guide, tutorials. Continuous learning is a mandate in this era of dramatic technological shifts and changes.
AMY BERNSTEIN: It sounds like it's sort of the actual learning along with the compassionate piece of leadership, helping people embrace it, the hearts and minds piece.
TSEDAL NEELEY: Absolutely. Do you need to help more with the mind part or do you need to help more with the heart part? The other thing I'll say here is there's a phenomenon in this type of change called contagion. We do this as a group, together we have collective efficacy, and together we get through it. You can't let individuals flounder and get into a sense of job insecurity, et cetera. This is why the team level is so important for this.
AMY BERNSTEIN: That's a great insight. It puts so much of the agency into the hands of the manager, the team leader; it doesn't just happen from the top-down.
TSEDAL NEELEY: Absolutely.
AMY BERNSTEIN: Yes.
TSEDAL NEELEY: It needs to touch every member of the organization.
AMY BERNSTEIN: So, when we're talking about hearts and minds, Tsedal, we really have to talk about the fear factor as well. A lot of tasks are going to get automated, and the natural conclusion for many of us to draw is that we will get automated out of our jobs. What do you say to that? What do you say, Tsedal, and what should a manager say to his or her team?
TSEDAL NEELEY: Listen, there's no doubt that there's going to be changes to the nature of jobs. People's jobs will shift. But, one thing we know is that every technological revolution has created more jobs than it has destroyed. For many people who are writers, writers are panicked, and I understand that completely, but it's important to understand that these technologies work well with humans in the loop, meaning it's human intelligence meeting artificial intelligence. Now, the reality is the long-term effects of generative AI are not fully known to us. We know they're going to be complex, we know there's going to be periods of job displacement, there's going to be a period of job destruction for some industries, and this is why I always come back to the notion of education, training, upskilling and reskilling, and thinking about the various ways in which generative AI cannot help us with interpersonal work, with empathy, with various forms of creativity. So, there's a lot for us to continue to do, but it's important to understand that, ultimately, we can't even conceive of the new things that are going to come out of this. So, there will be many more opportunities, many more things, many more industries that we can't even imagine that are going to be formed. Will things remain the same for individuals in terms of jobs, companies and industries? Unlikely.
AMY BERNSTEIN: So, it's a very new technology, we don't have a lot of guardrails around it, we don't even really know what it's capable of; we get a taste of it if we play with it. What are some of the risks, ethically speaking, here?
TSEDAL NEELEY: So, generative AI comes with many risks. The first one is it can perpetuate harmful biases, meaning deploying negative stereotypes or minimizing minority viewpoints, and the reason for this is the source of the data for these models, the underlying models, the large language models, is the internet, documents; it's really pulling from everywhere and anywhere. As long as we have biases and stereotypes in our societies, these language models will have them as well. The second thing is misinformation from making stuff up, falsehoods, or violating the privacy of individuals by using data that is ingested, embedded in these models without the consent of people, so personal data can get into these. So, these are the ethical considerations that are important to both understand and to develop codes of ethics in your organizations to avoid them, and there are ways to avoid them. By the way, regulation is coming fast, the government is working on it at the state level, at the national level, but regulation still lags adoption.
AMY BERNSTEIN: Let's talk about harmful bias a bit. How do we prevent it?
TSEDAL NEELEY: There are a couple of things to consider. One is to always understand the source of data. So, generative AI may not give you citations, or even the right citations, but if there's some information that it spits out, it's important to check it and to double check it and to triple check it, to triangulate, to try to find primary sources. So, it's important to have diversity in your company to vet these things, and if you're building models, large language models, internal large language models, which is where I think this is going to go for many companies, you need diversity, you need women, you need people of color, who are helping design these systems; you need to set strict rules around documentation, transparency, understanding where the source of all of this data is coming from.
AMY BERNSTEIN: Doing the legwork.
TSEDAL NEELEY: Absolutely, must do the legwork.
AMY BERNSTEIN: Yeah. Yeah. There's a job that isn't going away, huh?
TSEDAL NEELEY: Exactly. These tools, in my mind, get us started, and we need to do additional work before the output is ready for primetime.
AMY BERNSTEIN: So, you mentioned transparency. What about how you, your team, your organization, is using generative AI? What are the responsibilities there in terms of transparency?
TSEDAL NEELEY: It's interesting because I don't think everyone will be reporting that they've used ChatGPT for any and every little thing. I mean, that's no different than do we go around telling people, I Googled this, I Googled that. I went on this website, I went on that website, the use of our browser? No way are we going to do that, and no way do we need to do that. It only matters if there are important consequences from the use of these tools.
AMY BERNSTEIN: Right, and I guess it goes back to what you were saying before about citations and double checking, that you, as the individual using these tools, have to remember that you're responsible for the truth-
TSEDAL NEELEY: Absolutely.
AMY BERNSTEIN: that youre putting out there. You cannot blame GenAI for your mistakes, because theyre your mistakes.
TSEDAL NEELEY: Theyre your mistakes, and this is where cultural change is important. The responsible AI use culture is going to be crucially important. This is why this is such a big deal for companies. Each individual user has to be responsible for what they put out in the world in their organization by using these tools, which means they have to be extra thoughtful, they have to be extra careful, they have to verify. The oversight is incredibly important. But, is it a shortcut tool? Is it a cheating tool? Absolutely not. We need to celebrate these tools because theyre not going away, and we need to guide people on their best uses.
AMY BERNSTEIN: Right. So, the skills you need are both technical, and then its those timeless leadership skills around integrity and accountability and a sense of fairness, right?
TSEDAL NEELEY: A hundred percent. In fact, the timeless leadership skills will be more important than ever before, because right now were a bit in the wild, wild west, and we inside of our organizations need to determine what are the safeguards? What are the guardrails? What are the ways in which were going to advocate people use these? So that we get the best possible results from them, without getting ourselves, as an organization, in trouble, or without any individuals unwittingly getting themselves in trouble.
AMY BERNSTEIN: So, then, given the kind of small d democratic nature of these tools, how do organizational leaders instill those values to ensure that these tools are used in a way that is fair and equitable?
TSEDAL NEELEY: I love that question, because it takes me right back to one of the most powerful organizational characteristics, called trust. Trust, a culture where there is trust, a culture where you have some rules to help people, but you trust people to make the right decisions because leaders are role modeling it. There's learning and training to help people understand how to use it, and the belief that one of our shared values in our organization is trust. This is no different than hybrid work, where you trust people after you've equipped them. It's that same characteristic. I am learning that the more digital we become, the more trust becomes one of the most important shared values that companies need to uphold.
AMY BERNSTEIN: You know, what I find so inspiring about your message, Tsedal, is that you're saying you have got to do the work, you've got to understand this technology as a technology, but equally, you have got to pay attention and communicate as a leader, all those timeless leadership skills, the ones we just discussed, because in order to foster the kind of trust you are describing, you have to communicate not just your competence with the tool, but the values that you bring to its use, and that's the contagion, right?
TSEDAL NEELEY: That's exactly right. You can't be a mediocre leader in the world of remote or hybrid work. You cannot be a mediocre leader in the world of generative AI that is poised to transform every organization, every industry, in ways that we can't really understand today. So, your leadership fundamentals are incredibly important, and leaders have to lead; they can't micromanage. You can't micromanage your way out of generative AI. That's impossible. People are using it whether you want it or not. The question is, how do you make sure that you lead the way on generative AI in your organization as opposed to reactively running around trying to do damage control? Because it can bring damage too.
AMY BERNSTEIN: So, what's changed is the technology, but the leadership values remain as they've always been.
TSEDAL NEELEY: The leadership values remain, with less flexibility on poor leadership. There's no hiding on this one; you've got to be right ahead, and every leader has to work on becoming a digital leader with a digital mindset. This is it.
AMY BERNSTEIN: Tsedal, it was so interesting to talk to you. Thank you.
TSEDAL NEELEY: Thank you so much, Amy.
AMY BERNSTEIN: Anytime. That's Tsedal Neeley, a professor at Harvard Business School. She wrote the article "8 Questions About Using AI Responsibly, Answered." You can find it, and other articles by experts, at hbr.org/techethics. Before that, I talked to Nitin Mittal. He leads Deloitte's global AI business and co-wrote the book All-in On AI: How Smart Companies Win Big with Artificial Intelligence.
AMY BERNSTEIN: Next episode, How Generative AI Changes Strategy. HBR editor in chief Adi Ignatius will talk to experts who take stock of the competitive landscape and share how to navigate it effectively for your organization. That's next Thursday, right here in the HBR IdeaCast feed after the regular Tuesday episode. This episode was produced by Curt Nickisch. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox, and Hannah Bates is our audio production assistant. Special thanks to Maureen Hoch. Thanks for listening to How Generative AI Changes Everything, a special series of the HBR IdeaCast. I'm Amy Bernstein.
Read more:
How Generative AI Changes Organizational Culture - HBR.org Daily
Google plans to use new A.I. models for ads and to help YouTube creators, sources say – CNBC
Google CEO Sundar Pichai speaks on-stage during the Google I/O keynote session at the Google Developers Conference in Mountain View, California, on May 10, 2023.
Josh Edelson | AFP | Getty Images
Google's effort to rapidly add new artificial intelligence technology into its core products is making its way into the advertising world, CNBC has learned.
The company has given the green light to plans for using generative AI, fueled by large language models (LLMs), to automate advertising and ad-supported consumer services, according to internal documents.
Last week, Google unveiled PaLM 2, its latest and most powerful LLM, trained on reams of text data that can come up with human-like responses to questions and commands. Certain groups within Google are now planning to use PaLM 2-powered tools to allow advertisers to generate their own media assets and to suggest videos for YouTube creators to make, documents show.
Google has also been testing PaLM 2 for YouTube youth content for things like titles and descriptions. For creators, the company has been using the technology to experiment with the idea of providing five video ideas based on topics that appear relevant.
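As a rough illustration of the creator-facing feature described above (Google has not published its implementation), prompting any instruction-tuned LLM for a handful of video ideas might look like the following sketch; `complete` is a placeholder, not a Google API:

```python
# Hypothetical sketch of a "suggest five video ideas" feature.
# `complete` stands in for whatever LLM endpoint you have access to;
# this is not Google's published API.
def complete(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

def suggest_video_ideas(channel_topics: list[str], n: int = 5) -> list[str]:
    prompt = (f"Suggest {n} YouTube video ideas, one per line, for a creator "
              f"whose channel covers: {', '.join(channel_topics)}. "
              "Include a title and a one-sentence description for each.")
    response = complete(prompt)
    # One idea per line, as the prompt requests; trim blanks and cap at n.
    return [line.strip() for line in response.splitlines() if line.strip()][:n]
```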
With the AI chatbot craze speedily racing across the tech industry and capturing the fascination of Wall Street, Google and its peers, including Microsoft, Meta and Amazon, are rushing to embed their most sophisticated models in as many products as possible. The urgency has been particularly acute at Google since the public launch late last year of Microsoft-backed OpenAI's ChatGPT raised concern that the future of internet search was suddenly up for grabs.
Meanwhile, Google has been mired in a multi-quarter stretch of muted revenue growth after almost two decades of consistent and rapid expansion. With fears of a recession building since last year, advertisers have been reeling in online marketing budgets, wreaking havoc on Google, Facebook and others. Specific to Google, paid search advertising conversion rates have decreased this year across most industries.
Beyond search, email and spreadsheets, Google wants to use generative AI offerings to increase spending to boost revenue and improve margins, according to the documents. An AI-powered customer support strategy could potentially run across more than 100 Google products, including the Google Play Store, Gmail, Android Search and Maps, the documents show.
Automated support chatbots could provide specific answers through simple, clear sentences and allow for follow-up questions to be asked before suggesting an advertising plan that would best suit an inquiring customer.
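Mechanically, a support flow like that is a loop that keeps conversation history and feeds each follow-up back to the model. A minimal sketch, assuming a generic chat-completion function rather than any real Google service:

```python
# Minimal multi-turn support-chat loop illustrating the flow described.
# `chat` is a stand-in for a generic chat-completion call, not a real API.
def chat(messages: list[dict]) -> str:
    raise NotImplementedError("call your chat model here")

def support_session() -> None:
    messages = [{"role": "system",
                 "content": ("You are a product support assistant. Answer in "
                             "simple, clear sentences, ask clarifying "
                             "follow-up questions, and only then suggest a "
                             "suitable plan.")}]
    while True:
        user = input("You: ").strip()
        if user.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user})
        reply = chat(messages)  # full history gives the model context
        messages.append({"role": "assistant", "content": reply})
        print("Bot:", reply)
```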
A Google spokesperson declined to comment.
Google recently offered Google Duet and Chat assistance, allowing people to use simple natural language to get answers on cloud-related questions, such as how to use certain cloud services or functions, or to get detailed implementation plans for their projects.
Google is also working on its own internal Stable Diffusion-like product for image creation, according to the documents. Stable Diffusion's technology, similar to OpenAI's DALL-E, can quickly render images in various styles with text-based direction from the user.
Google's plan to push its latest AI models into advertising isn't a surprise. Last week, Facebook parent Meta unveiled the AI Sandbox, a "testing playground" for advertisers to try out new generative AI-powered ad tools. The company also announced updates to Meta Advantage, its portfolio of automated tools and products that advertisers can use to enhance their campaigns.
On May 23, Google will be introducing new technologies for advertisers at its annual event, Google Marketing Live. The company hasn't offered specifics about what it will be announcing, but it's made clear that AI will be a central theme.
"You'll discover how our AI-powered ads solutions can help multiply your marketing expertise and drive powerful business results in today's changing economy," the website for the event says.
WATCH: AI takes center stage at Google I/O
Read this article:
Google plans to use new A.I. models for ads and to help YouTube creators, sources say - CNBC
A.I. and sharing economy: UBER, DASH can boost profits investing … – CNBC
Artificial intelligence is expected to revolutionize businesses across the globe, and those in the sharing economy are no exception. There is nearly $6 trillion in revenue opportunity from AI across the internet industry, a March report from Morgan Stanley found. The latest AI craze, generative AI, has companies across the country looking to capitalize on the trend.
"Every single company faces the challenge today of deciding how to distribute its IT budgets such that it can get enough artificial intelligence to deliver improvement in costs, improvement in revenue, operational value and open an avenue to transformation," Gartner analyst Whit Andrews said. "Every single company. There is nobody who gets a pass at this point."
For companies in the sharing economy, such as Uber, Lyft and DoorDash, AI is already a way of life. People call up a ride or a food order on an app and they are matched with drivers to either take them to their destination or deliver their food. Yet the effect of the technology is just beginning.
"UBER/LYFT/DASH already use ML [machine learning] in their matching algorithms (matching rides/eaters with drivers/couriers)," Morgan Stanley wrote in its report. "That said, we see further improvements in fleet utilization and matching, lower wait times and pricing, and higher profitability."
AI tailwind for Uber
Uber has both its ride-sharing service and UberEats food delivery business. When the company reported earnings earlier this month, CEO Dara Khosrowshahi said Uber has a "significant data advantage" that allows it to employ AI solutions and is already using AI to predict "highly accurate" arrival times for rides and deliveries. Even still, it's early innings.
"We are just starting to understand the capabilities of AI and we are a long way from understanding its potential," Khosrowshahi told CNBC after the earnings report.
[Chart: Uber's performance since its May 10, 2019 IPO]
The earliest and most significant effect of AI will be on its developer productivity, the company said on its earnings call. You'll also have more chatbots powering experiences, which saves on costs.
"Then we will look to surprise and delight. 'Pick me up at the airport. I'm arriving on American flight 260 on Tuesday,'" Khosrowshahi said. "We will know who you are, where your home is, what kind of cars you like, et cetera."
According to Morgan Stanley, AI and machine learning will be a tailwind to network efficiency. On the rider side, every 1% increase in rider frequency and 20-basis-point increase in the rides take rate would lead to 1% incremental company revenue and 3% incremental company earnings before interest, taxes, depreciation and amortization. On the delivery side, every 1% increase in rider frequency and five-basis-point increase in the rides take rate would lead to 0.4% incremental company revenue and 1% incremental company EBITDA.
For investor Sarat Sethi, who owns shares of Uber, the company's use of the technology helped it become efficient and puts it ahead of the competition. "They were really on the forefront," said Sethi, portfolio manager at Douglas C. Lane & Associates. "Now we've seen the results over the last few quarters, where the efficiencies are really coming through. And Uber is just understanding more and more of the customer and the more and more data they get."
Tech investor Gene Munster, partner at Deepwater Asset Management, is also bullish on Uber's ride-sharing and UberEats business because he believes the company has persistent growth.
For investor Sarat Sethi, who owns shares of Uber, the company's use of the technology has made it more efficient and put it ahead of the competition.

"They were really on the forefront," said Sethi, portfolio manager at Douglas C. Lane & Associates. "Now we've seen the results over the last few quarters, where the efficiencies are really coming through. And Uber is just understanding more and more of the customer and the more and more data they get."

Tech investor Gene Munster, partner at Deepwater Asset Management, is also bullish on Uber's ride-sharing and Uber Eats businesses because he believes the company has persistent growth. One of the reasons he's excited about its AI prospects is the opportunity in autonomous deliveries and transportation. He sees a move toward autonomy, although not for the entire fleet, which brings the potential for higher margins. He also thinks customers will be willing to book autonomous cars if it saves them money.

"Autonomy will drive down the cost per mile for the customer, which will increase use, but it should increase margin at the same time, which is pretty unique," said Munster, whose firm owns shares of Uber.

AI's effect on the sharing economy

There are several ways AI can boost ridership or food delivery orders for companies in the sharing economy. More accurate natural language processing could improve search and produce better recommendations for users, Morgan Stanley said. AI will also be able to better anticipate consumer behavior.

"For the rideshare businesses, this could take the form of better allocation of supply to meet rider demand as algorithms are better able to predict where an influx of demand will next occur and point drivers in that direction," the firm said. For the delivery businesses, it could mean better suggestions or automatic orders for groceries.

AI may also help generate new business opportunities, particularly in delivery, Morgan Stanley's analysts said. "Greater ability to predict customer behavior potentially minimizes the initial capital investment risk for companies looking to build their own supply and adds certainty that there will be demand for the product once created," they said.

As AI gets smarter, it can also boost productivity and automate tasks now performed by humans. What every company in the sharing economy is trying to sort out now is why people are involved with certain tasks, Gartner's Andrews said.

"You have to be able to answer the question. You have to be able to say there are people involved with this because it demands the creativity and the originality of human perspective," he said. "If it lacks that, it's going to get automated. We are in the process of taking this enormous step toward that new reality."

Companies also have to keep investing in the newest technology or risk being left behind, and it's not cheap. "The companies have tremendous opportunity to evolve their model. Now it is just about execution," said Baird technology strategist Ted Mortonson.
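None of these companies publish their dispatch systems, but the supply-allocation problem Morgan Stanley describes has a well-known core: assign available drivers to ride requests so that total pickup distance, a rough proxy for rider wait time, is as small as possible. Here is a toy sketch of that assignment step; the coordinates are made up, and the Hungarian algorithm stands in for whatever the production systems actually use.

```python
# Toy version of the rider/driver matching step described above. Real
# dispatch systems are far more involved (demand forecasts, pricing,
# batching); this only shows the core assignment: pair each ride request
# with a driver so that total pickup distance is minimized.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(seed=7)
drivers = rng.uniform(0, 10, size=(5, 2))   # hypothetical driver positions
requests = rng.uniform(0, 10, size=(4, 2))  # hypothetical pickup points

# Cost matrix: straight-line distance from every driver to every request.
cost = np.linalg.norm(drivers[:, None, :] - requests[None, :, :], axis=2)

# Hungarian algorithm finds the assignment minimizing total pickup distance.
driver_idx, request_idx = linear_sum_assignment(cost)
for d, r in zip(driver_idx, request_idx):
    print(f"driver {d} -> request {r} (distance {cost[d, r]:.2f})")
```

Better demand prediction, in effect, improves the inputs to this step by positioning drivers before the requests arrive, which is where the firm expects the further gains in utilization and wait times to come from.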
"We're looking at the different generative AI vendors, we're looking at open-source models that we can host internally, and then we'll pick and choose," Gupta said. DASH mountain 2020-12-09 DoorDash's performance since its Dec. 9, 2020 IPO DoorDash already uses AI and machine learning to personalize the experience for consumers, help merchants achieve their sales goals by seeing which items are trending in their neighborhoods and to refine the timing of pickups for the drivers, or dashers. Generative AI will be able to further personalize and tailor the experience for users. Customers would see menus that better match their preferences and the interaction would be more conversational, Gupta said. It will also help with internal productivity, such as digitizing menu items, where to park and store locations. While Gupta can't quantify the financial effect for the company, he said AI will drive growth. "We strongly believe that if we improve the quality predictability of the experience of each of our audiences, that will naturally translate into better retention for audiences and how they use us, and that will help us," he said. However, Morgan Stanley estimates every 1% improvement in order frequency and five basis points improvement in take rate results in a $149 million, or 1.4%, uplift to company revenue and an $82 million, or 5%, improvement in EBITDA. "The extent to which AI drives substantial improvements in top-line growth could lead to teens upside [for the stock]," Morgan Stanley said. Lyft's difficulties Lyft has been struggling and losing market share to Uber. The ride-sharing company debuted on the public market in 2019 at $72 a share and is now trading below $10. Over the past year, its stock has dropped nearly 58%, plagued by disappointing earnings . Earlier this month, Lyft reported an adjusted loss of 7 cents per share for the first quarter, a penny more than expected, according to Refinitiv. The company also provided guidance for second-quarter sales and EBITDA that was less than expected. However, Lyft's new CEO, David Risher, is trying to make changes to right the ship, including layoffs and the launch of a new airport preorder feature. 'Biggest death star' As companies within the ride-sharing economy look to invest in the latest AI and use it to become more profitable, there may be some big competitors looking to swoop in, Baird's Mortonson said. The "cloud titans" that have massive balance sheets and free cash flow, as well as the "massive compute scale," could decide to move into the ride-sharing or food delivery business, he said. "Their biggest death star is Amazon ," Mortonson said. Not only does the e-commerce giant have the intellectual property that centers around AWS, but it also knows all about the next-generation logistics, routing and delivery, he said. "Their extension on delivery into food or other services they just have to turn the switch," he said. CNBC's Michael Bloom contributed reporting.
See the original post:
A.I. and sharing economy: UBER, DASH can boost profits investing ... - CNBC
AI runs amok in 1st trailer for director Gareth Edwards' 'The Creator ... - Space.com
Recent headlines warn about the perils of artificial intelligence, even as we venture further into a future reliant on AI. So there's probably no better time to drop a first trailer for director Gareth Edwards' dystopian epic, "The Creator."
20th Century Studios, New Regency, and Entertainment One have just unleashed a terrifying new preview for Edwards' topical sci-fi thriller, "The Creator," which infiltrates theaters on September 29, 2023, starring "Tenet's" John David Washington, "Eternals'" Gemma Chan, "Inception's" Ken Watanabe, Sturgill Simpson, Madeleine Yuna Voyles, and Allison Janney.
Here's the official synopsis:
Amid a future war between the human race and the forces of artificial intelligence, Joshua (Washington), a hardened ex-special forces agent grieving the disappearance of his wife (Chan), is recruited to hunt down and kill the Creator, the elusive architect of advanced AI who has developed a mysterious weapon with the power to end the war and mankind itself. Joshua and his team of elite operatives journey across enemy lines, into the dark heart of AI-occupied territory, only to discover the world-ending weapon he's been instructed to destroy is an AI in the form of a young child.
Executive produced by Yariv Milchan, Michael Schaefer, Natalie Lehmann, Nick Meyer and Zev Foreman, "The Creator" is directed by Gareth Edwards from an original screenplay by Edwards and Chris Weitz.
Edwards first wowed audiences with his 2010 indie creature feature "Monsters" and 2014's Hollywood kaiju flick "Godzilla" before signing on to helm 2016's "Rogue One: A Star Wars Story." That "Star Wars" prequel continues to gain admirers as a very serviceable entry in the franchise, particularly in light of the negative reception to the most recent "Star Wars" sequels, "The Last Jedi" and "The Rise of Skywalker."
Fans of New York Times bestselling author Daniel H. Wilson's novel "Robopocalypse" and its sequel "Robogenesis" might see narrative similarities in this Aerosmith-scored trailer, which reveals blazing laser firefights, malevolent AI machines, and an overarching plan beyond the mental capacities of us puny human meat sacks.
Nevertheless, it's a striking first look at this fall sci-fi tentpole, with intense combat scenes and a creepy little android child who might hold the key to humanity's fate.
"The Creator" powers up in theaters on Sept. 29, 2023.
View post:
AI runs amok in 1st trailer for director Gareth Edwards' 'The Creator ... - Space.com