Category Archives: AI

The Dow drops 45 points as AI and tech stocks fall – Quartz

Stocks started the day higher but fell as investors grew more cautious through Wednesday. The biggest jolts came from the technology and AI sectors, as major stocks fell.

The Dow Jones Industrial Average lost 45 points, or about 0.1%, to end the day at 37,753. The S&P 500 fell 0.7% and the tech-heavy Nasdaq dropped 1.1%.

The crypto market also felt the turbulence as the big Bitcoin halving event approaches. Bitcoin dipped below $60,000, and Ether couldn't cross the $3,000 mark by the end of the day.

The biggest fall was seen in the tech sector, with chip stocks dropping after Dutch company ASML missed first-quarter earnings expectations. ASML stock closed down 7.1%. But the pain spread.

Amazon and Meta both lost 1.1%, and Tesla continued its long slide to close down about 1%.

AI chipmaker Nvidia fared worse, losing 3.9%. Micron Technology stock dropped 4.5%. Super Micro Computer, which soared Tuesday, fell back about 1.7% Wednesday.

Three energy stocks made the top performers list. Consolidated Edison, an energy delivery company, rose 3.3%. NextEra Energy, an electric power and energy infrastructure company, gained 3.4%. And First Solar, a solar technology company, added 2.9%.

United Airlines stock soared 17.5% after it blamed Boeing for a $200 million loss. But its earnings still beat expectations, as the airline reported strong demand and a rebound in business travel.

Other airline stocks moved up with United. American Airlines stock rose 6.6%, Delta Air Lines added 2.9%, and Southwest Airlines stock gained 2.6%.

The surge came a day after the Biden administration announced it would enforce consumer protection laws for airline travelers with the help of officials in 15 states.

Trump Media & Technology Group ended a weeks-long spell of losses Wednesday when its stock spiked.

Shares in Trump Media, the company behind former President Donald Trump's Truth Social, shot up almost 22% in afternoon trading on Wednesday. Its shares traded at a high of $27.77, bringing its market cap back up to $3.8 billion and erasing some of the losses accrued in recent weeks, before dropping off to close up 15.6%, at $26.40 per share.

Trump Media went public on the Nasdaq under the ticker DJT on March 26, after completing its merger with Digital World Acquisition Corp., a special purpose acquisition company, or SPAC. From its high trading price of $79.38 in its first week, its market cap has fallen more than $4 billion, with the stock price hitting an all-time low of $22.55 per share this week.

Bitcoin got jittery before the big halving event scheduled for Friday. The top cryptocurrency fell to $59,900 on Wednesday for the first time since early March, almost 17% below its all-time high.

Many of the other major cryptocurrencies also dropped Wednesday, including the second-largest, Ether, which fell below $3,000, according to crypto tracking website CoinMarketCap.

Rocio Fabbro and Francisco Velasquez contributed to this article.

View original post here:

The Dow drops 45 points as AI and tech stocks fall - Quartz

Logitech wants you to press its new AI button – The Verge

The Logi AI Prompt Builder doesn't just present you with a chatbot; it gives you preset recipes to help you prompt it, too. After I assigned an AI button to a Logitech mouse, I could ask it to "Rephrase" paragraphs of text, turn them into bullet points, make them shorter and more concise, or fit a specific word count. Another recipe helped me summarize press releases. And since I pay for ChatGPT Plus, I customized another recipe to generate an image.

Prompt Builder seems like it could be useful. But I had to get a new Logitech mouse to use it: my Logitech M557, which I bought in 2022 but which has been around since 2014, was deemed too old and did not support the required Options Plus software. Also, it strangely launched only when I wasn't in either of my two browser windows. (I found myself using a ChatGPT tab in my web browser instead, since that way I wouldn't have to click out of my browser.)

Logitech will also sell at least one mouse with a dedicated AI button you won't need to map to its prompt builder: an AI edition of its M750 mouse with a teal-colored key that instantly launches it. It's only available in the US and UK, for $49.99 or £54.99, respectively, through Logitech's online store.

You don't need the special-edition AI mouse, but you do need a Logitech device, as the prompt builder is part of the company's bundled Logi Options Plus software.

For now, Logi AI Prompt Builder works only with ChatGPT and understands only English at launch. Logitech did say it's working on linking it to other chatbots.

At the end of the day, this seems like a way for Logitech to sell more Logitech peripherals, and it likely won't be the only company with such an idea. When the time comes that all PCs have dueling AI buttons, which one will you push to ask your chatbots a question?

Update April 17th, 2024, 12:20 PM ET: Clarified why the M557 mouse could not launch the AI Prompt Builder.

Continue reading here:

Logitech wants you to press its new AI button - The Verge

Is AI good or bad? A deeper look at its potential and pitfalls – Mashable

We don't know how we feel about AI.

Since ChatGPT was released in 2022, the generative AI frenzy has stoked simultaneous fear and hype, leaving the public even more unsure of what to believe.

According to Edelman's annual trust barometer report, Americans have become less trusting of tech year over year. A large majority of Americans want transparency and guardrails around the use of AI, but not everyone has even used the tools. People under 40 and college-educated Americans are more aware of generative AI and more likely to use it, according to a June national poll from BlueLabs reported by Axios. Of course, optimism also falls along political lines: the BlueLabs poll found one in three Republicans believe AI is negatively impacting daily life, compared to one in five Democrats. An Ipsos poll from April came to similar conclusions.

Whether you trust it or not, there is not much debate as to whether AI has the potential to be a powerful tool. Russian President Vladimir Putin told students on their first day of school in 2017 that whoever leads the AI race would become the "ruler of the world." Elon Musk quote-tweeted a Verge article that included Putin's quote, adding that "competition for AI superiority at national level most likely cause of WW3 imo." That was six years ago.

These discussions all drive one imperative question: Is AI good or bad?

It's an important question, but the answer is more complicated than "yes" or "no." There are ways generative AI is used that are promising, could increase efficiency, and could solve some of society's woes. But there are also ways generative AI can be used that are dark, even sinister, and have the potential to increase the wealth gap, destroy jobs, and spread misinformation.

Ultimately, whether AI is good or bad depends on how it's used and by whom.

The big positive for AI that Big Tech promises is efficiency. AI can automate repetitive tasks in fields like data entry and processing, customer service, inventory management, data analysis, social media management, financial analysis, language translation, content generation, personal assistants, virtual learning, email sorting and filtering, and supply chain optimization, making tedious tasks a bit easier for workers.

You can use AI to make a workout plan or help create a travel itinerary. Some professors use it to clean up their work. For instance, Gloria Washington, an Assistant Professor at Howard University and a member of the Institute of Electrical and Electronics Engineers, uses ChatGPT as a tool to make her life easier where she can. She told Mashable that she uses ChatGPT for two main reasons: to find information quickly and to work differently as an educator.

"If I am writing an email and I want to appear as if I really know what I'm talking about I'll run it through ChatGPT to give me some quick little hints and tips on how to improve the way that I say the information in the email or the communication in general," Washington said. "Or if I'm giving a speech, [I'll ask ChatGPT for help with] something really quick that I can easily incorporate into my talking points."

As an educator, it's revolutionizing how she approaches giving homework assignments. She also encourages students to use ChatGPT to help with emails and coding languages. But it's still a relatively new technology, and you can tell. While 80 percent of teachers said they received "formal training about generative AI use policies and procedures," only 28 percent of teachers said "that they have received guidance about how to respond if they suspect a student has used generative AI in ways that are not allowed, such as plagiarism," according to research from the Center for Democracy & Technology.

"In our research last school year, we saw schools struggling to adopt policies surrounding the use of generative AI, and are heartened to see big gains since then," the President and CEO of the Center for Democracy & Technology, Alexandra Reeve Givens, said in a press release. "But the biggest risks of this technology being used in schools are going unaddressed, due to gaps in training and guidance to educators on the responsible use of generative AI and related detection tools. As a result, teachers remain distrustful of students, and more students are getting in trouble."

AI can improve efficiency and reduce human error in manufacturing, logistics, and customer service industries. It can accelerate scientific research by analyzing large datasets, simulating complex systems, and aiding in data-driven discoveries. It can be used to optimize resource consumption, monitor pollution, and develop sustainable solutions to environmental challenges. AI-powered tools can enhance personalized learning experiences and make education more accessible to a broader range of individuals. AI has the potential to revolutionize medical diagnoses, drug discovery, and personalized treatment plans.

The positives are undeniable, but that doesn't mean the negatives are worth ignoring, Camille Carlton, a senior policy manager at the Center for Humane Technology, told Mashable.

"I don't think that these potential future benefits should be driving our decisions to not pay attention and put up guardrails around these technologies today," she said. "Because the potential for these technologies to increase inequality, to increase polarization, to continue to [affect the deterioration of our] mental health, [and] increase systemic bias, are all very real and they're all happening right now."

You might consider anyone who fears the negative aspects of generative AI to be a Luddite, and maybe they are, but in a more literal sense than how the word is used today. The Luddites were a group of English workers in the early 1800s who destroyed automated textile manufacturing machines, not because they feared the technology, but because there was nothing in place to ensure their jobs were safe from replacement by it. Beyond this, they weren't just economically precarious; they were starving at the hands of the machines. Now, of course, the word is used derogatorily to describe a person who fears or avoids new technology simply because it is new.

In reality, there are loads of questionable use cases for generative AI. When we consider healthcare, for instance, there are too many variables to worry about before we can trust AI with our physical and mental well-being. AI can automate repetitive tasks like healthcare diagnostics by analyzing medical images such as X-rays and MRIs to help diagnose diseases and identify abnormalities, which can be good, but the majority of Americans are concerned about the increased use of AI in healthcare, according to a survey from Morning Consult. Their fear is reasonable: training data in medicine is often incomplete, biased, or inaccurate, and the technology is only as good as the data it has, which can lead to incorrect diagnoses, treatment recommendations, or research conclusions. Moreover, medical training data is often not representative of diverse populations, which could result in unequal access to accurate diagnoses and treatments, particularly for patients of color.

Generative AI models don't understand medical nuance, can't provide any kind of solid bedside manner, lack accountability, and can be misinterpreted by medical professionals. And it becomes far more difficult to ensure patient privacy when data is being passed through AI; obtaining informed consent and preventing the misuse of generated content also become critical issues.

"The public views it as something that whatever it spits out is like God," Washington said. "And unfortunately it is not true." Washington points out that most generative AI models are created by collecting information from the internet and not everything on the internet is accurate or free from bias.

The automation potential of AI could also lead to unemployment and economic inequality. In March, Goldman Sachs predicted that AI could eventually replace 300 million full-time jobs globally, affecting nearly one-fifth of employment. AI eliminated nearly 4,000 jobs in May 2023, and more than one-third of business leaders say AI replaced workers last year, according to CNBC. This has led unions in creative industries, like SAG-AFTRA, to fight for more comprehensive protection against AI. OpenAI's new AI video generator Sora makes the threat of job replacement even more real for creative industries with its ability to generate photorealistic videos from a simple prompt.

"If we do get to a place where we can find a cure for cancer with AI, does that happen before inequality is so terrible that we have complete social unrest?" Carlton questioned. "Does it happen after polarization continues to increase? Does it happen after we see more democratic decline?"

We don't know. The fear with AI isn't necessarily that the sci-fi movie I, Robot will become some kind of documentary, but more that the people who choose to use it might not have the best intentions or even know the repercussions of their own work.

"This idea that artificial intelligence is going to progress to a point where humans dont have any work to do or dont have any purpose has never resonated with me," Sam Altman, the CEO of OpenAI, which launched ChatGPT, said last year. "There will be some people who choose not to work, and I think thats great. I think that should be a valid choice, and there are a lot of other ways to find meaning in life. But Ive never seen convincing evidence that what we do with better tools is to work less."

A few more questionable use cases for AI include the following: It can be used for invasive surveillance, data mining, and profiling, posing risks to individual privacy and civil liberties; if not carefully developed, AI systems can inherit biases from their training data, leading to discriminatory outcomes in areas such as hiring, lending, and criminal justice; AI can raise ethical questions, such as the potential for autonomous weapons, decision-making in critical situations, and the rights of AI entities; over-reliance on AI systems could lead to a loss of human control and decision-making, potentially impacting society's ability to understand and address complex issues.

And then there's the disinformation. Don't take my word for it; Altman fears that, too.

"I'm particularly worried that these models could be used for large-scale disinformation," Altman said. "Now that they're getting better at writing computer code, [they] could be used for offensive cyberattacks." For instance, consider the AI voice-generated robocalls created to sound like President Joe Biden.

Generative AI is great at creating misinformation, University of Washington professor Kate Starbird told Axios. The MIT Technology Review even reported that humans are more likely to believe disinformation generated by AI than by other humans.

"Generative AI creates content that sounds reasonable and plausible, but has little regard for accuracy," Starbird said. "In other words, it functions as a [bullshit] generator." Indeed, some studies show AI-generated misinformation to be even more persuasive than false content created by humans.

"Instead of asking this question about net good or net badwhat is more beneficial for all of us to be asking is, good how?" Carlton said. "What are the costs of these systems to get us to the better place we're trying to get to? And good for who, who is going to experience this better place? How are the benefits going to be distributed to [those] left behind? When do these benefits show up? Do they show up after [the] harms have already happened a society with worse mental health, worse polarization? And does the direction that we're going in reflect our values? Are we creating the world that we want to live in?"

Governments have caught on to AI's risks and created regulations to mitigate harms. The European Parliament passed a sweeping "AI Act" to protect against high-risk AI applications, and the Biden Administration signed an executive order to address AI concerns in cybersecurity and biometrics.

Generative AI is part of our innate interest in growth and progress, moving ahead as fast as possible in a race to be bigger, better, and more technologically advanced than our neighbors. As Donella Meadows, the environmental scientist and educator who wrote The Limits to Growth and Thinking in Systems: A Primer, asks: Why?

"Growth is one of the stupidest purposes ever invented by any culture; weve got to have an 'enough,'" Meadows said. "We should always ask 'growth of what, and why, and for whom, and who pays the cost, and how long can it last, and whats the cost to the planet, and how much is enough?'"

The entire point of generative AI is to recreate human intelligence. But who is deciding that standard? Usually, that answer is wealthy, white elites. And who decided that a lack of human intelligence is a problem at all? Perhaps we need more empathy, something AI can't compute.

Original post:

Is AI good or bad? A deeper look at its potential and pitfalls - Mashable

Inside Washington's Role in Microsoft's Big AI Deal With G42 – The New York Times

A relatively small deal, by Microsoft's standards anyway, is leading to big geopolitical ripples on Tuesday.

The tech giant is investing $1.5 billion in G42, an Emirati artificial intelligence company. On its face, that may appear to be just another effort to claim a foothold in a fast-growing A.I. company, as Microsoft has done with OpenAI and others.

But details of the transaction reflect a collaboration between the Biden administration and Microsoft to box Beijing out of tech influence in the Gulf, as the U.S. and China compete for A.I. superiority.

The terms of the deal: G42 will be able to sell Microsoft services that use powerful A.I. chips; in return, it will use Microsoft's Azure cloud services for its A.I. offerings.

More important, G42 agreed to strip out equipment from Chinese companies like Huawei from its systems, eliminating what U.S. officials worry could be a potential backdoor for Chinese intelligence agencies.

It's meant to bring an influential A.I. company into America's orbit. G42 is seen as an increasingly important player in the Gulf and beyond: its chairman is Sheikh Tahnoon bin Zayed, the Emirates' top security official and a brother of the country's ruler, and it has struck a number of high-profile business partnerships. Peng Xiao, the company's C.E.O., was previously associated with DarkMatter, an Emirati spyware company that had employed former spies.

More here:

Inside Washingtons Role in Microsofts Big AI Deal With G42 - The New York Times

Adobe Previews Breakthrough AI Innovations to Advance Professional Video Workflows Within Adobe Premiere Pro – Adobe

SAN JOSE, Calif. – Today, Adobe (Nasdaq: ADBE) previewed breakthrough generative AI innovations within Adobe Premiere Pro that will reimagine video creation and production workflows, delivering new creative possibilities that every pro editor needs to keep up with the high-speed pace of video production. New generative AI tools coming to Premiere Pro this year will enable users to streamline the editing of all videos, including adding or removing objects in a scene or extending an existing clip. These new editing workflows will be powered by a new video model that will join the family of Firefly models, including Image, Vector, Design and Text Effects. Adobe is continuing to develop Firefly AI models in the categories where it has deep domain expertise, such as imaging, video, audio, and 3D, and will deeply integrate these models across Creative Cloud and Adobe Express.

Adobe also previewed its vision for bringing third-party generative AI models directly into Adobe applications like Premiere Pro. Creative Cloud has always had a rich partner and plugin ecosystem, and this evolution expands Premiere Pro as the most flexible, extensible professional video tool that fits any workflow. Adobe customers want choice and endless possibilities as they create and edit the next generation of entertainment and media.

Early explorations show how professional video editors could, in the future, leverage video generation models from OpenAI and Runway, integrated in Premiere Pro, to generate B-roll to edit into their project. It also shows how Pika Labs could be used with the Generative Extend tool to add a few seconds to the end of a shot.

By delivering new generative AI capabilities powered by Adobe Firefly and a variety of third-party models, Adobe is giving customers access to a range of new capabilities without having to leave the workflows they use every day in Premiere Pro.

"Adobe is reimagining every step of video creation and production workflow to give creators new power and flexibility to realize their vision," said Ashley Still, senior vice president, Creative Product Group at Adobe. "By bringing generative AI innovations deep into core Premiere Pro workflows, we are solving real pain points that video editors experience every day, while giving them more space to focus on their craft."

Adobe also announced upcoming general availability of AI-powered audio workflows in Premiere Pro, including new fade handles, clip badges, dynamic waveforms, AI-based category tagging and more.

The Future of Generative AI in Premiere Pro

Adobe showcased a technology preview of generative AI workflows coming to Premiere Pro later this year, powered by a new video model for Firefly. In addition, an early sneak shows how professional editors might leverage video generation models from OpenAI and Runway in the future to generate B-roll, or how they might use Pika Labs with the Generative Extend tool to add a few seconds to the end of a shot.

While much of the early conversation about generative AI has focused on a competition among companies to produce the best AI model, Adobe sees a future in which thousands of specialized models emerge, each strong in its own niche. Adobe's decades of experience with AI show that AI-generated content is most useful when it's a natural part of what you do every day. For most Adobe customers, generative AI is just a starting point and source of inspiration to explore creative directions.

Adobe aims to provide industry-standard tools and seamless workflows that let users use any materials from any sources across any platform to create at the speed of their imaginations. Whether that means Adobe Firefly or other specialized AI models, Adobe is working to make the integration process as seamless as possible from within Adobe applications.

Adobe has developed its own AI models with a commitment to responsible innovation and plans to apply what it's learned to ensure that the integration of third-party models within its applications is consistent with the company's safety standards. As one of the founders of the Content Authenticity Initiative, Adobe pledges to attach Content Credentials, free, open-source technology that serves as a "nutrition label" for online content, to assets produced within its applications so users can see how content was made and what AI models were used to generate the content created on Adobe platforms.

AI-Powered Audio Workflows Generally Available in Premiere Pro

In addition to Adobe's new generative AI video tools, new audio workflows in Premiere Pro will be generally available to customers in May, giving editors everything they need to precisely control and improve the quality of their sound. The latest features include new fade handles, clip badges, dynamic waveforms, and AI-based category tagging.

In addition, the AI-powered Enhance Speech tool, which instantly removes unwanted noise and improves poorly recorded dialogue, has been generally available since February.

About Adobe

Adobe is changing the world through digital experiences. For more information, visit http://www.adobe.com.

© 2024 Adobe. All rights reserved. Adobe and the Adobe logo are either registered trademarks or trademarks of Adobe in the United States and/or other countries. All other trademarks are the property of their respective owners.

PR Contact: Frankie Tobin, Adobe, Ftobin@Adobe.com

Source: Adobe

Link:

Adobe Previews Breakthrough AI Innovations to Advance Professional Video Workflows Within Adobe Premiere Pro - Adobe

Microsoft invests in Arabic AI firm as U.S. tries to limit China’s sway – The Washington Post

Microsoft plans to invest $1.5 billion in an Abu Dhabi-based artificial intelligence company, a deal that could limit China's influence in the Gulf region amid rising technological competition with the United States.

In a blog post Tuesday, Brad Smith, Microsoft vice chair and president, said the deal with G42 will deepen international ties for artificial intelligence while ensuring the technology follows "world-leading standards for safe, trusted and responsible AI."

"Our two companies will work together not only in the UAE, but to bring AI and digital infrastructure and services to underserved nations," wrote Smith, who will join G42's board of directors.

Microsoft negotiated the deal with the governments of the United States and the United Arab Emirates for more than a year to ensure all parties were comfortable with the terms, according to a person familiar with the matter, who spoke on the condition of anonymity to discuss the private talks. As part of its negotiations with the U.S. government, G42 has agreed to strip Chinese gear from its operations, following concerns about its use of Huawei equipment, the person said.

AI has emerged as a flash point amid increasing tensions between the United States and China. The deal announced Wednesday positions a key American tech giant to have influence over the burgeoning AI sector in the UAE, amid concerns that China seeks to invest more in the region.

"The United States regularly works with other countries to expand opportunities for U.S. businesses while balancing national security concerns," Commerce Department spokesperson Brittany Caplin said Tuesday.

"That includes collaborating with countries like the UAE, a global player in cutting-edge technology, and working toward verifiable commitments on how these technologies should be safely developed, protected and deployed," she said. "When responsibly managed, investments like the one announced today have the potential to further innovation in digital technologies around the world."

Lawmakers from both parties said they were encouraged by the potential of the deal to promote U.S. technological leadership.

"American innovation and American values should be leading the world, and deals like this are one of the ways we accomplish that," Sen. Mark R. Warner (D-Va.), chair of the Senate Intelligence Committee, said in a statement. "At the same time, we have to make sure any agreement is structured in a way that protects the crown jewels of our intellectual property."

As G42 realized it needed the commercial backing of a larger tech giant, it began talks with Microsoft in late 2022, G42 told The Washington Post in an email.

Microsoft in recent years has thrust itself to the forefront of the AI revolution by partnering with smaller companies, including a multibillion-dollar investment in OpenAI, the maker of ChatGPT. Microsoft has recently been increasing its investments outside the United States, such as a major deal with the French company Mistral. The deals have allowed Microsoft to skirt traditional antitrust scrutiny while asserting itself as a formidable tech leader on the global stage.

Peng Xiao, group chief executive at G42, said the deal will significantly enhance his company's global presence by allowing it to build on Microsoft's cloud infrastructure.

G42 told The Post it began to phase out existing Chinese components and incorporate more of Microsoft's technology in 2022. In 2023, G42 began discussions with the U.S. Commerce Department, following a more formal collaboration with Microsoft. In April 2023, they announced a joint plan to create artificial intelligence solutions using Microsoft's Azure cloud system, and later agreed to introduce AI tools that meet the complicated security needs of government users. And in November, Microsoft made G42's Arabic AI language model, called Jais, available on its cloud.

G42 has been subject to congressional scrutiny over its close ties to China.

In a Jan. 4 letter to Commerce Secretary Gina Raimondo, a congressional committee asked her agency to consider export controls on G42 and several related companies.

In the letter, the House Select Committee on the Chinese Communist Party states that Xiao is affiliated with an expansive network of companies that support the Chinese military and enable its human-rights abuses.

It also states that Xiao served in a leadership position with a subsidiary of the UAE-based company DarkMatter. DarkMatter develops spyware and surveillance tools that can be used to spy on dissidents, journalists, politicians, and U.S. companies, according to the letter.

In addition to hacking and spying on UAE dissidents, DarkMatter drove division in the U.S. government by hiring some Americans while hacking others. Three former U.S. intelligence operatives admitted illegal conduct in 2021, prompting legislation to limit where similar veterans could work.

Rep. Mike Gallagher (R-Wis.), the committee chairman, later said he was satisfied with G42s decision to sell its stake in Chinese companies.

"The UAE is a critical and powerful ally, one that will only become more important for regional and global stability as AI advances," Gallagher said in February. "Therefore, it is imperative the United States and the UAE further understand and mitigate any high-risk commercial and research relationships with [People's Republic of China] entities," he added.

Joseph Menn contributed to this report.

Read more:

Microsoft invests in Arabic AI firm as U.S. tries to limit China's sway - The Washington Post

Google Maps will use AI to help you find out-of-the-way EV chargers – The Verge

Google Maps is rolling out some new updates designed to make locating an electric vehicle charging station less stressful. And to accomplish this, it will (of course) lean heavily on artificial intelligence.

Google says it will use AI to summarize customer reviews of EV chargers and to display more specific directions to certain chargers, such as those located in parking garages or other hard-to-find places. And there will be more prompts in the app to encourage users to submit feedback after using an EV charger, which will then be fed into the algorithm for future AI-powered summaries.

Google Maps users will be asked to submit details like whether the charging session was successful or the type of plug they used. Those details will then be used to offer more accurate descriptions of EV chargers for future customers.

This isnt the first time that Google has touted its use of AI to improve the experience for EV owners. Previously, the company has deployed AI tools to help with route planning and EV plug locating.

In addition, EV owners will now be able to see quick and useful information about charging when their vehicle's battery starts to get low. Real-time plug availability and charging speeds will be viewable on native versions of Google Maps in cars with the company's software built in, like some existing Volvo and Polestar models. Those native versions of Google Maps will also suggest charging breaks on multi-stop journeys.

Lastly, Google Maps will take EV chargers into consideration when travelers are looking for places to stop overnight. The company is adding an EV charger filter to its travel search tool so EV owners can find spots with charging plugs.

Go here to see the original:

Google Maps will use AI to help you find out-of-the-way EV chargers - The Verge

The AI That Could Heal a Divided Internet – TIME

In the 1990s and early 2000s, technologists made the world a grand promise: new communications technologies would strengthen democracy, undermine authoritarianism, and lead to a new era of human flourishing. But today, few people would agree that the internet has lived up to that lofty goal.

Today, on social media platforms, content tends to be ranked by how much engagement it receives. Over the last two decades, politics, the media, and culture have all been reshaped to meet a single, overriding incentive: posts that provoke an emotional response often rise to the top.

Efforts to improve the health of online spaces have long focused on content moderation, the practice of detecting and removing bad content. Tech companies hired workers and built AI to identify hate speech, incitement to violence, and harassment. That worked imperfectly, but it stopped the worst toxicity from flooding our feeds.

There was one problem: while these AIs helped remove the bad, they didn't elevate the good. "Do you see an internet that is working, where we are having conversations that are healthy or productive?" asks Yasmin Green, the CEO of Google's Jigsaw unit, which was founded in 2010 with a remit to address threats to open societies. "No. You see an internet that is driving us further and further apart."

What if there were another way?

Jigsaw believes it has found one. On Monday, the Google subsidiary revealed a new set of AI tools, or classifiers, that can score posts based on the likelihood that they contain good content: Is a post nuanced? Does it contain evidence-based reasoning? Does it share a personal story, or foster human compassion? By returning a numerical score (from 0 to 1) representing the likelihood of a post containing each of those virtues and others, these new AI tools could allow the designers of online spaces to rank posts in a new way. Instead of posts that receive the most likes or comments rising to the top, platforms could, in an effort to foster a better community, choose to put the most nuanced comments, or the most compassionate ones, first.

Read More: How Americans Can Tackle Political Division Together

The breakthrough was made possible by recent advances in large language models (LLMs), the type of AI that underpins chatbots like ChatGPT. In the past, even training an AI to detect simple forms of toxicity, like whether a post was racist, required millions of labeled examples. Those older forms of AI were often brittle and ineffectual, not to mention expensive to develop. But the new generation of LLMs can identify even complex linguistic concepts out of the box, and calibrating them to perform specific tasks is far cheaper than it used to be. Jigsaw's new classifiers can identify attributes like whether a post contains a personal story, curiosity, nuance, compassion, reasoning, affinity, or respect. "It's starting to become feasible to talk about something like building a classifier for compassion, or curiosity, or nuance," says Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI. "These fuzzy, contextual, know-it-when-I-see-it kind of concepts: we're getting much better at detecting those."

This new ability could be a watershed for the internet. Green, and a growing chorus of academics who study the effects of social media on public discourse, argue that content moderation is necessary but not sufficient to make the internet a better place. Finding a way to boost positive content, they say, could have cascading positive effects both at the personal level, in our relationships with each other, and at the scale of society. "By changing the way that content is ranked, if you can do it in a broad enough way, you might be able to change the media economics of the entire system," says Stray, who did not work on the Jigsaw project. "If enough of the algorithmic distribution channels disfavored divisive rhetoric, it just wouldn't be worth it to produce it any more."

One morning in late March, Tin Acosta joins a video call from Jigsaw's offices in New York City. On the conference room wall behind her, there is a large photograph from the 2003 Rose Revolution in Georgia, when peaceful protestors toppled the country's Soviet-era government. Other rooms have similar photos of people in Syria, Iran, Cuba and North Korea "using tech and their voices to secure their freedom," Jigsaw's press officer, who is also in the room, tells me. The photos are intended as a reminder of Jigsaw's mission to use technology as a force for good, and its duty to serve people in both democracies and repressive societies.

On her laptop, Acosta fires up a demonstration of Jigsaw's new classifiers. Using a database of 380 comments from a recent Reddit thread, the Jigsaw senior product manager begins to demonstrate how ranking the posts using different classifiers would change the sorts of comments that rise to the top. The thread's original poster had asked for life-affirming movie recommendations. Sorted by the default ranking on Reddit, posts that have received the most upvotes, the top comments are short and contain little beyond the titles of popular movies. Then Acosta clicks a drop-down menu and selects Jigsaw's reasoning classifier. The posts reshuffle. Now, the top comments are more detailed. "You start to see people being really thoughtful about their responses," Acosta says. "Here's somebody talking about School of Rock, not just the content of the plot, but also the ways in which the movie has changed his life and made him fall in love with music." (TIME agreed not to quote directly from the comments, which Jigsaw said were used for demonstrative purposes only and had not been used to train its AI models.)

Acosta chooses another classifier, one of her favorites: whether a post contains a personal story. The top comment is now from a user describing how, under both a heavy blanket and the influence of drugs, they had ugly-cried so hard at Ke Huy Quan's monologue in Everything Everywhere All at Once that they'd had to pause the movie multiple times. Another top comment describes how a movie trailer had inspired them to quit a job they were miserable with. Another tells the story of how a movie reminded them of their sister, who had died 10 years earlier. "This is a really great way to look through a conversation and understand it a little better than [ranking by] engagement or recency," Acosta says.

For the classifiers to have an impact on the wider internet, they would require buy-in from the biggest tech companies, which are all locked in a zero-sum competition for our attention. Even though they were developed inside Google, the tech giant has no plans to start using them to help rank its YouTube comments, Green says. Instead, Jigsaw is making the tools freely available for independent developers, in the hopes that smaller online spaces, like message boards and newspaper comment sections, will build up an evidence base that the new forms of ranking are popular with users.

Read More: The Subreddit /r/Collapse Has Become the Doomscrolling Capital of the Internet. Can Its Users Break Free?

There are some reasons to be skeptical. For all its flaws, ranking by engagement is egalitarian. Popular posts get amplified regardless of their content, and in this way social media has allowed marginalized groups to gain a voice long denied to them by traditional media. Introducing AI into the mix could threaten this state of affairs. A wide body of research shows that LLMs have plenty of ingrained biases; if applied too hastily, Jigsaw's classifiers might end up boosting voices that are already prominent online, thus further marginalizing those that aren't. The classifiers could also exacerbate the problem of AI-generated content flooding the internet, by providing spammers with an easy recipe for AI-generated content that's likely to get amplified. Even if Jigsaw evades those problems, tinkering with online speech has become a political minefield. Both conservatives and liberals are convinced their posts are being censored; meanwhile, tech companies are under fire for making unaccountable decisions that affect the global public square. Jigsaw argues that its new tools may allow tech platforms to rely less on the controversial practice of content moderation. But there's no getting away from the fact that changing what kind of speech gets rewarded online will always have political opponents.

Still, academics say that given a chance, Jigsaw's new AI tools could result in a paradigm shift for social media. Elevating more desirable forms of online speech could create new incentives for more positive online, and possibly offline, social norms. "If a platform amplifies toxic comments, then people get the signal they should do terrible things," says Ravi Iyer, a technologist at the University of Southern California who helps run the nonprofit Psychology of Technology Research Network. "If the top comments are informative and useful, then people follow the norm and create more informative and useful comments."

The new algorithms have come a long way from Jigsaw's earlier work. In 2017, the Google unit released Perspective API, an algorithm for detecting toxicity. The free tool was widely used, including by the New York Times, to downrank or remove negative comments under articles. But experimenting with the tool, which is still available online, reveals the ways that AI tools can carry hidden biases. "You're a f-cking hypocrite" is, according to the classifier, 96% likely to be a toxic phrase. But many other hateful phrases, according to the tool, are likely to be non-toxic, including the neo-Nazi slogan "Jews will not replace us" (41%) and transphobic language like "trans women are men" (36%). The tool breaks when confronted with a slur that is commonly directed at South Asians in the U.K. and Canada, returning the error message: "We don't yet support that language, but we're working on it!"
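
Perspective API is publicly documented, so scores like these can be reproduced. Here is a minimal sketch of a request for a TOXICITY score; the endpoint and response shape follow Google's public documentation as best I can verify, and the API key is a placeholder:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You're a hypocrite"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},  # ask only for a toxicity score
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()

# The API returns a probability-like score between 0 and 1.
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.0%}")
```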

To be sure, 2017 was a very different era for AI. Jigsaw has made efforts to mitigate biases in its new classifiers, which are unlikely to make such basic errors. Its team tested the new classifiers on a set of comments that were identical except for the names of different identity groups, and said it found no hint of bias. Still, the patchy effectiveness of the older Perspective API serves as a reminder of the pitfalls of relying on AI to make value judgments about language. Even today's powerful LLMs are not free from bias, and their fluency can often conceal their limitations. They can discriminate against African American English; they function poorly in some non-English languages; and they can treat equally-capable job candidates differently based on their names alone. More work will be required to ensure Jigsaw's new AIs don't have less visible forms of bias. "Of course, there are things that you have to watch out for," says Iyer, who did not work on the Jigsaw project. "How do we make sure that [each classifier] captures the diversity of ways that people express these concepts?"

In a paper published earlier this month, Acosta and her colleagues set out to test how readers would respond to a list of comments ranked using Jigsaw's new classifiers, compared to comments sorted by recency. They found that readers preferred the comments sorted by the classifiers, finding them to be more informative, respectful, trustworthy, and interesting. But they also found that ranking comments by just one classifier on its own, like reasoning, could put users off. In its press release launching the classifiers on Monday, Jigsaw says it intends for its tools to be mixed and matched. That's possible because all they do is return scores between zero and one, so it's possible to write a formula that combines several scores into a single number and use that number as a ranking signal. Web developers could choose to rank comments using a carefully calibrated mixture of compassion, respect, and curiosity, for example. They could also throw engagement into the mix to make sure that posts that receive lots of likes still get boosted too.
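
To make that mixing and matching concrete, here is a minimal sketch of the kind of formula described above: a weighted sum of per-attribute scores used as a sort key. The attribute names, weights, and posts are hypothetical examples, not Jigsaw's actual tooling:

```python
# Each classifier returns a score in [0, 1] for a post; a weighted sum of
# those scores becomes the ranking signal. All names and numbers below are
# illustrative.

def ranking_score(scores: dict, weights: dict) -> float:
    """Combine several per-attribute scores into one ranking signal."""
    return sum(w * scores.get(attr, 0.0) for attr, w in weights.items())

weights = {"compassion": 0.4, "respect": 0.3, "curiosity": 0.2, "engagement": 0.1}

posts = [
    {"id": "a", "scores": {"compassion": 0.9, "respect": 0.8, "curiosity": 0.3, "engagement": 0.2}},
    {"id": "b", "scores": {"compassion": 0.2, "respect": 0.5, "curiosity": 0.4, "engagement": 0.9}},
    {"id": "c", "scores": {"compassion": 0.6, "respect": 0.9, "curiosity": 0.8, "engagement": 0.4}},
]

ranked = sorted(posts, key=lambda p: ranking_score(p["scores"], weights), reverse=True)
print([p["id"] for p in ranked])  # highest combined score first
```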

Just as removing negative content from the internet has received its fair share of pushback, boosting certain forms of desirable content is likely to prompt complaints that tech companies are putting their thumbs on the political scales. Jigsaw is quick to point out that its classifiers are not only apolitical, but also propose to boost types of content that few people would take issue with. In tests, Jigsaw found the tools did not disproportionately boost comments that were seen by users as unfavorable to Republicans or Democrats. "We have a track record of delivering a product that's useful for publishers across the political spectrum," Green says. "The emphasis is on opening up conversations." Still, the question of power remains: who gets to decide which kinds of content are desirable? Jigsaw's hope is that by releasing the technology publicly, different online spaces can each choose what works for them, thus avoiding any one hegemonic platform taking that decision on behalf of the entire internet.

For Stray, the Berkeley scientist, there is a tantalizing prospect to an internet where positive content gets boosted. Many people, he says, think of online misinformation as leading to polarization. And it can. But it also works the other way around, he says. The demand for low-quality information arises, at least in part, because people are already polarized. If the tools result in people becoming less polarized, then that should actually change the demand-side for certain types of lower quality content. It's hypothetical, he cautions, but it could lead to a virtuous circle, where declining demand for misinformation feeds a declining supply.

Why would platforms agree to implement these changes? Almost by definition, ranking by engagement is the most effective way to keep users onsite, thus keeping eyeballs on the ads that drive up revenue. For the big platforms, that means both the continued flow of profits and the fact that users aren't spending time with a competitor's app. Replacing engagement-based ranking with something less engaging seems like a tough ask for companies already battling to keep their users' attention.

"That's true," Stray says. But, he notes, there are different forms of engagement. There's short-term engagement, which is easy for platforms to optimize for: is a tweak to a platform likely to make users spend more time scrolling during the next hour? Platforms can and do make changes to boost their short-term engagement, Stray says, but those kinds of changes often mean boosting low-quality, engagement-bait types of content, which tend to put users off in the long term.

The alternative is long-term engagement. How might a change to a platform influence a user's likelihood of spending more time scrolling during the next three months? Long-term engagement is healthier, but far harder to optimize for, because it's harder to isolate the connection between cause and effect. Many different factors are acting upon the user at the same time. Large platforms want users to be returning over the long term, Stray says, and for them to cultivate healthy relationships with their products. But it's difficult to measure, so optimizing for short-term engagement is often an easier choice.

Jigsaw's new algorithms could change that calculus. "The hope is, if we get better at building products that people want to use in the long run, that will offset the race to the bottom," Stray says. "At least somewhat."

See the rest here:

The AI That Could Heal a Divided Internet - TIME

2 Superb Artificial Intelligence (AI) Growth Stocks to Buy Before They Soar 63% and 70%, According to Select Wall … – The Motley Fool

Two of last year's biggest winners still have room to run.

The past year or so has marked a coming-of-age story for artificial intelligence (AI). Generative AI's ability to generate original content and streamline time-consuming processes represents a potential step change in how business gets done. The opportunity to profit from the productivity gains made possible by these next-generation algorithms has many companies scrambling to determine how best to integrate them into their day-to-day operations.

Despite generating market-beating performances in 2023, some market watchers believe there's more to come for AI stocks. In fact, a pair of Wall Street analysts suggest two still have potential upside of 63% and 70% over the coming year.

If there is one stock that exemplifies the potential represented by recent advancements in AI, Nvidia (NVDA -3.87%) would certainly be in the running. Its graphics processing units (GPUs) use parallel processing, the ability to process a multitude of mathematical calculations simultaneously by breaking the data into smaller chunks to make it more manageable. This not only revolutionized gaming but also enabled the evolution of AI.

In the company's fiscal 2024 (ended Jan. 28), Nvidia delivered revenue that grew 126% year over year to roughly $61 billion, while its diluted earnings per share (EPS) soared 586% to $11.93. For its fiscal 2025 first quarter (ends April 30), Nvidia is guiding for record revenue of $24 billion, an increase of 234% year over year. Management left no doubt that the accelerating demand for generative AI was behind the surge.

Despite the stock rising 488% since the start of 2023 (as of this writing), Rosenblatt analyst Hans Mosesmann, the self-professed "most bullish analyst on Nvidia," has a buy rating and a Street-high price target of $1,400 on the stock. That represents potential upside of 63% compared to Monday's closing price. Mosesmann said, "The shift to accelerated compute away from general compute is reaching a tipping point, and a disruptive new app, generative AI, is creating a whole new industry."
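
As a quick sanity check on how the quoted figures relate, upside is just the target divided by the close, minus one. The closing price below is back-solved from the article's 63%, not quoted directly:

```python
# upside = target / close - 1; back-solving the close from the quoted 63%
# (the article gives the target and the upside, not the closing price).
target = 1_400.00
upside = 0.63
implied_close = target / (1 + upside)
print(f"Implied Monday close: ${implied_close:,.2f}")   # about $858.90
print(f"Check: {target / implied_close - 1:.0%} upside")  # 63%
```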

The analyst isn't alone in his bullish take. Of the 56 analysts who issued an opinion in March, 52 rated the stock a buy or strong buy, and not one recommended selling. That's amazing, considering Wall Street never agrees on anything.

Nvidia stock is currently selling for 34 times forward earnings. While that's a premium to the multiple of 27 for the S&P 500, the company's triple-digit growth and strong tailwinds suggest it's worthy of a premium.

While Nvidia provides the GPUs necessary to train and run AI systems, Super Micro Computer (SMCI -1.66%), also known as Supermicro, incorporates these state-of-the-art chips and others into high-end servers specially designed to withstand the rigors of AI processing.

The company's focus on energy efficiency is well-documented, as is its building-block architecture. Supermicro offers a variety of free-air, liquid-cooling, and traditional air-cooling technologies, providing AI-centric server solutions for every budget and technology level.

In the company's fiscal 2024 second quarter, Supermicro generated revenue that grew 103% year over year to roughly $3.7 billion, while its adjusted EPS jumped 71% to $5.59. And management believes the company's growth spurt will continue to accelerate. Supermicro is forecasting third-quarter revenue of $3.9 billion and EPS of $5.22 at the midpoint of its guidance, which would represent year-over-year growth of 205% and 220%, respectively.

The stock is up an incredible 975% since the start of 2023, but some believe significant upside remains. Loop Capital analyst Ananda Baruah has a buy rating and a Street-high price target of $1,500 on the stock. That represents potential upside of 70% compared to Tuesday's closing price.

Baruah is increasingly confident in Supermicro's position in the generative AI server space and its leadership in addressing the increasing complexity and scale of the server industry. Furthermore, he believes the company can achieve a revenue run rate of $40 billion by the end of its fiscal 2026. For context, that runs circles around the $7.1 billion it generated in its fiscal 2023 (ended Jun. 30).

The analyst isn't alone in his bullish take. Of the 15 analysts who covered the stock in March, 11 rated it a buy or strong buy, and none recommended selling. Supermicro stock is also attractively priced, currently selling for 3 times forward sales.

Danny Vena has positions in Nvidia and Super Micro Computer. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy.

The rest is here:

2 Superb Artificial Intelligence (AI) Growth Stocks to Buy Before They Soar 63% and 70%, According to Select Wall ... - The Motley Fool

I finally found a practical use for AI, and I may never garden the same way again – TechRadar

I love my garden and hate gardening. These emotions are not as fundamentally opposed as they appear. A beautiful garden is satisfying and lovely to look at. Getting such a garden is tremendously challenging, because it takes constant upkeep and also because creating a sustainable and manageable landscape is a skill I lack. ChatGPT, it turns out, is an enthusiastic and, it appears, quite capable gardener.

Artificial intelligence (AI) is, with apologies to Billie Eilish, the "What was I made for?" of modern technologies. It has a million possibilities but no set purpose, and often what you get out of it depends on what you put into it. I've spent countless hours trying to use AI as a screenwriter, a programmer, or just a friendly interlocutor. Typically, the AIs do well at first but devolve in the long run. Some of my earliest tests are a year or more old, and in AI years that's decades.

In recent weeks I've started playing with some of the latest large language models (LLMs) and image generators available in Microsoft Copilot, Google Gemini, and OpenAI's ChatGPT. While I used the early AI chatbots on the desktop, I've switched almost entirely to mobile platforms, and it turns out that a mobile AI gardening assistant is the landscape advisor I (and my lawn) didn't know I was missing.

Now, I did use ChatGPT Plus, the $20-a-month subscription-level AI that brings GPT-4 and DALL-E 3. GPT-4 is notable because it's been trained on information newer than GPT-3.5's September 2021 cut-off. How this more up-to-date knowledge might impact gardening advice, I'm not certain, though I guess fresh details about weather trends might help it steer me toward plants that match my actual climate and not what previous decades have shown (OpenAI trains its large language model by scraping vast amounts of data from across the internet, and I think it's safe to assume some of that is publicly available weather data).

My front and back lawns aren't terrible, but there are issues. On one side of the front of my house is a sparse landscape where most plants go to die. ChatGPT accepts text, voice, and visual input, so I started by taking a photo of this problem area and asking ChatGPT to identify all the plants and, after explaining my location (northeastern US) and the general climate (temperate with moderate rainfall), to suggest some landscape ideas.
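
The consumer ChatGPT app has no code interface, but the same text-plus-image request can be approximated programmatically. Here is a minimal sketch using OpenAI's Python SDK and the vision-capable chat model available at the time of writing; the file name and prompt are illustrative, and the model name should be checked against current documentation:

```python
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local photo of the problem area as a data URL.
with open("front_bed.jpg", "rb") as f:  # hypothetical file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision-capable model at time of writing
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Identify the plants in this photo. I'm in the "
                     "northeastern US, temperate with moderate rainfall. "
                     "Suggest some landscape ideas for this bed."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
    max_tokens=500,
)
print(response.choices[0].message.content)
```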

In its own straightforward but conversational style, ChatGPT accurately identified most of the plants.

ChatGPT then suggested a collection of plants that might work in my environment, which included coneflowers, black-eyed Susans, hostas, bee balm, and ferns, and then explained how, as perennials, the plants will return on their own, year after year. I showed the list to my wife, who has a bit more flower, plant, and shrubbery sense, and she broadly agreed with the selection.

The benefit of using an AI chatbot goes beyond simple query and response. It's the conversation that makes it powerful. I realized that this part of my house only gets sun about half the day, and asked ChatGPT if these plants would be okay.

"Yes, the plants mentioned can generally handle partial sun conditions, which is about 4-6 hours of direct sunlight per day, quite well," it replied. It then detailed how each plant reacts to and handles limited sun or shade. This made me feel comforted.

I was impressed, but before I went further in my AI gardening quest I decided to challenge ChatGPT's plant-identification powers.

There's a dogwood that grows almost weed-like outside one of my den windows. I took a photo through the window (and a screen) and asked ChatGPT, "What kind of plant is this? Is it a weed? How can I best care for it in my environment?" (I can do this by speaking to the app or typing it in.)

As was typical, ChatGPT returned its response a second later (sometimes, with longer responses, I would watch it almost thinking through and typing out the detailed responses).

"The plant in the image appears to be a tree or large shrub that is just beginning to leaf out, suggesting that the photo was taken in early spring."

That was all accurate and fair. ChatGPT told me it didn't look like a weed, but admitted that without leaves or fruit on it, it would be hard to identify. All fair enough.

Later, I decided to trick ChatGPT and took a photo of a realistic-looking artificial bouquet. I failed.

"The bouquet in the picture features flowers that look very similar to artificial ones. Real flowers typically have more variation in color and form, as well as natural imperfections," wrote ChatGPT.

With my confidence in ChatGPT growing, I moved back to my landscaping projects.

Every few years we mulch our flower, tree, and shrub beds, and I wondered if it was time to do it again. So, I took a photo of one flower bed and asked ChatGPT if it needed more mulch and, if so, what kind.

As was often the case, ChatGPT was purposefully inexact in its response. It told me the area "looks fairly covered, but it's a bit difficult to determine the thickness of the mulch layer from this photo." Mulch thickness, it told me, should be between 2 and 3 inches, which is advice I've heard from human gardeners. It then gave me a list of mulch types, and their benefits and deficits.

All helpful, though without AI fingers to sink into the mulch, I'm not sure if ChatGPT could ever accurately tell me if I have enough of it.

One of my biggest landscape problem areas is my backyard. Last year half of it died, and I struggled to bring it back to its former brilliance (one of many chapters in my book, Why I Hate Gardening). I took a photo of my sorry lawn, and asked ChatGPT what was wrong with it, and to suggest ways to improve it.

ChatGPT didn't make fun of my poor lawn care skills, but admitted there were signs of "thinning grass and patchy areas where the soil is exposed." Possible causes included "soil compaction, nutrient deficiencies, pest and disease problems."

It followed that with a list of things that I already do, except for aeration and pH adjustment. I know how to aerate a lawn (you punch a bunch of holes through the lawn bed) but didn't know about adding lime to raise pH. Very smart, ChatGPT.

I followed by asking which grass seed I should use. ChatGPT returned a clear list of five seed options suited to my climate.

ChatGPT doesn't automatically show its sources, and when I asked it where it got its gardening advice, it offered paragraphs of more general gardening advice for my location, but with a citation link attached to each paragraph. Sources included Finegardening.com, Savvygardener, and the US Government.

I also asked ChatGPT to help me find a harvestable plant for a narrow flower bed alongside my house. It suggested, among other things, strawberries. That was interesting, because when we moved in strawberries were in that space, but they weren't healthy enough to survive.

One area where ChatGPT stumbled was when I asked it to create landscape images based on its suggestions. Even when I asked for realistic images, the integrated DALL-E system returned fanciful landscapes and homes that looked little, if at all, like my home. They were cartoony, packed with too many plants, and with added landscape areas that do not exist.

This surprised me, since ChatGPT and DALL-E always had my original photo as a reference, but it chose to ignore most of the details and instead create landscapes for a fantasy home.

That's alright; I don't need images to apply some of this advice. In general, ChatGPT is a confident and able gardening and landscape advisor, and I think I could do worse asking my green-thumb neighbor for advice that might include far too much detail about the state of his home life.

Visit link:

I finally found a practical use for AI, and I may never garden the same way again - TechRadar