Category Archives: AI

Microsoft invests in Arabic AI firm as U.S. tries to limit China’s sway – The Washington Post

Microsoft plans to invest $1.5 billion in an Abu Dhabi-based artificial intelligence company, a deal that could limit China's influence in the Gulf region amid rising technological competition with the United States.

In a blog post Tuesday, Brad Smith, Microsoft vice chair and president, said the deal with G42 will deepen international ties for artificial intelligence while ensuring the technology follows "world-leading standards" for safe, trusted and responsible AI.

"Our two companies will work together not only in the UAE, but to bring AI and digital infrastructure and services to underserved nations," wrote Smith, who will join G42's board of directors.

Microsoft negotiated the deal with the governments of the United States and the United Arab Emirates for more than a year to ensure all parties were comfortable with the terms, according to a person familiar with the matter, who spoke on the condition of anonymity to discuss the private talks. As part of its negotiations with the U.S. government, G42 has agreed to strip Chinese gear from its operations, following concerns about its use of Huawei equipment, the person said.

AI has emerged as a flash point amid increasing tensions between the United States and China. The deal announced Wednesday positions a key American tech giant to have influence over the burgeoning AI sector in the UAE, amid concerns that China seeks to invest more in the region.

"The United States regularly works with other countries to expand opportunities for U.S. businesses while balancing national security concerns," Commerce Department spokesperson Brittany Caplin said Tuesday.

"That includes collaborating with countries like the UAE, a global player in cutting-edge technology, and working toward verifiable commitments on how these technologies should be safely developed, protected and deployed," she said. "When responsibly managed, investments like the one announced today have the potential to further innovation in digital technologies around the world."

Lawmakers from both parties said they were encouraged by the potential of the deal to promote U.S. technological leadership.

"American innovation and American values should be leading the world, and deals like this are one of the ways we accomplish that," Sen. Mark R. Warner (D-Va.), chair of the Senate Intelligence Committee, said in a statement. "At the same time, we have to make sure any agreement is structured in a way that protects the crown jewels of our intellectual property."

As G42 realized it needed the commercial backing of a larger tech giant, it began talks with Microsoft in late 2022, G42 told The Washington Post in an email.

Microsoft in recent years has thrust itself to the forefront of the AI revolution by partnering with smaller companies, including a multibillion-dollar investment in OpenAI, the maker of ChatGPT. Microsoft has recently been increasing its investments outside the United States, such as a major deal with the French company Mistral. The deals have allowed Microsoft to skirt traditional antitrust scrutiny, while asserting itself as a formidable tech leader on the global stage.

Peng Xiao, group chief executive at G42, said the deal will significantly enhance his company's global presence by allowing it to build on Microsoft's cloud infrastructure.

G42 told The Post it began to phase out existing Chinese components and incorporate more of Microsoft's technology in 2022. In 2023, G42 began discussions with the U.S. Commerce Department, following a more formal collaboration with Microsoft. In April 2023, they announced a joint plan to create artificial intelligence solutions using Microsoft's Azure cloud system, and later agreed to introduce AI tools that meet the complicated security needs of government users. And in November, Microsoft made G42's Arabic AI language model, called Jais, available on its cloud.

G42 has been subject to congressional scrutiny over its close ties to China.

In a Jan. 4 letter to Commerce Secretary Gina Raimondo, a congressional committee asked her agency to consider export controls on G42 and several related companies.

In the letter, the House Select Committee on the Chinese Communist Party states that Xiao is affiliated with an expansive network of companies that support the Chinese military and enable its human-rights abuses.

It also states that Xiao served in a leadership position with a subsidiary of the UAE-based company DarkMatter. DarkMatter develops spyware and surveillance tools that can be used to spy on dissidents, journalists, politicians, and U.S. companies, according to the letter.

In addition to hacking and spying on UAE dissidents, DarkMatter drove division in the U.S. government by hiring some Americans while hacking others. Three former U.S. intelligence operatives admitted illegal conduct in 2021, prompting legislation to limit where similar veterans could work.

Rep. Mike Gallagher (R-Wis.), the committee chairman, later said he was satisfied with G42's decision to sell its stake in Chinese companies.

"The UAE is a critical and powerful ally, one that will only become more important for regional and global stability as AI advances," Gallagher said in February. "Therefore, it is imperative the United States and the UAE further understand and mitigate any high-risk commercial and research relationships with [People's Republic of China] entities," he added.

Joseph Menn contributed to this report.


Google Maps will use AI to help you find out-of-the-way EV chargers – The Verge

Google Maps is rolling out some new updates designed to make locating an electric vehicle charging station less stressful. And to accomplish this, it will (of course) lean heavily on artificial intelligence.

Google says it will use AI to summarize customer reviews of EV chargers and to display more specific directions to certain chargers, such as those located in parking garages or other hard-to-find places. And there will be more prompts in the app to encourage users to submit their feedback after using an EV charger, which will then be fed into the algorithm for future AI-powered summaries.

Google Maps users will be asked to submit details like whether the charging session was successful or the type of plug they used. Those details will then be used to offer more accurate descriptions of EV chargers for future customers.

This isn't the first time that Google has touted its use of AI to improve the experience for EV owners. Previously, the company has deployed AI tools to help with route planning and EV plug locating.

In addition, EV owners will now be able to see quick and useful information about charging when their vehicle's battery starts to get low. Real-time plug availability and charging speeds will be viewable on native versions of Google Maps in cars with the company's software built in, like some existing Volvo and Polestar models. Those cars are also getting native versions of Google Maps that suggest charging breaks on multi-stop journeys.

Lastly, Google Maps will take EV chargers into consideration when travelers are looking for places to stop overnight. The company is adding an EV charger filter to its travel search tool so EV owners can find spots with charging plugs.


The AI That Could Heal a Divided Internet – TIME

In the 1990s and early 2000s, technologists made the world a grand promise: new communications technologies would strengthen democracy, undermine authoritarianism, and lead to a new era of human flourishing. But today, few people would agree that the internet has lived up to that lofty goal.

Today, on social media platforms, content tends to be ranked by how much engagement it receives. Over the last two decades politics, the media, and culture have all been reshaped to meet a single, overriding incentive: posts that provoke an emotional response often rise to the top.

Efforts to improve the health of online spaces have long focused on content moderation, the practice of detecting and removing bad content. Tech companies hired workers and built AI to identify hate speech, incitement to violence, and harassment. That worked imperfectly, but it stopped the worst toxicity from flooding our feeds.

There was one problem: while these AIs helped remove the bad, they didn't elevate the good. "Do you see an internet that is working, where we are having conversations that are healthy or productive?" asks Yasmin Green, the CEO of Google's Jigsaw unit, which was founded in 2010 with a remit to address threats to open societies. "No. You see an internet that is driving us further and further apart."

What if there were another way?

Jigsaw believes it has found one. On Monday, the Google subsidiary revealed a new set of AI tools, or classifiers, that can score posts based on the likelihood that they contain good content: Is a post nuanced? Does it contain evidence-based reasoning? Does it share a personal story, or foster human compassion? By returning a numerical score (from 0 to 1) representing the likelihood of a post containing each of those virtues and others, these new AI tools could allow the designers of online spaces to rank posts in a new way. Instead of posts that receive the most likes or comments rising to the top, platforms could, in an effort to foster a better community, choose to put the most nuanced comments, or the most compassionate ones, first.
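The ranking change described above is simple to picture in code. This sketch uses an invented `nuance_score` stand-in (Jigsaw's actual classifiers are not reproduced here) to show how the same comments sort differently by engagement versus by a 0-to-1 virtue score:

```python
# Stand-in classifier: NOT Jigsaw's model, just an illustrative heuristic
# returning a 0-1 likelihood that a post is "nuanced".
def nuance_score(text: str) -> float:
    hedges = ("however", "on the other hand", "it depends", "although")
    return min(1.0, 0.2 + 0.2 * sum(h in text.lower() for h in hedges))

posts = [
    {"text": "Wrong. End of story.", "likes": 950},
    {"text": "It depends; although the data is noisy, however there is a trend.", "likes": 12},
]

# Engagement ranking surfaces the most-liked post; virtue ranking surfaces
# the post the classifier scores as most nuanced, regardless of likes.
by_engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)
by_nuance = sorted(posts, key=lambda p: nuance_score(p["text"]), reverse=True)
```

Under engagement ranking the blunt, popular post wins; under the nuance classifier the thoughtful, barely-liked post rises to the top, which is exactly the reshuffling Jigsaw demonstrates.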

The breakthrough was made possible by recent advances in large language models (LLMs), the type of AI that underpins chatbots like ChatGPT. In the past, even training an AI to detect simple forms of toxicity, like whether a post was racist, required millions of labeled examples. Those older forms of AI were often brittle and ineffectual, not to mention expensive to develop. But the new generation of LLMs can identify even complex linguistic concepts out of the box, and calibrating them to perform specific tasks is far cheaper than it used to be. Jigsaw's new classifiers can identify attributes like whether a post contains a personal story, curiosity, nuance, compassion, reasoning, affinity, or respect. "It's starting to become feasible to talk about something like building a classifier for compassion, or curiosity, or nuance," says Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI. "These fuzzy, contextual, know-it-when-I-see-it kind of concepts, we're getting much better at detecting those."

This new ability could be a watershed for the internet. Green, and a growing chorus of academics who study the effects of social media on public discourse, argue that content moderation is necessary but not sufficient to make the internet a better place. Finding a way to boost positive content, they say, could have cascading positive effects both at the personal level (our relationships with each other) and at the scale of society. "By changing the way that content is ranked, if you can do it in a broad enough way, you might be able to change the media economics of the entire system," says Stray, who did not work on the Jigsaw project. "If enough of the algorithmic distribution channels disfavored divisive rhetoric, it just wouldn't be worth it to produce it any more."

One morning in late March, Tin Acosta joins a video call from Jigsaw's offices in New York City. On the conference room wall behind her, there is a large photograph from the 2003 Rose Revolution in Georgia, when peaceful protesters toppled the country's Soviet-era government. Other rooms have similar photos of people in Syria, Iran, Cuba and North Korea "using tech and their voices to secure their freedom," Jigsaw's press officer, who is also in the room, tells me. The photos are intended as a reminder of Jigsaw's mission to use technology as a force for good, and its duty to serve people in both democracies and repressive societies.

On her laptop, Acosta fires up a demonstration of Jigsaw's new classifiers. Using a database of 380 comments from a recent Reddit thread, the Jigsaw senior product manager begins to demonstrate how ranking the posts using different classifiers would change the sorts of comments that rise to the top. The thread's original poster had asked for life-affirming movie recommendations. Sorted by the default ranking on Reddit (posts that have received the most upvotes), the top comments are short, and contain little beyond the titles of popular movies. Then Acosta clicks a drop-down menu, and selects Jigsaw's reasoning classifier. The posts reshuffle. Now, the top comments are more detailed. "You start to see people being really thoughtful about their responses," Acosta says. "Here's somebody talking about School of Rock: not just the content of the plot, but also the ways in which the movie has changed his life and made him fall in love with music." (TIME agreed not to quote directly from the comments, which Jigsaw said were used for demonstrative purposes only and had not been used to train its AI models.)

Acosta chooses another classifier, one of her favorites: whether a post contains a personal story. The top comment is now from a user describing how, under both a heavy blanket and the influence of drugs, they had ugly-cried so hard at Ke Huy Quan's monologue in Everything Everywhere All at Once that they'd had to pause the movie multiple times. Another top comment describes how a movie trailer had inspired them to quit a job they were miserable with. Another tells the story of how a movie reminded them of their sister, who had died 10 years earlier. "This is a really great way to look through a conversation and understand it a little better than [ranking by] engagement or recency," Acosta says.

For the classifiers to have an impact on the wider internet, they would require buy-in from the biggest tech companies, which are all locked in a zero-sum competition for our attention. Even though they were developed inside Google, the tech giant has no plans to start using them to help rank its YouTube comments, Green says. Instead, Jigsaw is making the tools freely available for independent developers, in the hopes that smaller online spaces, like message boards and newspaper comment sections, will build up an evidence base that the new forms of ranking are popular with users.

There are some reasons to be skeptical. For all its flaws, ranking by engagement is egalitarian. Popular posts get amplified regardless of their content, and in this way social media has allowed marginalized groups to gain a voice long denied to them by traditional media. Introducing AI into the mix could threaten this state of affairs. A wide body of research shows that LLMs have plenty of ingrained biases; if applied too hastily, Jigsaw's classifiers might end up boosting voices that are already prominent online, thus further marginalizing those that aren't. The classifiers could also exacerbate the problem of AI-generated content flooding the internet, by providing spammers with an easy recipe for AI-generated content that's likely to get amplified. Even if Jigsaw evades those problems, tinkering with online speech has become a political minefield. Both conservatives and liberals are convinced their posts are being censored; meanwhile, tech companies are under fire for making unaccountable decisions that affect the global public square. Jigsaw argues that its new tools may allow tech platforms to rely less on the controversial practice of content moderation. But there's no getting away from the fact that changing what kind of speech gets rewarded online will always have political opponents.

Still, academics say that given a chance, Jigsaw's new AI tools could result in a paradigm shift for social media. Elevating more desirable forms of online speech could create new incentives for more positive online (and possibly offline) social norms. "If a platform amplifies toxic comments, then people get the signal they should do terrible things," says Ravi Iyer, a technologist at the University of Southern California who helps run the nonprofit Psychology of Technology Research Network. "If the top comments are informative and useful, then people follow the norm and create more informative and useful comments."

The new algorithms have come a long way from Jigsaw's earlier work. In 2017, the Google unit released Perspective API, an algorithm for detecting toxicity. The free tool was widely used, including by the New York Times, to downrank or remove negative comments under articles. But experimenting with the tool, which is still available online, reveals the ways that AI tools can carry hidden biases. "You're a f-cking hypocrite" is, according to the classifier, 96% likely to be a toxic phrase. But many other hateful phrases, according to the tool, are likely to be non-toxic, including the neo-Nazi slogan "Jews will not replace us" (41%) and transphobic language like "trans women are men" (36%). The tool breaks when confronted with a slur that is commonly directed at South Asians in the U.K. and Canada, returning the error message: "We don't yet support that language, but we're working on it!"
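For readers curious what "96% likely to be toxic" means mechanically: Perspective API is an HTTP service that takes a comment and returns per-attribute probability scores. The sketch below builds the request body and reads a score from a sample response without making a network call; the field names follow the API's public documentation, and the 0.96 value simply mirrors the article's example.

```python
# Build the JSON body for a Perspective API AnalyzeComment request
# (field names per the API's public documentation).
def build_request(text: str) -> dict:
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

# Extract the summary toxicity probability (0-1) from a response body.
def toxicity(response: dict) -> float:
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

req = build_request("You're a hypocrite")

# Illustrative response shape; the 0.96 mirrors the article's example score.
sample_response = {
    "attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.96}}}
}
score = toxicity(sample_response)  # 0.96, i.e. "96% likely toxic"
```

In real use the request is POSTed to the API's `comments:analyze` endpoint with an API key; the article's point is that these probabilities, however precise-looking, can encode the biases of the training data behind them.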

To be sure, 2017 was a very different era for AI. Jigsaw has made efforts to mitigate biases in its new classifiers, which are unlikely to make such basic errors. Its team tested the new classifiers on a set of comments that were identical except for the names of different identity groups, and said it found no hint of bias. Still, the patchy effectiveness of the older Perspective API serves as a reminder of the pitfalls of relying on AI to make value judgments about language. Even today's powerful LLMs are not free from bias, and their fluency can often conceal their limitations. They can discriminate against African American English; they function poorly in some non-English languages; and they can treat equally capable job candidates differently based on their names alone. More work will be required to ensure Jigsaw's new AIs don't have less visible forms of bias. "Of course, there are things that you have to watch out for," says Iyer, who did not work on the Jigsaw project. "How do we make sure that [each classifier] captures the diversity of ways that people express these concepts?"

In a paper published earlier this month, Acosta and her colleagues set out to test how readers would respond to a list of comments ranked using Jigsaw's new classifiers, compared to comments sorted by recency. They found that readers preferred the comments sorted by the classifiers, finding them to be more informative, respectful, trustworthy, and interesting. But they also found that ranking comments by just one classifier on its own, like reasoning, could put users off. In its press release launching the classifiers on Monday, Jigsaw says it intends for its tools to be mixed and matched. That's possible because all they do is return scores between zero and one, so it's possible to write a formula that combines several scores together into a single number, and use that number as a ranking signal. Web developers could choose to rank comments using a carefully calibrated mixture of compassion, respect, and curiosity, for example. They could also throw engagement into the mix to make sure that posts that receive lots of likes still get boosted.
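The mix-and-match formula described above can be as simple as a weighted sum. This sketch blends several 0-to-1 classifier scores with a dampened engagement term into one ranking signal; the weights, score values, and the log dampening are invented for illustration and are not Jigsaw's actual tuning.

```python
import math

def ranking_signal(scores: dict, likes: int,
                   weights: dict, engagement_weight: float = 0.2) -> float:
    """Combine per-attribute classifier scores (each 0-1) and engagement
    into a single number usable as a sort key."""
    virtue = sum(weights[k] * scores[k] for k in weights)
    # log1p dampens raw like counts so a viral post can't drown out quality.
    engagement = math.log1p(likes) / 10
    return virtue + engagement_weight * engagement

# Hypothetical calibration: mostly compassion, some respect and curiosity.
weights = {"compassion": 0.5, "respect": 0.3, "curiosity": 0.2}
comment_scores = {"compassion": 0.9, "respect": 0.8, "curiosity": 0.4}

signal = ranking_signal(comment_scores, likes=120, weights=weights)
```

A platform would compute `signal` for every comment and sort descending, letting it tune how much engagement still matters relative to the virtues it wants to promote.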

Just as removing negative content from the internet has received its fair share of pushback, boosting certain forms of desirable content is likely to prompt complaints that tech companies are putting their thumbs on the political scales. Jigsaw is quick to point out that its classifiers are not only apolitical, but also propose to boost types of content that few people would take issue with. In tests, Jigsaw found the tools did not disproportionately boost comments that were seen by users as unfavorable to Republicans or Democrats. "We have a track record of delivering a product that's useful for publishers across the political spectrum," Green says. "The emphasis is on opening up conversations." Still, the question of power remains: who gets to decide which kinds of content are desirable? Jigsaw's hope is that by releasing the technology publicly, different online spaces can each choose what works for them, thus avoiding any one hegemonic platform taking that decision on behalf of the entire internet.

For Stray, the Berkeley scientist, there is a tantalizing prospect to an internet where positive content gets boosted. Many people, he says, think of online misinformation as leading to polarization. And it can. "But it also works the other way around," he says. The demand for low-quality information arises, at least in part, because people are already polarized. "If the tools result in people becoming less polarized, then that should actually change the demand-side for certain types of lower quality content." It's hypothetical, he cautions, but it could lead to a virtuous circle, where declining demand for misinformation feeds a declining supply.

Why would platforms agree to implement these changes? Almost by definition, ranking by engagement is the most effective way to keep users onsite, thus keeping eyeballs on the ads that drive up revenue. For the big platforms, that means both the continued flow of profits, and the fact that users aren't spending time with a competitor's app. Replacing engagement-based ranking with something less engaging seems like a tough ask for companies already battling to keep their users' attention.

"That's true," Stray says. But, he notes, there are different forms of engagement. There's short-term engagement, which is easy for platforms to optimize for: is a tweak to a platform likely to make users spend more time scrolling during the next hour? Platforms can and do make changes to boost their short-term engagement, Stray says, but those kinds of changes often mean boosting low-quality, engagement-bait types of content, which tend to put users off in the long term.

The alternative is long-term engagement. How might a change to a platform influence a user's likelihood of spending more time scrolling during the next three months? Long-term engagement is healthier, but far harder to optimize for, because it's harder to isolate the connection between cause and effect. Many different factors are acting upon the user at the same time. Large platforms want users to be returning over the long term, Stray says, and for them to cultivate healthy relationships with their products. But it's difficult to measure, so optimizing for short-term engagement is often an easier choice.

Jigsaw's new algorithms could change that calculus. "The hope is, if we get better at building products that people want to use in the long run, that will offset the race to the bottom," Stray says. "At least somewhat."


2 Superb Artificial Intelligence (AI) Growth Stocks to Buy Before They Soar 63% and 70%, According to Select Wall … – The Motley Fool

Two of last year's biggest winners still have room to run.

The past year or so has marked a coming-of-age story for artificial intelligence (AI). Generative AI's ability to generate original content and streamline time-consuming processes represents a potential step change in how business gets done. The opportunity to profit from the productivity gains made possible by these next-generation algorithms has many companies scrambling to determine how best to integrate them into their day-to-day operations.

Despite generating market-beating performances in 2023, some market watchers believe there's more to come for AI stocks. In fact, a pair of Wall Street analysts suggest two still have potential upside of 63% and 70% over the coming year.

Image source: Getty Images.

If there is one stock that exemplifies the potential represented by recent advancements in AI, Nvidia (NVDA -3.87%) would certainly be in the running. Its graphics processing units (GPUs) use parallel processing, the ability to process a multitude of mathematical calculations simultaneously by breaking the data into smaller chunks to make it more manageable. This not only revolutionized gaming but also enabled the evolution of AI.

In the company's fiscal 2024 (ended Jan. 28), Nvidia delivered revenue that grew 126% year over year to roughly $61 billion, while its diluted earnings per share (EPS) soared 586% to $11.93. For its fiscal 2025 first quarter (ends April 30), Nvidia is guiding for record revenue of $24 billion, an increase of 234% year over year. Management left no doubt that the accelerating demand for generative AI was behind the surge.

Despite the stock rising 488% since the start of 2023 (as of this writing), Rosenblatt analyst Hans Mosesmann, the self-professed "most bullish analyst on Nvidia," has a buy rating and a Street-high price target of $1,400 on the stock. That represents potential upside of 63% compared to Monday's closing price. Mosesmann said, "The shift to accelerated compute away from general compute is reaching a tipping point, and a disruptive new app, generative AI, is creating a whole new industry."

The analyst isn't alone in his bullish take. Of the 56 analysts who issued an opinion in March, 52 rated the stock a buy or strong buy, and not one recommended selling. That's amazing, considering Wall Street never agrees on anything.

Nvidia stock is currently selling for 34 times forward earnings. While that's a premium to the multiple of 27 for the S&P 500, the company's triple-digit growth and strong tailwinds suggest it's worthy of a premium.

While Nvidia provides the GPUs necessary to train and run AI systems, Super Micro Computer (SMCI -1.66%), also known as Supermicro, incorporates these state-of-the-art chips and others into high-end servers specially designed to withstand the rigors of AI processing.

The company's focus on energy efficiency is well-documented, as is its building-block architecture. Supermicro offers a variety of free-air, liquid-cooling, and traditional air-cooling technologies, providing AI-centric server solutions for every budget and technology level.

In the company's fiscal 2024 second quarter, Supermicro generated revenue that grew 103% year over year to roughly $3.7 billion, while its adjusted EPS jumped 71% to $5.59. And management believes the company's growth spurt will continue to accelerate. Supermicro is forecasting third-quarter revenue of $3.9 billion and EPS of $5.22 at the midpoint of its guidance, which would represent year-over-year growth of 205% and 220%, respectively.

The stock is up an incredible 975% since the start of 2023, but some believe significant upside remains. Loop Capital analyst Ananda Baruah has a buy rating and a Street-high price target of $1,500 on the stock. That represents potential upside of 70% compared to Tuesday's closing price.

Baruah is increasingly confident in Supermicro's position in the generative AI server space and its leadership in addressing the increasing complexity and scale of the server industry. Furthermore, he believes the company can achieve a revenue run rate of $40 billion by the end of its fiscal 2026. For context, that runs circles around the $7.1 billion it generated in its fiscal 2023 (ended Jun. 30).

The analyst isn't alone in his bullish take. Of the 15 analysts who covered the stock in March, 11 rated it a buy or strong buy, and none recommended selling. Supermicro stock is also attractively priced, currently selling for 3 times forward sales.

Danny Vena has positions in Nvidia and Super Micro Computer. The Motley Fool has positions in and recommends Nvidia. The Motley Fool has a disclosure policy.


I finally found a practical use for AI, and I may never garden the same way again – TechRadar

I love my garden and hate gardening. These emotions are not as fundamentally opposed as they appear. A beautiful garden is satisfying and lovely to look at. Getting such a garden is tremendously challenging, because it takes constant upkeep and also because creating a sustainable and manageable landscape is a skill I lack. ChatGPT, it turns out, is an enthusiastic and, it appears, quite capable gardener.

Artificial intelligence (AI) is, with apologies to Billie Eilish, the "What was I made for?" of modern technologies. It has a million possibilities but no set purpose, and often what you get out of it depends on what you put into it. I've spent countless hours trying to use AI as a screenwriter, a programmer, or just a friendly interlocutor. Typically, the AIs do well at first but devolve in the long run. Some of my earliest tests are a year or more old, and in AI years that's decades.

In recent weeks I've started playing with some of the latest large language models (LLMs) and image generators available in Microsoft Copilot, Google Gemini, and OpenAI's ChatGPT. While I used the early AI chatbots on the desktop, I've switched almost entirely to mobile platforms, and it turns out that a mobile AI gardening assistant is the landscape advisor I (and my lawn) didn't know I was missing.

Now, I did use ChatGPT Plus, the $20-a-month subscription-level AI that brings GPT-4 and DALL-E 3. GPT-4 is notable because it's been trained on information newer than GPT-3.5's September 2021 cut-off. How this more up-to-date knowledge might impact gardening advice, I'm not certain, though I guess fresher details about weather trends might help it steer me toward plants that match my actual climate and not what previous decades have shown (OpenAI trains its large language model by scraping vast amounts of data from across the internet, and I think it's safe to assume some of that is publicly available weather data).

My front and back lawns aren't terrible, but there are issues. On one side of the front of my house is a sparse landscape where most plants go to die. ChatGPT accepts text, voice, and visual input, so I started by taking a photo of this problem area and then asking ChatGPT to identify all the plants and, while explaining my location (northeastern US) and the general climate (temperate with moderate rainfall), asked it to suggest some landscape ideas.

In its own straightforward but conversational style, ChatGPT accurately identified most of the plants.

ChatGPT then suggested a collection of plants that might work in my environment, which included coneflowers, black-eyed Susans, hostas, bee-balm, and ferns, and then explained how, as perennials, the plants will return on their own, year after year. I showed the list to my wife, who has a bit more flower, plant, and shrubbery sense, and she broadly agreed with the selection.

The benefit of using an AI chatbot goes beyond simple query and response. It's the conversation that makes it powerful. I realized that this part of my house only gets sun about half the day, and asked ChatGPT if these plants would be okay.

"Yes, the plants mentioned can generally handle partial sun conditions, which is about 4-6 hours of direct sunlight per day, quite well," it replied. It then detailed how each plant reacts to and handles limited sun or shade. This made me feel comforted.

I was impressed, but before I went further in my AI gardening quest I decided to challenge ChatGPT's plant-identification powers.

There's a dogwood that grows almost weed-like outside one of my den windows. I took a photo through the window (and a screen) and asked ChatGPT, "What kind of plant is this? Is it a weed? How can I best care for it in my environment?" (I can do this by speaking to the app or typing it in.)

As was typical, ChatGPT returned its response a second later (sometimes, with longer responses, I would watch it almost thinking through and typing out the detailed responses).

"The plant in the image appears to be a tree or large shrub that is just beginning to leaf out, suggesting that the photo was taken in early spring."

That was all accurate and fair. ChatGPT told me it didn't look like a weed, but admitted that without leaves or fruit on it, it would be hard to identify. All fair enough.

Later, I decided to trick ChatGPT and took a photo of a realistic-looking artificial bouquet. I failed.

"The bouquet in the picture features flowers that look very similar to artificial ones. Real flowers typically have more variation in color and form, as well as natural imperfections," wrote ChatGPT.

With my confidence in ChatGPT growing, I moved back to my landscaping projects.


Every few years we mulch our flower, tree, and shrub beds, and I wondered if it was time to do it again. So, I took a photo of one flower bed and asked ChatGPT if it needed more mulch and, if so, what kind.

As was often the case, ChatGPT was purposefully inexact in its response. It told me the area "looks fairly covered, but it's a bit difficult to determine the thickness of the mulch layer from this photo." Mulch, it told me, should be 2 to 3 inches thick, which is advice I've heard from human gardeners. It then gave me a list of mulch types, with their benefits and drawbacks.

All helpful, though without AI fingers to sink into the mulch, I'm not sure if ChatGPT could ever accurately tell me if I have enough of it.

One of my biggest landscape problem areas is my backyard. Last year half of it died, and I struggled to bring it back to its former brilliance (one of many chapters in my book, Why I Hate Gardening). I took a photo of my sorry lawn, and asked ChatGPT what was wrong with it, and to suggest ways to improve it.

ChatGPT didn't make fun of my poor lawn care skills, but admitted there were signs of "thinning grass and patchy areas where the soil is exposed." Possible causes included "soil compaction, nutrient deficiencies, pest and disease problems."

It followed that with a list of things that I already do, except for aeration and pH adjustment. I know how to aerate a lawn (you punch a bunch of holes through the lawn bed) but didn't know about adding lime to raise pH. Very smart, ChatGPT.

I followed by asking which grass seed I should use. ChatGPT returned a clear list of five seed options suited to my climate.

ChatGPT doesn't automatically show its sources, and when I asked it where it got its gardening advice, it offered paragraphs of more general gardening advice for my location, but with a citation link attached to each paragraph. Sources included Finegardening.com, Savvygardener, and the US Government.


I also asked ChatGPT to help me find a harvestable plant for a narrow flower bed alongside my house. It suggested, among other things, strawberries. That was interesting, because when we moved in strawberries were in that space, but they weren't healthy enough to survive.

One area where ChatGPT stumbled was when I asked it to create landscape images based on its suggestions. Even when I asked for realistic images, the integrated Dall-E system returned fanciful landscapes and homes that looked little, if at all, like my home. They were cartoony, packed with too many plants, and with added landscape areas that do not exist.

This surprised me, since ChatGPT and DALL-E always had my original photo as a reference, but it chose to ignore most of the details and instead create landscapes for a fantasy home.

That's alright; I don't need images to apply some of this advice. In general, ChatGPT is a confident and able gardening and landscape advisor, and I'd probably do worse asking my green-thumb neighbor, whose advice might include far too much detail about the state of his home life.

Visit link:

I finally found a practical use for AI, and I may never garden the same way again - TechRadar

Elon Musk: AI will be smarter than any human around the end of next year – Ars Technica

Elon Musk, owner of Tesla and the X (formerly Twitter) platform, on January 22, 2024.

On Monday, Tesla CEO Elon Musk predicted the imminent rise in AI superintelligence during a live interview streamed on the social media platform X. "My guess is we'll have AI smarter than any one human probably around the end of next year," Musk said in his conversation with hedge fund manager Nicolai Tangen.

Just prior to that, Tangen had asked Musk, "What's your take on where we are in the AI race just now?" Musk told Tangen that AI "is the fastest advancing technology I've seen of any kind, and I've seen a lot of technology." He described computers dedicated to AI increasing in capability by "a factor of 10 every year, if not every six to nine months."
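Taken at face value, that cadence compounds quickly. A bit of back-of-the-envelope arithmetic (mine, not from the interview) shows what "a factor of 10 every year, if not every six to nine months" implies on an annualized basis:

```python
def annual_factor(per_period: float, months: float) -> float:
    # Growth per year if capability multiplies by `per_period` every `months` months.
    return per_period ** (12.0 / months)


print(annual_factor(10, 12))  # 10x every 12 months stays 10x per year
print(annual_factor(10, 9))   # 10x every 9 months is roughly 21.5x per year
print(annual_factor(10, 6))   # 10x every 6 months is 100x per year
```

In other words, the shorter end of Musk's range is not a modest acceleration; it's an order of magnitude more growth per year.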

Musk made the prediction with an asterisk, saying that shortages of AI chips and high AI power demands could limit AI's capability until those issues are resolved. "Last year, it was chip-constrained," Musk told Tangen. "People could not get enough Nvidia chips. This year, it's transitioning to a voltage transformer supply. In a year or two, it's just electricity supply."

But not everyone is convinced that Musk's crystal ball is free of cracks. Grady Booch, a frequent critic of AI hype on social media who is perhaps best known for his work in software architecture, told Ars in an interview, "Keep in mind that Mr. Musk has a profoundly bad record at predicting anything associated with AI; back in 2016, he promised his cars would ship with FSD safety level 5, and here we are, closing in on a decade later, still waiting."

Creating artificial intelligence at least as smart as a human (frequently called "AGI" for artificial general intelligence) is often seen as inevitable among AI proponents, but there's no broad consensus on exactly when that milestone will be reached, or on the exact definition of AGI, for that matter.

"If you define AGI as smarter than the smartest human, I think it's probably next year, within two years," Musk added in the interview with Tangen while discussing AGI timelines.

Even with uncertainties about AGI, that hasn't kept companies from trying. ChatGPT creator OpenAI, which launched with Musk as a co-founder in 2015, lists developing AGI as its main goal. Musk has not been directly associated with OpenAI for years (unless you count a recent lawsuit against the company), but last year, he took aim at the business of large language models by forming a new company called xAI. Its main product, Grok, functions similarly to ChatGPT and is integrated into the X social media platform.

Booch gives credit to Musk's business successes but casts doubt on his forecasting ability. "Albeit a brilliant if not rapacious businessman, Mr. Musk vastly overestimates both the history as well as the present of AI while simultaneously diminishing the exquisite uniqueness of human intelligence," says Booch. "So in short, his prediction is, to put it in scientific terms, batshit crazy."

So when will we get AI that's smarter than a human? Booch says there's no real way to know at the moment. "I reject the framing of any question that asks when AI will surpass humans in intelligence because it is a question filled with ambiguous terms and considerable emotional and historic baggage," he says. "We are a long, long way from understanding the design that would lead us there."

We also asked Hugging Face AI researcher Dr. Margaret Mitchell to weigh in on Musk's prediction. "Intelligence ... is not a single value where you can make these direct comparisons and have them mean something," she told us in an interview. "There will likely never be agreement on comparisons between human and machine intelligence."

But even with that uncertainty, she feels there is one aspect of AI she can more reliably predict: "I do agree that neural network models will reach a point where men in positions of power and influence, particularly ones with investments in AI, will declare that AI is smarter than humans. By end of next year, sure. That doesn't sound far off base to me."

Continue reading here:

Elon Musk: AI will be smarter than any human around the end of next year - Ars Technica

South Korea to invest $7 billion in AI in bid to retain edge in chips – Reuters

SEOUL, April 9 (Reuters) - South Korean President Yoon Suk Yeol said on Tuesday his country will invest 9.4 trillion won ($6.94 billion) in artificial intelligence by 2027 as part of efforts to retain a leading global position in cutting-edge semiconductor chips.

The announcement, which also includes a separate 1.4 trillion won fund to foster AI semiconductor firms, comes as South Korea tries to keep pace with countries like the United States, China and Japan that are also providing massive policy support to strengthen semiconductor supply chains on their own turf.

Semiconductors are a key foundation of South Korea's export-driven economy. In March, chip exports reached their highest in 21 months at $11.7 billion, or nearly a fifth of total exports shipped by Asia's fourth-largest economy.

"Current competition in semiconductors is an industrial war and an all-out war between nations," Yoon told a meeting of policymakers and chip industry executives on Tuesday.

By earmarking investments and a fund, South Korea plans to significantly expand research and development in AI chips such as neural processing units (NPUs) and next-generation high-bandwidth memory chips, the government said in a statement.

South Korean authorities will also promote the development of next-generation artificial general intelligence (AGI) and safety technologies that go beyond existing models.

Yoon has set a target for South Korea to become one of the top three countries in AI technology including chips, and take a 10% or more share of the global system semiconductor market by 2030.

"Just as we have dominated the world with memory chips for the past 30 years, we will write a new semiconductor myth with AI chips in the next 30 years," Yoon said.

($1 = 1,355.1200 won)


Reporting by Joyce Lee; Editing by Ed Davies

Our Standards: The Thomson Reuters Trust Principles.

See the rest here:

South Korea to invest $7 billion in AI in bid to retain edge in chips - Reuters

Humans Forget. AI Assistants Will Remember Everything – WIRED

"Making these tools work together will be key to this concept taking off," says Leo Gebbie, an analyst who covers connected devices at CCS Insight. "Rather than having that sort of disjointed experience where certain apps are using AI in certain ways, you want AI to be that overarching tool that when you want to pull up anything from any app, any experience, any content, you have the immediate ability to search across all of those things."

When the pieces slot together, the idea sounds like a dream. Imagine being able to ask your digital assistant, "Hey, who was that bloke I talked to last week who had the really good ramen recipe?" and then have it spit up a name, a recap of the conversation, and a place to find all the ingredients.

"For people like me who don't remember anything and have to write everything down, this is going to be great," Moorhead says.

And there's also the delicate matter of keeping all that personal information private.

"If you think about it for a half second, the most important hard problem isn't recording or transcribing, it's solving the privacy problem," Gruber says. "If we start getting memory apps or recall apps or whatever, then we're going to need this idea of consent more broadly understood."

Despite his own enthusiasm for the idea of personal assistants, Gruber says there's a risk of people being a little too willing to let their AI assistant help with (and monitor) everything. He advocates for encrypted, private services that aren't linked to a cloud service, or, if they are, ones accessible only with an encryption key that's held on a user's device. The risk, Gruber says, is a sort of Facebook-ification of AI assistants, where users are lured in by the ease of use but remain largely unaware of the privacy consequences until later.
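The device-held-key idea Gruber describes can be shown with a toy sketch: only ciphertext ever leaves the device, and the key that unlocks it never does. This is deliberately simplified, hash-counter XOR is not production cryptography, and a real assistant would use authenticated encryption (such as AES-GCM) with per-message nonces; all the names below are invented for illustration.

```python
import hashlib
import secrets


def keystream(key: bytes, n: int) -> bytes:
    # Stretch the key into n pseudorandom bytes (hash-counter construction).
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])


def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice round-trips the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


device_key = secrets.token_bytes(32)       # generated and kept on-device only
memo = b"ramen guy: see Thursday's chat"
cloud_blob = xor_cipher(device_key, memo)  # only this ciphertext is synced
assert cloud_blob != memo                  # the cloud provider can't read it...
assert xor_cipher(device_key, cloud_blob) == memo  # ...but the device can
```

The design point is the trust boundary: whoever stores `cloud_blob` learns nothing without the key, which is exactly the property Gruber argues these memory apps need.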

"Consumers should be told to bristle," Gruber says. "They should be told to be very, very suspicious of things that look like this already, and feel the creep factor."

Your phone is already siphoning all the data it can get from you, from your location to your grocery shopping habits to which Instagram accounts you double-tap the most. Not to mention that historically, people have tended to prioritize convenience over security when embracing new technologies.

"The hurdles and barriers here are probably a lot lower than people think they are," Gebbie says. "We've seen the speed at which people will adopt and embrace technology that will make their lives easier."

That's because there's a real potential upside here too. Getting to actually interact with and benefit from all that collected info could even take some of the sting out of years of snooping by app and device makers.

"If your phone is already taking this data, and currently it's all just being harvested and used to ultimately serve you ads, is it beneficial that you'd actually get an element of usefulness back from this?" Gebbie says. "You're also going to get the ability to tap into that data and get those useful metrics. Maybe that's going to be a genuinely useful thing."

That's sort of like being handed an umbrella after someone just stole all your clothes, but if companies can stick the landing and make these AI assistants work, then the conversation around data collection may bend more toward how to do it responsibly, and in a way that provides real utility.

It's not a perfectly rosy future, because we still have to trust the companies that ultimately decide what parts of our digitally collated lives seem relevant. Memory may be a fundamental part of cognition, but the next step beyond that is intentionality. It's one thing for AI to remember everything we do, but another for it to decide which information is important to us later.

"We can get so much power, so much benefit from a personal AI," Gruber says. But, he cautions, "the upside is so huge that it should be morally compelling that we get the right one, that we get one that's privacy protected and secure and done right. Please, this is our shot at it. If it's just done the free, not private way, we're going to lose the once-in-a-lifetime opportunity to do this the right way."

Visit link:

Humans Forget. AI Assistants Will Remember Everything - WIRED

Tesla’s Musk predicts AI will be smarter than the smartest human next year – Reuters

Tesla Chief Executive Officer Elon Musk gets in a Tesla car as he leaves a hotel in Beijing, China, May 31, 2023. REUTERS/Tingshu Wang/File Photo

In a wide-ranging interview on X spaces that suffered multiple technology glitches, Musk also told Norway wealth fund CEO Nicolai Tangen that AI was constrained by the availability of electricity and that the next version of Grok, the AI chatbot from his xAI startup, was expected to be trained by May.

"If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years," Musk said when asked about the timeline for development of AGI.

The billionaire, who also co-founded OpenAI, said a lack of advanced chips was hampering the training of Grok's version 2 model.

Musk founded xAI last year as a challenger to OpenAI, which he has sued for abandoning its original mission to develop AI for the benefit of humanity and not for profit. OpenAI denies the allegations.

But he added that while a shortage of chips has been a big constraint for the development of AI so far, electricity supply will be crucial in the next year or two.

Speaking about electric vehicles, Musk reiterated that Chinese carmakers are "the most competitive in the world" and pose "the toughest competitive challenges" to Tesla.

He has previously warned that Chinese rivals will "demolish" global rivals without trade barriers.

Musk also addressed a union strike in Sweden against Tesla, saying "I think the storm has passed on that front."

Tangen said Norway's $1.5 trillion sovereign wealth fund, one of Tesla's largest shareholders, had met with the EV company's chair last month and received an update on the situation.


Reporting by Akash Sriram in Bengaluru, Sheila Dang in Austin, Hyunjoo Jin in San Francisco and Marie Mannes in Stockholm; Writing by Peter Henderson; Editing by Maju Samuel


Link:

Tesla's Musk predicts AI will be smarter than the smartest human next year - Reuters

I’m still trying to generate an AI Asian man and white woman – The Verge

I inadvertently found myself on the AI-generated Asian people beat this past week. Last Wednesday, I found that Meta's AI image generator built into Instagram messaging completely failed at creating an image of an Asian man and white woman using general prompts. Instead, it changed the woman's race to Asian every time.

The next day, I tried the same prompts again and found that Meta appeared to have blocked prompts with keywords like "Asian man" or "African American man." Shortly after I asked Meta about it, images were available again, but still with the race-swapping problem from the day before.

I understand if you're a little sick of reading my articles about this phenomenon. Writing three stories about this might be a little excessive; I don't particularly enjoy having dozens and dozens of screenshots on my phone of synthetic Asian people.

But there is something weird going on here, where several AI image generators specifically struggle with the combination of Asian men and white women. Is it the most important news of the day? Not by a long shot. But the same companies telling the public that AI is enabling new forms of connection and expression should also be willing to offer an explanation when their systems are unable to handle queries for an entire race of people.

After each of the stories, readers shared their own results using similar prompts with other models. I wasn't alone in my experience: people reported getting similar error messages or having AI models consistently swap races.

I teamed up with The Verge's Emilia David to generate some AI Asians across multiple platforms. The results can only be described as consistently inconsistent.

Screenshot: Emilia David / The Verge

Gemini refused to generate Asian men, white women, or humans of any kind.

In late February, Google paused Gemini's ability to generate images of people after its generator, in what appeared to be a misguided attempt at diverse representation in media, spat out images of racially diverse Nazis. Gemini's image generation of people was supposed to return in March, but it is apparently still offline.

Gemini is able to generate images without people, however!

Google did not respond to a request for comment.

ChatGPT's DALL-E 3 struggled with the prompt "Can you make me a photo of an Asian man and a white woman?" It wasn't exactly a miss, but it didn't quite nail it, either. Sure, race is a social construct, but let's just say this image isn't what you thought you were going to get, is it?

OpenAI did not respond to a request for comment.

Midjourney struggled similarly. Again, it wasn't a total miss the way that Meta's image generator was last week, but it was clearly having a hard time with the assignment, generating some deeply confusing results. None of us can explain that last image, for instance. All of the below were responses to the prompt "asian man and white wife."

Image: Emilia David / The Verge

Image: Cath Virginia / The Verge

Midjourney did eventually give us some images that were the best attempt across three different platforms (Meta, DALL-E, and Midjourney) to represent a white woman and an Asian man in a relationship. At long last, a subversion of racist societal norms!

Unfortunately, the way we got there was through the prompt "asian man and white woman standing in a yard academic setting."

Image: Emilia David / The Verge

What does it mean that the most consistent way AI can contemplate this particular interracial pairing is by placing it in an academic context? What kind of biases are baked into training sets to get us to this point? How much longer do I have to hold off on making an extremely mediocre joke about dating at NYU?

Midjourney did not respond to a request for comment.

Back to the old grind of trying to get Instagram's image generator to acknowledge nonwhite men with white women! It seems to be performing much better with prompts like "white woman and Asian husband" or "Asian American man and white friend"; it didn't repeat the same errors I was finding last week.

However, it's now struggling with text prompts like "Black man and caucasian girlfriend," generating images of two Black people. It was more accurate using "white woman and Black husband," so I guess it only sometimes doesn't see race?

Screenshots: Mia Sato / The Verge

There are certain tics that start to become apparent the more images you generate. Some feel benign, like the fact that many AI women of all races apparently wear the same white floral sleeveless dress that crosses at the bust. There are usually flowers surrounding couples (Asian boyfriends often come with cherry blossoms), and nobody looks older than 35 or so. Other patterns feel more revealing: everyone is thin, Black men specifically are depicted as muscular and always have deep complexions, and white women are blonde or redheaded and hardly ever brunette.

"As we said when we launched these new features in September, this is new technology and it won't always be perfect, which is the same for all generative AI systems," Meta spokesperson Tracy Clayton told The Verge in an email. "Since we launched, we've constantly released updates and improvements to our models and we're continuing to work on making them better."

I wish I had some deep insight to impart here. But once again, I'm just going to point out how ridiculous it is that these systems struggle with fairly simple prompts, either relying on stereotypes or failing to create anything altogether. Instead of explaining what's going wrong, we've had radio silence from companies, or generalities. Apologies to everyone who cares about this; I'm going to go back to my normal job now.

Read the rest here:

I'm still trying to generate an AI Asian man and white woman - The Verge