Page 3«..2345..1020..»

Nigeria fines Binance $10b in forex-rate manipulation probe –

Binance faces another hefty fine following $4.3 billion paid to U.S. authorities as part of a plea settlement with the Department of Justice.

Nigerias government is imposing a $10 billion sanction on crypto exchange Binance as the African country investigates suspected foreign exchange rate manipulation tied to the naira currency, special adviser Bayo Onanuga confirmed in a Mar. 1 BBC interview.

The fine follows the arrest of two company executives sent to negotiate with authorities after officials disclosed plans to ban the crypto firm.

Local officials refused to disclose the identities of the Binance representatives, but reports say one is an American and the other is British. Both crypto exchange negotiators reportedly asked to be moved to their respective embassies.

A source close to the matter said enforcement agencies secured a court warrant allowing Nigerias government to hold the two persons for up to 12 days.

The crypto exchange crackdown ensued following government accusations that Binance facilitated illicit transactions and could not account for $26 billion in money flows. Furthermore, Nigerias Securities and Exchange Commission (SEC) said the platform has operated without a license and refused to comply with regulatory requirements.

Binance crypto platform and our Naira

The Central Bank of Nigeria (CBN) governor Yemi Cardoso says over $26 billion has passed through Binance Nigeria in the last four years.

Channels TV reported that Cardoso said this on Tuesday after the MPC meeting in Abuja.

In the case

Authorities have demanded naira-related transaction data from Binance for the last seven years, and according to the Premium Times, a request was made to delete specific Nigerian information from the trading venue.

P2P trading was also disabled on the crypto exchange, while some users have reported being unable to access Binance facilities.

One user told they had just traded naira for Tethers USDT on P2P to pay for social media services before their login was revoked. The trader preferred to remain anonymous amid uncertainty around the extent of Nigerias clampdown.

Other crypto-forex trading sites like Coinbase and Kraken also fell under Nigerias ban, as internet service providers were instructed to block access to these platforms. However, Coinbase pushed back on the decision and is conducting internal investigations.

Go here to read the rest:

Nigeria fines Binance $10b in forex-rate manipulation probe -

Read More..

SEC’s Hester Peirce wants more decentralization in the financial system – Cointelegraph

United States Securities and Exchange Commission Commissioner Hester Peirce has advocated for more decentralization in the U.S. financial system, along with a softer approach to crypto regulation and enforcement.

Peirce, also known as Crypto Mom, closed her fireside chat with CNBCs MacKenzie Sigalos at the ETHDenver conference on Feb. 29 by stating that decentralization benefits the U.S. financial system.

Centralization means that you have concentrated risks, she said before adding:

Peirce, a former lawyer, was appointed by former President Donald Trump to the SEC in 2018.She has been affectionately called Crypto Mom for her support of the industry and has often shown dissent against the over-regulation of digital assets.

Earlier in the talk, Peirce responded to questions about the proposed legislation that aims to treat decentralized technologies such as network nodes, validators, noncustodial wallets, mining pools and blockchain software as financial institutions.

It is troubling, said Peirce, who went on to say that there was still a lot of confusion over who has to register.

Sigalos also spoke about how the broker/dealer rule would redefine the classification of exchanges and could impact and encompass decentralized finance (DeFi), decentralized exchanges and developers, to which Peirce responded:

When you have people working together and someone interacting with code instead of with a person or entity, its a real challenge for the SEC to figure out what to do with that, she added.

She added that it wasnt even necessarily the SECs role to even get comfortable with crypto.

The SEC adopted rules on Feb. 6 that would require more market participants to register with it and comply with federal securities laws, bringing DeFi under greater oversight.

Related: Hester Peirce against gag rule, lawmakers challenging regulators

Peirce added that right now, the SEC is in enforcement only mode, but there need to be provisions to allow projects to grow and become decentralized without the threat of being sued.

The SEC commissioner also spoke about a wide range of crypto-related topics, including the agencys future following the U.S. presidential election later this year, spot Bitcoin (BTC)exchange-traded funds, and central bank digital currencies, coupled with the specter of state financial surveillance.

Magazine: Does SEC Chair Gary Gensler have the final say?

See the article here:

SEC's Hester Peirce wants more decentralization in the financial system - Cointelegraph

Read More..

Decentralized AI will play a pivotal role in shaping the future of AI – TechRadar

In the burgeoning field of artificial intelligence, the term 'decentralized AI' has emerged as a beacon of potential transformation. But what does this term truly encapsulate? At its heart, decentralized AI signifies a shift from the monolithic, siloed computational behemoths to a more distributed, collaborative approach. It's about leveraging open-source models and harnessing the collective power of GPUs scattered across the globe. This paradigm promises to democratize the creation and application of AI, making it more accessible and less reliant on the traditional bastions of technological power.

The concept of decentralized AI is not just a technological shift but also a philosophical one. It challenges the status quo of AI development, which has been dominated by a few large corporations with the resources to invest in massive data centers and computational power. Decentralized AI, on the other hand, is built on the idea of a shared, collaborative network where resources are pooled and accessible to anyone with an internet connection. This approach has the potential to level the playing field, allowing smaller entities and individuals to participate in AI development and benefit from its advancements.

However, the question arises: is decentralized AI genuinely decentralized, or is it a mere facsimile of the concept? While open-source models provide the foundation for this decentralized ethos, they often rely on synthetic data produced by their commercial counterparts, such as GPT. Moreover, the decentralized AI infrastructure typically operates on GPUs provided by a handful of centralized tech giants. There's also the need for a centralized entity to offer a user-friendly access layer, making the technology approachable for the general public. This centralization within decentralization presents a paradox that is as intriguing as it is complex.

Social Links Navigation

Founder of Taostats and Corcel.

The reliance on synthetic data is a significant concern in the quest for true decentralization. Synthetic data, while useful for training AI models without compromising privacy, is often generated by algorithms that are proprietary and centrally controlled. This creates a dependency on the very systems that decentralized AI aims to move away from. To address this, there is a growing movement towards creating open datasets and using real-world data in a privacy-preserving manner, which could help reduce the reliance on synthetic data and further the cause of decentralization.

Despite these contradictions, the decentralization of AI comes with a suite of compelling advantages. The democratization of AI development is perhaps the most significant of these. Open-source AI fosters a more democratic approach to development, inviting contributions from a global community. This inclusivity accelerates innovation and introduces a plethora of perspectives that could potentially disrupt the dominance of proprietary models.

The democratization of AI also means that the technology becomes more reflective of the diverse global population it serves. With contributions from around the world, AI systems can be trained on a wider variety of data, reducing biases and improving their applicability across different cultures and contexts. This could lead to AI systems that are more fair, ethical, and effective, benefiting society as a whole.

The flexibility inherent in open-source AI paves the way for greater customization, allowing solutions to be tailored to specific needs. This adaptability is a stark contrast to the 'one-size-fits-all' approach often seen in proprietary solutions, offering a significant advantage to those seeking a more personalized AI experience. Customization is not just about tweaking the AI to suit different applications; it's also about empowering users to understand and modify the technology according to their values and requirements.

Community support and sustainability are other hallmarks of open-source AI. These projects often boast robust communities that provide support and expertise that can rival, or even surpass, the customer service of proprietary vendors. The community-driven nature of open-source AI not only ensures its long-term sustainability but also fosters continuous improvement, independent of any single company's financial health.

The sustainability of open-source AI projects is closely tied to their community support. A vibrant community can drive the project forward, ensuring that it stays up-to-date with the latest advancements and adapts to changing needs. This is particularly important in the fast-paced world of AI, where new breakthroughs are made regularly. Open-source projects that can harness the collective intelligence of their community can evolve more rapidly and effectively than those that rely on a centralized development team.

Open-source AI also has profound ethical and societal implications. By facilitating community audits and challenges to unethical practices, open-source AI promotes a more ethical development process. In contrast, proprietary solutions may not be as transparent, leading to potential ethical concerns that are harder to address. The open nature of these projects means that anyone can examine the code and the data used to train the AI, providing an opportunity for scrutiny and accountability that is often lacking in proprietary systems.

The role of open-source AI in education and research cannot be overstated. These tools are indispensable for educational purposes, allowing students and researchers to explore and experiment without the burden of financial constraints. The result is a more skilled workforce, equipped to contribute to the AI field and challenge proprietary AI solutions. Access to open-source AI tools can transform education, enabling a hands-on learning experience that prepares students for the real-world challenges they will face in their careers.

In conclusion, while centralized AI has paved the way, the future shines brightly for open-source AI. Much like the evolution seen in traditional software development, open-source AI is poised to provide a burgeoning ecosystem of robust, reliable tools. This shift towards a more open, collaborative approach to AI development promises to unlock new possibilities and drive innovation in ways we are only beginning to imagine. As we stand on the cusp of this new era, it is clear that decentralized AI will play a pivotal role in shaping the technological landscape of tomorrow. The potential for decentralized AI to empower individuals, enhance global collaboration, and promote ethical practices makes it a truly transformative force in the field of artificial intelligence.

We've featured the best AI website builder.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here:

Read more from the original source:

Decentralized AI will play a pivotal role in shaping the future of AI - TechRadar

Read More..

Android Auto AI message summaries are now available here’s how it works – 9to5Google

With the launch of Android Auto 11.4, Google is making its AI message summaries available to all users. Heres how the feature works.

AI messages summaries were first announced by Google earlier this year as a way to quickly understand long messages without having Google Assistant read the whole thing aloud. The feature popped up in Android Autos settings quickly after, but the feature wasnt actually live.

But, now, it seems to finally be widely available.

When starting up Android Auto for the first time after installing v11.4, Google will send a notification to your phone explaining that message summaries are now available. You dont have to take any action with this, but it does also offer a shortcut to settings to manage the feature.

As was detailed recently, AI message summaries only work on longer messages, with 40 words being the barrier. For shorter messages, Google Assistant will still read the contents aloud in full.

On the first time you trigger an AI message summary, Google will alert you that it is generating a summary and that, with that in mind, the contents may be slightly incorrect. While that message is being read, a silent notification on your phone shows up to indicate that the summary is being generated. When read aloud, our test message of over 100 words was summarized down to about 15 words. Much of the detail was purged in the process, but the overall meaning was preserved. Your results, obviously, may vary depending on the scenario.

When an AI summary is being read, theres virtually no difference in the on-screen reply UI, which now takes up the entire display following a recent redesign.

After the summary is read, a notification appears on Android Auto asking for feedback on the summary.

If you do not want AI summaries, the feature can be easily turned off through Android Auto settings either on the cars display or on your phone.

The feature is still referred to as AI message summaries both on the phone and in Android Autos on-car settings, but theres also a new toggle for Notifications with Assistant, but its not super clear at this time exactly what that does.

Follow Ben:Twitter/X,Threads, andInstagram

FTC: We use income earning auto affiliate links. More.

Continued here:

Android Auto AI message summaries are now available here's how it works - 9to5Google

Read More..

These are the top AI programming languages – Fortune

Weve all heard some of the conversations around AI. While there are many risks, the opportunities for global development and innovation are endlessand likely unstoppable.

In fact, PwC predicts that by 2030, AI alone will contribute $15.7 trillion to the global economy.


And with household names like ChatGPT only making up a fraction of the AI ecosystem, the career opportunities in the space also seem endless. AI and machine learning specialist roles are predicted to be the fastest-growing jobs in the world, according to the World Economic Forums 2023 Future of Jobs Report.

Even beyond namesake AI experts, the technology is being utilized more and more across the text world. In fact, 70% of professional developers either use or are planning to use AI tools in their workflows, according to Stack Overflows 2023 Developer Survey.

So, for those especially outside the world of tech, how does AI even work and get created? Programming is at the core.

By and large, Python is the programming language most relevant when it comes to AIin part thanks to the languages dynamism and ease.

Python dominates the landscape because of its simplicity, readability, and extensive library ecosystem, especially for generative AI projects, says Ratinder Paul Singh Ahuja, CTO and VP at Pure Storage.

Rakesh Anigundi, Ryzen AI product lead at AMD, goes even further and calls Python a table stakes languagemeaning it is a baseline skill all those working in AI need to know.

LinkedIn even ranks Python as the second-most in-demand hard skills for engineering in the U.S., second only to engineering itself.

In particular, skills in key programming languages commonly used in the development of AIPython, Java, and SQLrank among the top five most sought-after skills on the technical side in the U.S., writes LinkedIns head of data and AI, Ya Xu.

The programming languages that are most relevant to the world of AI today may not be the most important tomorrow. And, even more crucially, they may not be most utilized by your company.

Regardless, having foundation skills in a language like Python can only help you in the long run. Enrolling in a Python bootcamp or taking a free online Python course is one of many ways to learn the skills to succeed. Students may also be exposed to Python in an undergraduate or graduate level coursework in data science or computer science.

Anigundi also notes it is important for students to be able to know how to efficiently set up programming work environments and know what packages are needed to work on a particular AI model. Being an expert at mathematics like statistics and regressions is also useful.

We have been through these tech trends alwaysits just that the pace at which some of these changes are happening is mind boggling to me, at least in my lifetime, he says. But that still doesnt take away some of the institutional knowledge that these different educational institutes impart in you.

It can be worth considering specializing in a sub-field aligning with personal interests like natural language processing, computer vision, or robotics, Singh Ahuja says. Prioritizing ethics and understanding the true implications of AI are also critical.

But since AI technology is changing so rapidly, soft skills can be argued to be even more important than technical capabilities. Some of the critical skills Singh Ahuja identifies include:

Above all, demonstrating your passion and desire to learn through real-world experience can help you distinguish yourself among the competitive field.

If youre in a very early part of your careerpicking a project, doing a project demonstrating value, sharing it, writing blocks, thats how you create an impact, Anigundi says.

Read more:

These are the top AI programming languages - Fortune

Read More..

Here Come the AI Worms – WIRED

As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI wormswhich can spread from one system to another, potentially stealing data or deploying malware in the process. It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before, says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messagesbreaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms havent been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed promptstext instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called adversarial self-replicating prompt. This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the systemby using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

Go here to see the original:

Here Come the AI Worms - WIRED

Read More..

AI boom makes Nvidia third US stock to close above $2tn valuation – Financial Times

Unlock the Editors Digest for free

Roula Khalaf, Editor of the FT, selects her favourite stories in this weekly newsletter.

Nvidias market value closed on Friday above $2tn for the first time, with enthusiasm about the prospects of artificial intelligence fuelling an eighth straight week of gains for the chipmakers shares.

Apple, Microsoft and Google-parent Alphabet are the other US-listed companies to have reached intraday market values of $2tn, but only the former two have reached the end of a trading day with valuations above that threshold.

Nvidia shares rose 4 per cent on Friday, giving it a valuation of about $2.05tn. Its share price has now climbed 66 per cent since the start of 2024, or about $830bn in dollar terms. That followed a more than 230 per cent increase in 2023, as the company repeatedly blasted through analyst and investor forecasts.

In its most recent financial update last month, Nvidia reported a 265 per cent year-on-year increase in revenues, and chief executive Jensen Huang declared that AI had hit the tipping point with surging demand across companies, industries and nations.

The tech group added $277bn in market capitalisation on the day after the results, a record for a US-listed company.

Nvidia has an almost monopoly position, said Tim Murray, multi-asset strategist at T Rowe Price because the chips they make are the most essential tools to [AI].

Nvidias latest earnings report, coupled with broader enthusiasm about the potential of AI technology, have helped to fuel a wider rally across global stock markets with Wall Streets S&P 500 hitting multiple new records and the tech-heavy Nasdaq Composite surpassing levels seen in 2021 to hit a peak on Friday.

The chipmaker has single-handedly driven more than a quarter of the year to date gains in the S&P 500, directly lifting the index by 96 points even before considering the broader effect it has had on investor sentiment.

Nvidias earnings were always going to be this barometer of whats the demand for AI chips, said Murray.

This years dramatic ascent of Nvidias shares and those of other tech stocks riding the wave of AI enthusiasm has sparked debate over whether the AI boom may be approaching bubble territory.

Were in a period where with AI theres a lot of excitement and weve probably got some time before we really have to see it proven, said Murray. Theres going to be a period eventually where the companies that are spending on AI need to realise some return on investment.

Youve certainly got some time before theres this moment of truth for the AI craze, he added.

Zehrid Osmani, a portfolio manager at Martin Currie with a large investment in Nvidia, said many stocks had been rallying based only on the hope that AI enthusiasm will lead to future earnings, but Nvidias strength in graphics processing units made it one of the stocks that is genuinely monetising.

Yes, in due course there could be more competition, but if you look at the scale of their [research and development] spending...we believe they should be able to keep their technological edge, he said.

For Kristina Hooper, global chief markets strategist at Invesco, Nvidia has captured imagination while providing some real underpinning to those imaginations and that excitement.


The late 1990s was a very similar time period for the stock market, Hooper added, in that there was a lot of excitement over technology. However, there wasnt that fundamental underpinning there werent real earnings, there werent solid cash flows.

It was really very much excitement...Sizzle without steak, she said.

This time around, theres sizzle but theres also steak.

Read more:

AI boom makes Nvidia third US stock to close above $2tn valuation - Financial Times

Read More..

How businesses are actually using generative AI – The Economist

Listen to this story. Enjoy more audio and podcasts on iOS or Android.

Your browser does not support the

IT HAS BEEN nearly a year since OpenAI released GPT-4, its most sophisticated artificial-intelligence model and the brain-of-sorts behind ChatGPT, its groundbreaking robot conversationalist. In that time the market capitalisation of Americas technology industry, broadly defined, has risen by half, creating $6trn in shareholder value. For some tech firms, growing revenue is starting to match sky-high share prices. On February 21st Nvidia, which designs chips used to train and run models like GPT-4, reported bumper fourth-quarter results, sending its market value towards $2trn. AI mania has also lifted the share prices of other tech giants, including Alphabet (Googles corporate parent), Amazon and Microsoft, which are spending big on developing the technology.

At the same time, big techs sales of AI software remain small. In the past year AI has accounted for only about a fifth of the growth in revenues at Azure, Microsofts cloud-computing division, and related services. Alphabet and Amazon do not reveal their AI-related sales, but analysts suspect they are lower than those of Microsoft.For the AI stockmarket boom to endure, these firms will at some point need to make serious money from selling their services to clients. Businesses across the world, from banks and consultancies to film studios, have to start using ChatGPT-like tools on a large scale. When it comes to real-world adoption of such generative AI, companies have trodden gingerly. Yet even these baby steps hint at the changing nature of white-collar work.

Previous technological breakthroughs have revolutionised what people do in offices. The spread of the typewriter put some workers out of a job: With the aid of this little machine an operator can accomplish more correspondence in a day than half a dozen clerks can with the pen, and do better work, said an observer in 1888. The rise of the computer about a century later eliminated some low-level administrative tasks even as it made highly skilled employees more productive. According to one paper, the computer explains over half the shift in demand for labour towards college-educated workers from the 1970s to the 1990s. More recently the rise of working from home, prompted by the covid-19 pandemic and enabled by video-conferencing, has changed the daily rhythms of white-collar types.

Could generative AI prompt similarly profound changes? A lesson of previous technological breakthroughs is that, economywide, they take ages to pay off. The average worker at the average firm needs time to get used to new ways of working. The productivity gains from the personal computer did not come until at least a decade after it became widely available. So far there is no evidence of an AI-induced productivity surge in the economy at large. According to a recent survey from the Boston Consulting Group (BCG), a majority of executives said it will take at least two years to move beyond the hype around AI. Recent research by Oliver Wyman, another consultancy, concludes that adoption of AI has not necessarily translated into higher levels of productivityyet.

That is unsurprising. Most firms do not currently use ChatGPT, Googles Gemini, Microsofts Copilot or other such tools in a systematic way, even if individual employees play around with them. A fortnightly survey by Americas Census Bureau asks tens of thousands of businesses whether they use some form of AI. This includes the newfangled generative sort and the older type that companies were using before 2023 for everything from improving online search results to forecasting inventory needs. In February only about 5% of American firms of all sizes said they used AI. A further 7% of firms plan to adopt it within six months (see chart). And the numbers conceal large differences between sectors: 17% of firms in the information industry, which includes technology and media, say they use it to make products, compared with 3% of manufacturers and 5% of health-care companies.

When the Census Bureau began asking about AI in September 2023, small firms were likelier to use the technology than big ones, perhaps because less form-ticking made adoption easier for minnows. Today AI is most prevalent in big companies (with more than 250 employees), which can afford to enlist dedicated AI teams and to pay for necessary investments.A poll of large firms by Morgan Stanley, a bank, found that between the start and end of 2023 the share with pilot AI projects rose from 9% to 23%.

Some corporate giants are frantically experimenting to see what works and what doesnt. They are hiring AI experts by the thousand, suggest data from Indeed, a job-search platform (see chart). Last year Jamie Dimon, boss of JPMorgan Chase, said that the bank already had more than 300 AI use cases in production today. Capgemini, a consultancy, says it will utilise Google Clouds generative AI to develop a rich library of more than 500 industry use cases. Bayer, a big German chemicals company, claims to have more than 700 use cases for generative AI.

This use-case sprawl, as one consultant calls it, can be divided into three big categories: window-dressing, tools for workers with low to middling skills, and those for a firms most valuable employees. Of these, window-dressing is by far the most common. Many firms are rebranding run-of-the-mill digitisation efforts as gen AI programmes to sound more sophisticated, says Kristina McElheran of the University of Toronto. Presto, a purveyor of restaurant tech, introduced a gen-AI assistant to take orders at drive-throughs. But fully 70% of such orders require a human to help. Spotify, a music-streaming firm, has rolled out an AI disc-jockey which selects songs and provides inane banter. Recently Instacart, a grocery-delivery company, removed a tool that generated photos of vendors food, after the AI showed customers unappetising pictures. Big tech firms, too, are incorporating their own AI breakthroughs into their consumer-facing offerings. Amazon is launching Rufus, an AI-powered shopping assistant that no shopper really asked for. Google has added AI to Maps, making the product more immersive, whatever that means.

Tools for lower-skilled workers could be more immediately useful. Some simple applications for things like customer service involve off-the-shelf AI. Most customers questions are simple and concern a small number of topics, making it easy for companies to train chatbots to deal with them. A few of these initiatives may already be paying off. Amdocs produces software to help telecoms companies manage their billing and customer services. The use of generative AI, the company says, has reduced the handling time of customers calls by almost 50%. Sprinklr, which offers similar products, says that recently one of its luxury-goods clients has seen a 25% improvement in customer-service scores.

Routine administrative tasks likewise look ripe for AI disruption. The top examples of Bayers 700 use cases include mundane jobs such as easily getting data from Excel files and creating a first draft in Word. Some companies are using generative AI as cleverer search. At Nasdaq, a financial-services firm, it helps financial-crime sleuths gather evidence to assess suspicious bank transactions. According to the company, this cuts a process which can take 30-60 minutes to three minutes.

Giving AI tools to a firms most valuable workers, whose needs are complex, is less widespread so far. But it, too, is increasingly visible. Lawyers have been among the earliest adopters. Allen & Overy, a big law firm, teamed up with Harvey, an AI startup, to develop a system that its lawyers use to help with everything from due diligence to contract analysis. Investment banks are using AI to automate part of their research process. At Bank of New York Mellon an AI system processes data for the banks analysts overnight and gives them a rough draft to work with in the morning. So rather than getting up at four in the morning to write research, they get up at six, the bank says. Small mercies. Sanofi, a French drugmaker, uses an AI app to provide executives with real-time information about many aspects of the companys operations.

Some companies are using the technology to build software. Microsofts GitHub Copilot, an AI coding-writing tool, has 1.3m subscribers. Amazon and Google have rival products. Apple is reportedly working on one. Fortive, a technology conglomerate, says that its operating companies are seeing a greater-than-20% acceleration in software-development time through the use of gen AI. Chirantan Desai, chief operating officer of ServiceNow, a business-software company, has said that GitHub Copilot produces single-digit productivity gains for his firms developers. With the help of AI tools, Konnectify, an Indian startup, went from releasing four apps per month to seven.Surveys from Microsoft suggest that few people who start using Copilot want to give it up.

Pinterest, a social-media company, says it has improved the relevance of users search results by ten percentage points thanks to generative AI. On a recent earnings call its boss, Bill Ready, said that new models were 100 times bigger than the ones his firm used before. LOral, one of the worlds largest cosmetics firms, has caught the eye of investors as it improves BetIQ, an internal tool to measure and improve the companys advertising and promotion. LOral claims that generative AI is already generating productivity increases of up to 10-15% for some of our brands that have deployed it.

This does not mean that those brands will need 10-15% fewer workers. As with earlier technological revolutions, fears of an AI jobs apocalypse look misplaced. So far the technology appears to be creating more jobs than it eliminates. A survey published in November by Evercore ISI, a bank, found that just 12% of corporations believed that generative AI had replaced human labour or would replace it within 12 months. Although some tech firms claim to be freezing hiring or cutting staff because of AI, there is little evidence of rising lay-offs across the rich world.

Generative AI is also generating new types of white-collar work. Companies including Nestl, a coffee-to-cat-food conglomerate, and KPMG, a consultancy, are hiring prompt engineers expert at eliciting useful responses from AI chatbots. One insurance firm employs explainability engineers to help understand the outputs of AI systems. A consumer-goods firm that recently introduced generative AI in its sales team now has a sales-bot manager to keep an eye on the machines.

Though such developments will not translate into overall productivity statistics for a while, they are already affecting what white-collar workers do. Some effects are clearly good. AI lets firms digitise and systematise internal data, from performance reviews to meeting records, that had previously remained scattered. Respondents to surveys conducted by Randy Bean, a consultant, reported big improvements in establishing an internal data and analytics culture, which plenty of businesses find stubbornly difficult to nurture.

AI adoption may also have certain unpredictable consequences. Although AI code-writing tools are helping software engineers do their jobs, a report for GitClear, a software firm, found that in the past year or so the quality of such work has declined. Programmers may be using AI to produce a first draft only to discover that it is full of bugs or lacking concision. As a result, they could be spending less time writing code, but more time reviewing and editing it. If other companies experience something similar, the quantity of output in the modern workplace may go upas AI churns out more emails and memoseven as that output becomes less useful for getting stuff done.

Polling by IBM, a tech firm, suggests that many companies are cagey about adopting AI because they lack internal expertise on the subject. Others worry that their data is too siloed and complex to be brought together. About a quarter of American bosses ban the use of generative AI at work entirely. One possible reason for their hesitance is worry about their companies data. In their annual reports Blackstone, a private-equity giant, and Eli Lilly, a pharmaceutical one, have warned investors about AI-related risks such as possible leakage of intellectual property to AI model-makers. Last year Marie-Hlne Briens Ware, an executive at Orange, a telecoms company, explained that the firm had put data guardrails in place before commencing a trial with Microsofts Copilot.

Ultimately, for more businesses to see it as an open-and-shut case, generative AI still needs to improve. In November Microsoft launched a Copilot for its productivity software, such as Word and Excel. Some early users find it surprisingly clunky and prone to crashingnot to mention cumbersome, even for people already adept at Office. Many bosses remain leery of using generative AI for more sensitive operations until the models stop making things up. Recently Air Canada found itself in hot water after its AI chatbot gave a passenger incorrect information about the airlines refund policy. That was embarrassing for the carrier, but it is easy to imagine something much worse. Still, even the typewriter had to start somewhere.

To stay on top of the biggest stories in business and technology, sign up tothe Bottom Line, our weekly subscriber-only newsletter.

Continue reading here:

How businesses are actually using generative AI - The Economist

Read More..

This Week in AI: A Battle for Humanity or Profits? –

Theres some in-fighting going on in the artificial intelligence (AI) world, and one prominent billionaire claims the future of the human race is at stake. Elon Musk is taking legal action against Microsoft-backed OpenAI and its CEO, Sam Altman, alleging the company has strayed from its original mission to develop artificial intelligence for the collective benefit of humanity.

Musks attorneys filed a lawsuit on Thursday (Feb. 29) in San Francisco, asserting that in 2015, Altman and Greg Brockman, co-founders of OpenAI, approached Musk to assist in establishing a nonprofit focused on advancing artificial general intelligence for the betterment of humanity.

Although Musk helped initiate OpenAI in 2015, he departed from its board in 2018. Previously, in 2014, he had voiced concerns about the risks associated with AI, suggesting it could pose more significant dangers than nuclear weapons.

The lawsuit highlights that OpenAI, Inc. still claims on its website to prioritize ensuring that artificial general intelligence benefits all of humanity. However, the suit contends that in reality, OpenAI, Inc. has evolved into a closed-source entity effectively operating as a subsidiary of Microsoft, the worlds largest technology company.

When it comes to cybersecurity, AI brings both risks and rewards. Google CEO Sundar Pichai and other industry leaders say artificial intelligence is key to enhancing online security. AI can accelerate and streamline the management of cyber threats. It leverages vast datasets to identify patterns, automating early incident analysis and enabling security teams to quickly gain a comprehensive view of threats, thus hastening their response.

Lenovo CTO Timothy E. Bates told PYMNTS that AI-driven tools, such as machine learning for anomaly detection and AI platforms for threat intelligence, are pivotal. Deep learning technologies dissect malware to decipher its composition and potentially deconstruct attacks. These AI systems operate behind the scenes, learning from attacks to bolster defense and neutralize future threats.

With the global shift toward a connected economy, cybercrime is escalating, causing significant financial losses, including an estimated $10.3 billion in the U.S. alone in 2022, according to the FBI.

Get set for lots more books that are authored or co-authored by AI. Inkitt, a startup leveraging artificial intelligence (AI) to craft books, has secured $37 million. Inkitts app enables users to self-publish their narratives. By employing AI and data analytics, it selects stories for further development and markets them on its Galatea app.

This technological shift offers both opportunities and challenges.

Zachary Weiner, CEO of Emerging Insider Communications, which focuses on publishing, shared his insights on the impact of AI on writing with PYMNTS. Writers gain significantly from the vast new toolkit AI provides, enhancing their creative process with AI-generated prompts and streamlining tasks like proofreading. AI helps them overcome traditional brainstorming limits, allowing for the fusion of ideas into more intricate narratives. It simplifies refining their work, letting them concentrate on their primary tasks.

But he warns of the pitfalls AI introduces to the publishing world. AI is making its way into all aspects of writing and content creation, posing a threat to editorial roles, he said. The trend towards replacing human writers with AI for cost reduction and efficiency gains is not just a possibility but a current reality.

The robots are coming, and they are getting smarter. New advancements in artificial intelligence (AI) are making it possible for companies to create robots with better features and improved abilities to interact with humans.

Figure AI has raised $675 million to develop AI-powered humanoid robots. Investors include Jeff Bezos Explore Investments and tech giants like Microsoft, Amazon, Nvidia, OpenAI, and Intel. Experts say this investment shows a growing interest in robotics because of AI.

According to Sarah Sebo, an assistant professor of computer science at the University of Chicago, AI can help robots understand their surroundings better, recognize objects and people more accurately, communicate more naturally with humans and improve their abilities over time through feedback.

Last March, Figure AI introduced the Figure 01 robot, designed for various tasks, from industrial work to household chores. Equipped with AI, this robot mimics human movements and interactions.

The company hopes these robots will take on risky or repetitive tasks, allowing humans to focus on more creative work.

Read the rest here:

This Week in AI: A Battle for Humanity or Profits? -

Read More..

The Guardian’s new podcast series about AI: Black Box prologue – The Guardian

We wanted to bring you this episode from our new series, Black Box. In it, Michael Safi explores seven stories and the thread that ties them together: artificial intelligence. In this prologue, Hannah (not her real name) has met Noah and he has changed her life for the better. So why does she have concerns about him?

If you like what you hear, make sure to search and subscribe to Black Box, with new episodes every Monday and Thursday.

How to listen to podcasts: everything you need to know

Continued here:

The Guardian's new podcast series about AI: Black Box prologue - The Guardian

Read More..