Category Archives: Artificial General Intelligence

The Future That AI Can Help Build – American Enterprise Institute

When will the future arrive? Of course, that's a nonsense question. The future isn't a fixed point that suddenly arrives. Time flows continuously, and we're always moving from today into tomorrow. As soon as a moment arrives, it becomes the present, and then immediately becomes the past. Rather than a destination, even one with flying cars and mile-high skyscrapers, the future is a continuous process of experiencing each new moment as it comes. Kind of a brain-bender, I know.

A better version of that question: When will the sorts of technological advancements that we often think of as futuristic finally happen? As the saying (attributed to many different folks) goes, "It is difficult to make predictions, especially about the future."

That caveat noted, here are some current consensus predictions from the Metaculus forecasting platform:

That last forecast is particularly interesting to me since it suggests how forecasts can affect each other. For example: If the forecast for the date of artificial general intelligence or superintelligence were moved forward, it should prompt forecasters to reconsider timelines for other tech advancements like cancer cures or nuclear fusion. That's especially true if forecasters believe AGI could accelerate research and development in those areas.

Which they absolutely should. Indeed, the potential impact on accelerating scientific and tech progress is a big part of the bullish case for AGI and the economy, although one ignored by most economic forecasters on Wall Street and in Washington.

Take the recent debate between economist Daron Acemoglu and Goldman Sachs about the economic impact of AI. The crux of the dispute is their differing assumptions about AI's potential for task automation, as well as Goldman's inclusion of labor reallocation and new job creation in its analysis. These are factors Acemoglu doesn't account for in his more pessimistic prediction. But neither of them includes the potential impact of radical tech breakthroughs, something tough to model.

As Acemoglu puts it:

I also do not discuss how AI can have revolutionary effects by changing the process of science (a possibility illustrated by neural network-enabled advances in protein folding and new crystal structures discovered by the Google subsidiary DeepMind), because large-scale advances of this sort do not seem likely within the 10-year time frame and many current discussions focus on automation and task complementarities.

But in this newsletter, I like to consider what Acemoglu calls revolutionary effects. And so does computer scientist and inventor Ray Kurzweil, author of the books The Age of Spiritual Machines (1999) and The Singularity is Near (2005). His new book, The Singularity is Nearer: When We Merge with AI, will be published on June 25th. In an essay for the new issue of The Economist, Kurzweil does a great job of outlining how AI will affect other aspects of science and technology:

Kurzweil:

By the time children born today are in kindergarten, artificial intelligence (AI) will probably have surpassed humans at all cognitive tasks, from science to creativity. When I first predicted in 1999 that we would have such artificial general intelligence by 2029, most experts thought I'd switched to writing fiction. But since the spectacular breakthroughs of the past few years, many experts think we will have AGI even sooner, so I've technically gone from being an optimist to a pessimist, without changing my prediction at all.

The techno-optimist highlights three key areas to showcase AI's transformative potential:

The Kurzweil kicker: "This is AI's most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings."

As I see it, the real value in what Kurzweil is saying isn't its predictive power or insights into AI timelines but rather the Up Wing image of the future it suggests. A big theme of my book The Conservative Futurist is the importance of having positive images of the future. They are crucial for societal progress and human flourishing. From the book:

American historian Carl Becker notes in his 1936 book Progress and Power that a Philosopher could not grasp the modern idea of progress "until he was willing to abandon ancestor worship, until he analyzed away his inferiority complex toward the past, and realized that his own generation was superior to any yet known."

The ancient Greeks, for instance, conceived of the future in a fundamentally different way than the modern West. My Hellenic ancestors didn't face the future to see what was coming. They instead metaphorically had their backs turned to the future and faced the past, viewing what had already happened as a guide to what might happen next.

But in the two centuries between Columbus sailing to the New World and the death of Isaac Newton in 1727, doubts arose about the wisdom of the ancients. If the natural philosophers of the past didn't know about the vast continent and peoples across the Atlantic, or about gravity, what else might they not have known or gotten wrong? This newfound skepticism helped power the Scientific Revolution, the Enlightenment, and the rise of an Up Wing culture among the literate elite of Europe: astronomers, chemists, clergymen, doctors, engineers, mathematicians.

Dutch futurist Frederik Polak argued that cultures without optimistic visions of tomorrow lack direction and vitality. Positive future images inspire innovation, drive scientific and tech advancements, and motivate people to work towards better outcomes. Without such visions, we risk stagnation or decline. And Kurzweil just gave us a pretty compelling vision. Hope to see more of them. Faster, please!


Railtown AI Unveils Version 2.0 of Conductor – Newsfile

July 02, 2024 8:00 AM EDT | Source: Railtown AI Technologies Inc.

Vancouver, British Columbia--(Newsfile Corp. - July 2, 2024) - Railtown AI Technologies Inc. (CSE: RAIL) (OTCQB: RLAIF) ("Railtown AI", "Railtown" or the "Company") is pleased to announce the launch of Conductor Version 2.0, an advanced AI platform that is transforming how companies build and manage their software applications.

Conductor Version 2.0 is designed to drive new insights by seamlessly aggregating and analyzing diverse application data. By providing a holistic view of application performance and development processes, Railtown AI enables organizations to understand all aspects of their software applications. This enhanced perspective is a crucial step toward the company's vision of building an Artificial General Intelligence (AGI) that manages and controls all aspects of the software application lifecycle.

"Our mission with Railtown AI has always been to empower businesses with actionable intelligence," said Marwan Haddad, CTO at Railtown AI. "With the release of Version 2.0, we're taking a giant leap forward by giving companies the tools they need to not only monitor but also optimize every aspect of their application ecosystem. This is more than just an update; it's a transformation in how we understand and manage software."

Key features of Conductor Version 2.0 include:

Conductor Version 2.0 is now available to all current and new customers.

About Railtown AI Technologies

Railtown AI, a Microsoft Partner, is a cloud-based Application General Intelligence Platform for Software Developers and Teams that practice Agile Project Management. We purposely built our Application General Intelligence Platform to help Software Developers and Agile practitioners save time on redundant tasks, improve productivity, drive down costs, and accelerate developer velocity. Railtown's proprietary AI technology, designed to enable our clients to be more productive and profitable, is accessible on Microsoft's Azure Marketplace.


ON BEHALF OF THE BOARD

"Cory Brandolini" Cory Brandolini, Chief Executive Officer

INVESTOR CONTACT

Rebecca Kerswell Investor Relations and Marketing Email: investors@railtown.ai Phone: (604)417-4440

This news release contains forward-looking statements relating to the future operations of the Company and other statements that are not historical facts. Forward-looking statements are often identified by terms such as "will," "may", "should", "intends", "anticipates", "expects" and similar expressions. All statements other than statements of historical fact included in this release, including, without limitation, statements regarding the future plans and objectives of the Company are forward-looking statements that involve risks and uncertainties. There can be no assurance that such statements will prove to be accurate and actual results and future events could differ materially from those anticipated in such statements. Important factors that could cause actual results to differ materially from the Company's expectations are risks detailed from time to time in the filings made by the Company with securities regulators.

Readers are cautioned that assumptions used in the preparation of any forward-looking information may prove to be incorrect. Events or circumstances may cause actual results to differ materially from those predicted, as a result of numerous known and unknown risks, uncertainties, and other factors, many of which are beyond the control of the Company. As a result, the Company cannot guarantee that any forward-looking statement will materialize, and readers should not place undue reliance on any forward-looking information. Such information, although considered reasonable by management at the time of preparation, may prove to be incorrect and actual results may differ materially from those anticipated. Forward-looking statements contained in this news release are expressly qualified by this cautionary statement. The forward-looking statements contained in this news release are made as of the date of this news release and the Company will only update or revise publicly any of the included forward-looking statements as expressly required by Canadian securities law.

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/215209

SOURCE: Railtown AI Technologies Inc.


Railtown AI Unveils Version 2.0 of Conductor | RLAIF Stock News – StockTitan

Railtown AI Technologies has launched Conductor Version 2.0, an advanced AI platform designed to transform how companies manage their software applications. This release aims to provide a holistic view of application performance and development processes by integrating and analyzing diverse application data. Key features include integrated data analysis, comprehensive application overviews, and advanced insights, offering actionable recommendations to enhance efficiency and reliability. According to CTO Marwan Haddad, this version marks a significant step toward building an Artificial General Intelligence for managing software application lifecycles. Conductor Version 2.0 is available to all current and new customers.


What is Conductor Version 2.0? It is an advanced AI platform designed to transform how companies manage and monitor their software applications through integrated data analysis, comprehensive overviews, and actionable insights.

When was Conductor Version 2.0 launched? It was launched on July 2, 2024.

What are the key features of Conductor Version 2.0? Key features include integrated data analysis, comprehensive application overviews, and advanced insights with actionable recommendations to enhance software efficiency and reliability.

How does Conductor Version 2.0 benefit customers? Customers benefit from a holistic view of application performance, deeper analytical insights, and actionable recommendations that help optimize their software applications.

Is Conductor Version 2.0 available to existing customers? Yes, Conductor Version 2.0 is available to both current and new customers.

What is Railtown AI's long-term goal? Railtown AI aims to build an Artificial General Intelligence capable of managing all aspects of the software application lifecycle.


Hollywood tycoon Ari Emanuel blasts OpenAI’s Sam Altman, fearing the future after Elon Musk tells him he’ll become ‘a … – Fortune

As CEO of OpenAI, Sam Altman wants to make history by developing the world's first artificial general intelligence, or AGI: a machine powerful enough to think and reason like a human. But some are starting to worry whether he can be trusted not to accidentally create an AI overlord that views humans as a lower life form.

Speaking this weekend at the Aspen Ideas Festival, media tycoon Ari Emanuel recalled a conversation he had with Elon Musk, a former director of his billion-dollar entertainment company, Endeavor.

The anecdote highlighted the awesome stakes involved as companies race to build ever-smarter neural networks.

"Elon said to me once (this scared me), he said: 'You know, Ari, your relationship with your dogs? Think about it the following way: You're the dog to the AI,'" Emanuel told the audience. "I don't want to be a dog."

AI experts such as Geoffrey Hinton fear Silicon Valley executives wont stop at just AGI: Mankind could ultimately create an artificial superintelligence (ASI).

This ASI would not just mimic human learning processes like current transformer-based neural networks such as OpenAI's GPT-4, but could potentially gain self-awareness in the process, relegating humanity to the second-most advanced species on Earth.

With so much riding on Altman's ability to make the responsible decision, Emanuel fears past behavior suggests he can't be trusted to properly develop groundbreaking technology, especially given a growing chorus of critics.

"I think he's a con man," said Emanuel. "Elon gave him a lot of money. It was supposed to be a nonprofit; now he's making a lot of money. I don't know why I would trust him. I don't know why we would trust these people." (According to legal filings, Musk contributed more than $44 million to OpenAI between 2016 and September 2020.)

Emanuel said Altman has not done enough to prove the technology doesn't pose a long-term threat to society, especially since Altman seems to prioritize commercialization over safety.

"You're telling me you've done the calculation, and the good outweighs the bad," said Emanuel, whose media business could be hurt by generative AI such as OpenAI's Sora. "Really? I don't think so."

OpenAI did not respond to a request from Fortune for comment.

For his part, Altman, who has in the past stated that offering equity was necessary in part to attract and retain talent, himself said a year ago that people should not place their trust in any one AI company or CEO without evidence that trust is deserved.

The OpenAI boss, who is worth $2 billion according to Bloomberg, survived a boardroom mutiny in November thanks in part to Microsoft CEO Satya Nadella, and returned more powerful than ever, with three of the four former plotters against him leaving the company.

One of those former directors, Helen Toner, in May justified the coup by citing a pattern of dishonest behavior, saying Altman would withhold information, misrepresent things, and sometimes even outright lie to the board.

At the same time, scientists at OpenAI such as Jan Leike left the company after accusing Altman of breaking a key promise to fund his research. Leike and chief scientist Ilya Sutskever were supposed to design safety protocols robust enough to ensure AI cannot ever gain the upper hand over humans.

Altman himself is now mingling his commercial responsibility as CEO with a new role heading up AI safety at the company, raising eyebrows from a governance perspective.

Twitter founder Jack Dorsey has also sounded the alarm, warning that the damage done to human behavior by the very engagement-focused algorithms he himself helped to create could pale when compared with artificial intelligence.

All of this has given Emanuel pause about the AI future.

"I don't want to stifle innovation, because I do think we need AI, but we have to have the rails around it," Emanuel said in Aspen this weekend, where Altman was also speaking. "What is society going to be like in that world when there's no purpose?"

The younger brother of Rahm Emanuel, former chief of staff to President Barack Obama, Ari Emanuel is often referred to as a Hollywood super agent, having represented stars like Martin Scorsese, but he's an entrepreneur in every sense of the word.

In 1995, the senior partner at talent reps International Creative Management founded Endeavor Agency, transforming it into a fully-fledged media and entertainment empire that went public on the stock exchange in April 2021.

Among his close friends and business associates he counts Hollywood star Dwayne The Rock Johnson, UFC boss Dana White, and Tesla CEO Elon Musk, who briefly served as a director on Endeavors board.

Emanuel played host to the powerful tycoon in Greece, when unflattering pictures of Musk emerged next to a leaner Emanuel. The images prompted so much online ridicule that Musk reportedly began taking Wegovy just to lose the fat.

Altman and Musk, on the other hand, are far from cozy. Despite cochairing OpenAI at its inception in 2015, Musk departed the organization in 2018. Since then a war of words (and products) has ensued, with the Tesla CEO attempting to sue OpenAI earlier this year.

Emanuel is widely lauded by his Hollywood peers: "There's no CEO in the world like this guy," the Rock said of Emanuel in January.


ChatGPT could be facing some serious competition: Amazon is reportedly working on a new AI tool, ‘Metis’, to … – ITPro

Amazon is reportedly developing a new AI tool aimed at directly competing with ChatGPT, as the tech giant looks to take on the popular chatbot.

According to reports from Business Insider, the internal project, codenamed Metis, is designed to be accessible for users via web browser.

Metis will work in a similar fashion to ChatGPT, providing users with text- and image-based answers in response to their queries. The chatbot will also reportedly provide links within answers to give users access to the source materials used when generating a response, and even suggest follow-up queries.

Sources told the publication the chatbot will be powered by Amazon's internal AI model, dubbed Olympus, which is also believed to be in development. Reports on Olympus emerged late last year, but as yet no concrete details have been teased by the firm.

Development of the chatbot is being led by Amazon's artificial general intelligence (AGI) team, which is also working on the Olympus model, and the launch of the service could come in September to coincide with Amazon's Alexa event.

A key differentiator for Metis, sources suggested, is that retrieval augmented generation (RAG) will play a big role in optimizing how the chatbot produces responses.

RAG has become the latest industry buzzword amidst a flurry of AI activity over the last 18 months. This allows AI models to draw upon larger knowledge repositories such as organization-specific information, alongside user input, to inform their responses.


All told, this provides more relevant results for user queries. Metis will be no different, and will be able to source information beyond the original data used to train its supporting model, thereby providing more up-to-date responses.

Sources close to the matter told Business Insider the chatbot will be able to share the latest stock prices in real-time, for example.
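To make the mechanics concrete, here is a minimal sketch of the RAG pattern in Python. Everything in it is illustrative: the documents, the keyword-overlap retriever, and the build_prompt helper are stand-ins rather than Amazon's actual design, and a production system would use embedding-based search and pass the assembled prompt to a real model.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Hypothetical data and helper names; real systems use embedding
# search and an LLM API rather than keyword overlap and print().

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context so the model answers from fresh data."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using this context:\n{joined}\n\nQuestion: {query}"

documents = [
    "AMZN closed at 186.30 on July 1, 2024.",      # made-up example figure
    "Alexa integration lets agents book flights.",
    "Olympus is Amazon's rumored foundation model.",
]
prompt = build_prompt(
    "What is the latest AMZN stock price?",
    retrieve("latest AMZN stock price", documents),
)
print(prompt)  # this prompt would then be sent to the underlying model
```

The point of the pattern is visible in the assembled prompt: the model is handed fresh, retrieved facts at query time rather than relying solely on its training data, which is what would allow a chatbot to report something as volatile as a stock price.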

Amazon's plans also appear to point toward Metis acting as an AI agent. These custom chatbots essentially automate and perform tasks on behalf of the user, and are built using existing, internal data. Sources told BI that some use cases include booking flights or turning on lights, pointing toward close integration with Alexa.

Alongside RAG, AI agents have emerged as a major enterprise focus so far in 2024.
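In the same illustrative spirit, here is a minimal Python sketch of the tool-dispatch pattern behind such agents. The tool names and the keyword routing are hypothetical; in a real agent, the model itself decides which tool to call and with what arguments, and the tools perform real actions instead of returning placeholder strings.

```python
# Minimal AI-agent dispatch sketch: a request is routed to a tool,
# and the host application executes it. Illustrative only; not
# Amazon's design, and real agents let the LLM choose the tool.

from typing import Callable

def book_flight(destination: str) -> str:
    return f"(pretend) flight booked to {destination}"

def turn_on_lights(room: str) -> str:
    return f"(pretend) lights on in {room}"

TOOLS: dict[str, Callable[[str], str]] = {
    "book_flight": book_flight,
    "turn_on_lights": turn_on_lights,
}

def route(request: str) -> str:
    """Stand-in for the model's tool choice via keyword matching."""
    if "flight" in request.lower():
        return TOOLS["book_flight"]("Vancouver")
    if "light" in request.lower():
        return TOOLS["turn_on_lights"]("kitchen")
    return "no matching tool"

print(route("Please book a flight"))   # -> (pretend) flight booked ...
print(route("Turn on the lights"))     # -> (pretend) lights on ...
```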

Google Cloud launched its own AI Agents service for Vertex AI at Google Cloud Next 2024. Microsoft-backed OpenAI also launched its marketplace for AI agents, which allows users to create custom GPTs, earlier this year.

Rory Bathgate

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

To compete with the likes of Microsoft and Google, Amazon faces a challenge on several fronts.

Breaking through and capturing customers in this environment will require Metis to deliver serious results. With Metis reportedly powered by Olympus, the tool carries the potential not only to produce powerful outputs but also to show off Amazon's own in-house development capabilities.

That said, it's a hard task. In the eight months since rumors about Olympus first surfaced, the AI space has moved on considerably. Gemini 1.5 Pro and GPT-4o continue to push the limits of large language models (LLMs), and unless Olympus is proven to outperform these competitors by sizable margins it could fail to catch the attention of would-be customers.

Rising capabilities of open source models like Meta's Llama 3, which cost nothing and are already available on AWS, Azure, and Google Cloud, could also prove a hindrance here. Simple benchmark scores are no longer enough in a world where completely free models can go toe-to-toe with those made by the best AI developers.

There's also the issue of convenience. It's likely that as AI becomes more commonplace, businesses will choose the AI assistant that requires the least upheaval of their existing stack and as little staff retraining as possible. For example, an organization that's already heavily invested in the Microsoft ecosystem could easily adopt Copilot over another AI solution.

All of this said, it's in its RAG-based approach that Metis could make the most difference. Still considered a novel technique, RAG provides assurances of accuracy that can put fears over hallucinations to bed and make AI adoption much more feasible for enterprises.

If Amazon can sell this message, that Metis is more reliable than the next best tool, it could seize the corner of the market in which the company has thus far failed to make gains.


SoftBank CEO says AI that is 10,000 times smarter than humans will come out in 10 years – CNBC

Masayoshi Son, chairman and chief executive officer of SoftBank Group Corp., speaks during the company's annual general meeting in Tokyo, Japan, on Friday, June 20, 2024. Son sketched out ambitions to help create AI thousands of times smarter than any human, making his most grandiose pronouncements since the Japanese conglomerate began taking steps to shore up its finances following a series of ill-timed startup bets.

Kosuke Okahara | Bloomberg | Getty Images

Artificial intelligence that is 10,000 times smarter than humans will be here in 10 years, SoftBank CEO Masayoshi Son said on Friday, in a rare public appearance during which he questioned his own purpose in life.

Son laid out his vision for a world featuring artificial super intelligence, or ASI, as he dubbed it.

The CEO first talked about another term, artificial general intelligence, or AGI, which broadly refers to AI that is smarter than humans. Son said this tech is likely to be one to 10 times smarter than humans and will arrive in the next three to five years, earlier than he had anticipated.

But if AGI is not much smarter than humans, "then we don't need to change the way of living, we don't need to change the structure of human lifestyle," Son said, according to a live translation of his comments in Japanese, which were delivered during SoftBank's annual general meeting of shareholders.

"But when it comes to ASI it's a totally different story. [With] ASI, you will see a big improvement."

Son discussed how the future will hold various ASI models that interact with each other, like neurons in a human brain. This will lead to AI that is 10,000 times smarter than any human genius, according to Son.

SoftBank shares closed down more than 3% in Japan, following the meeting.

Son is SoftBank's founder, who rose to prominence after an early and profitable investment in Chinese e-commerce giant Alibaba. He positioned SoftBank as a tech visionary with the 2017 launch of the Vision Fund, a massive investment fund focused on backing tech firms. While some of the bets were successful, there were also many high-profile failures, such as office sharing company WeWork.

After posting then-record financial losses at Vision Fund in 2022, Son said that SoftBank would go into "defense" mode and be more conservative with its investments. In 2023, the Vision Fund posted a new record loss, with Son shortly after saying that SoftBank would now shift into "offense," because he was excited about the investment opportunities in AI.

Son has been broadly out of the public eye since then.

He returned to the spotlight on Friday to deliver a speech that was full of existential questions.

"Two years ago, I am getting old, rest of my life is limited, but I haven't done anything yet and I cried so hard," Son said, suggesting he feels he hasn't achieved anything of consequence to date.

He added that he had now found SoftBank's mission, which is the "evolution of humanity." He also said he has discovered his own purpose in life.

"SoftBank was founded for what purpose? For what purpose was Masa Son born? It may sound strange, but I think I was born to realize ASI. I am super serious about it," Son said.


How is AI transforming the insurtech sector? – Information Age

Artificial intelligence (AI) is impacting almost every industry, and insurance, along with the insurtech sector on which it depends, is no exception, with applications benefiting both customers and insurance firms themselves.

From a customer service perspective, the use of chatbots is helping to answer queries in a more efficient manner, providing customers with instant answers around the clock, says Quentin Colmant, CEO of insurtech firm Qover. "AI-powered chatbots can assist customers with contract management, freeing up human agents for more complex issues," he says. "Additionally, AI analyses vast amounts of customer data to personalise insurance recommendations. This allows insurtechs to tailor products to the specific needs of customers, ensuring they are presented with the most relevant options."

The emergence of generative AI is likely to see this evolve further, using multiple data sources to provide even more personalised digital interaction. "General information typically provided through static and dynamic FAQs are likely to be superseded by a more interactive, human-style chatbot, which was on the increase even before the advent of generative AI," says Tony Farnfield, partner and UK practice lead at management consulting firm BearingPoint. "The ability to link an AI bot to back-end policy and claims systems will scale back the need for human intervention."

Generative AI can also help target specific areas of frustration for customers, says Rory Yates, global strategic lead at EIS, referencing its own client esure Group. "They focused on a key customer frustration when calling a contact centre, which was repetition: being passed from one person to the next and needing to re-explain the reason for making contact," he says. "Their use of generative AI helps alleviate this. Then at the end of every call, generative AI is used to summarise the notes, capturing the details of the call and making sure accurate records are kept."

Internal efficiency is another major benefit of the effective use of AI. Steve Muylle, professor of digital strategy and business marketing at Vlerick Business School, gives the example of AI helping insurers to generate accurate quotes almost immediately. "In 2019, Direct Line launched Darwin, a motor insurance platform that uses AI to determine individual pricing through machine learning," he says. "This approach has translated into better customer reviews and improved customer service."

"Another example is in Asia, where insurance companies work with Uber," he adds. "After an accident, insurers can ask nearby Uber drivers to check accidents, leveraging their knowledge of cars and their ability to take photos or videos for reporting, which can then be analysed by AI. This provides the insurers with more data, potentially from a third party, and is also a side gig for the Uber drivers."

Another application is in the onboarding and training of employees. "AI-powered virtual assistants can guide new employees through the onboarding process, providing support and answering questions around the clock," says Christian Brugger, partner at digital consultancy OMMAX. "Interactive AI-powered tools, such as virtual reality and augmented reality, can offer immersive training experiences, simulating real-life scenarios employees might face."

It's also being used to improve efficiency more generally, in the same way as it might in any other business. "The ability to automate high-volume, routine, low-value-added tasks has allowed insurers to speed up their services and increase productivity," says Steve Bramall, credit director at Allianz Trade. "This frees up valuable experts to spend more time with customers and brokers, improving customer experience."

Yet the use of AI also brings risks and ethical considerations for insurers and insurtech firms. "With all AI, you need to understand where the AI models are from and where the data is being trained from and, importantly, whether there is an in-built bias," says Kevin Gaut, chief technology officer at insurtech INSTANDA. "Proper due diligence on the data is the key, even with your own internal data."

It's essential, too, that organisations can explain any decisions that are taken, warns Muylle, and that there is at least some human oversight. "A notable issue is the black-box nature of some AI algorithms that produce results without explanation," he warns. "To address this, it's essential to involve humans in the decision-making loop, establish clear AI principles and involve an AI review board or third party. Companies can avoid pitfalls by being transparent with their AI use and co-operating when questioned."

AI applications themselves also raise the potential for organisations to get caught out by cyber-attacks. "Perpetrators can use generative AI to produce highly believable yet fraudulent insurance claims," points out Brugger. "They can also use audio synthesis and deepfakes, pretending to be someone else. If produced at high scale, such fraudulent claims can overwhelm the insurer, leading to higher payouts."

Cyber-attacks can also lead to significant data breaches, which can have serious consequences for insurers. "These can expose confidential client information, which inevitably poses new challenges towards fostering client trust," says James Harrison, global head of insurance at Dun & Bradstreet. "Additionally, failure to comply with data protection regulations, such as GDPR, can lead to legal consequences and financial penalties."

Having robust cybersecurity measures is essential, particularly when it comes to sensitive or personal data, says David Dumont, a partner at law firm Hunton Andrews Kurth, and it's important to ensure these remain able to cope with new regulations. "In the EU, the legal framework on cybersecurity is evolving and becoming more prescriptive," he explains. "Within the next year, insurtechs may, for example, be required to comply with considerable cybersecurity obligations under the Digital Operational Resilience Act (DORA), depending on the specific type of products and services that they offer."

All this means AI requires careful handling if insurers and insurtechs are to realise the benefits without experiencing the downsides. "The future of AI in insurtech is brimming with potential," believes Colmant. "AI will likely specialise in specific insurance processes, like underwriting or claims management, leading to significant efficiency gains and improved accuracy. This will also likely lead to even greater personalisation and automation."

"However, the focus will likely shift towards a collaborative approach, with AI augmenting human capabilities rather than replacing them entirely. Throughout this evolution, ethical considerations will remain a top priority."



AI doomers have warned of the tech-pocalypse while doing their best to accelerate it – Salon

One of the most prominent narratives about AGI, or artificial general intelligence, in the popular media these days is the AI doomer narrative. This claims that we're in the midst of an arms race to build AGI, propelled by a relatively small number of extremely powerful AI companies like DeepMind, OpenAI, Anthropic, and Elon Musk's xAI (which aims to design an AGI that uncovers truths about the universe by eschewing political correctness). All are backed by billions of dollars: DeepMind says that Microsoft will invest over $100 billion in AI, while OpenAI has thus far received $13 billion from Microsoft, Anthropic has $4 billion in investments from Amazon, and Musk just raised $6 billion for xAI.

Many doomers argue that the AGI race is catapulting humanity toward the precipice of annihilation: if we create an AGI in the near future, without knowing how to properly align the AGI's value system, then the default outcome will be total human extinction. That is, literally everyone on Earth will die. And since it appears that we're on the verge of creating AGI (or so they say), this means that you and I and everyone we care about could be murdered by a misaligned AGI within the next few years.

These doomers thus contend, with apocalyptic urgency, that we must pause or completely ban all research aiming to create AGI. Pausing or banning this research would give others more time to solve the problem of aligning AGI to our human values, which is necessary to ensure that the AGI is sufficiently safe. Failing to do this means that the AGI will be unsafe, and the most likely consequence of an unsafe AGI will be the untimely death of everyone on our planet.

The doomers contrast with the AI accelerationists, who hold a much more optimistic view. They claim that the default outcome of AGI will be a bustling utopia: we'll be able to cure diseases, solve the climate crisis, figure out how to become immortal, and even colonize the universe. Consequently, these accelerationists, some of whom use the acronym e/acc (pronounced ee-ack) to describe their movement, argue that we should accelerate rather than pause or ban AGI research. On their view, there isn't enough money being funneled into the leading AI companies, and calls for government regulation are deeply misguided because they're only going to delay the arrival of utopia.

Some even contend that any deceleration of AI will cost lives: deaths that were preventable by an AI that was prevented from existing are, on this view, a form of murder. So, if you advocate for slowing down research on advanced AI, you are no better than a murderer.

The loudest voices within the AI doomer camp have been disproportionately responsible for launching and sustaining the very technological race that they now claim could doom humanity.

But there's a great irony to this whole bizarre predicament: historically speaking, no group has done more to accelerate the race to build AGI than the AI doomers. The very people screaming that the AGI race is a runaway train barreling toward the cliff of extinction have played an integral role in starting these AI companies. Some have helped found these companies, while others provided crucial early funding that enabled such companies to get going. They wrote papers, books and blog posts that popularized the idea of AGI, and organized conferences that inspired interest in the topic. Many of those worried that AGI will kill everyone on Earth have gone on to work for the leading AI companies, and indeed the two techno-cultural movements that initially developed and promoted the doomer narrative, namely Rationalism and Effective Altruism, have been at the very heart of the AGI race since its inception.

In a phrase, the loudest voices within the AI doomer camp have been disproportionately responsible for launching and sustaining the very technological race that they now claim could doom humanity in the coming years. Despite their apocalyptic warnings of near-term annihilation, the doomers have in practice been more effective at accelerating AGI than the accelerationists themselves.

Consider a few examples, beginning with the Skype cofounder and almost-billionaire Jaan Tallinn, who also happens to be one of the biggest financial backers of the Rationalist and Effective Altruist (EA) movements. Tallinn has repeatedly claimed that AGI poses an enormous threat to the survival of humanity. Or, in his words, it is by far the biggest risk facing us this century, bigger than nuclear war, global pandemics or climate change.

In 2014, Tallinn co-founded a Boston-based organization called the Future of Life Institute (FLI), which has helped raise public awareness of the supposedly grave dangers of AGI. Last year, FLI released an open letter calling on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, where GPT-4 was the most advanced system that OpenAI had released at the time. The letter warns that AI labs have become locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control, resulting in a dangerous race. Tallinn was one of the first signatories.

Tallinn is thus deeply concerned about the race to build AGI. He's worried that this race might lead to our extinction in the near future. Yet, through his wallet, he has played a crucial role in sparking and fueling the AGI race. He was an early investor in DeepMind, which Demis Hassabis, Shane Legg and Mustafa Suleyman cofounded in 2010 with the explicit goal of creating AGI. After OpenAI started in 2015, he had a close connection to some people at the company, meeting regularly with individuals like Dario Amodei, a member of the EA movement and a key figure in the direction of OpenAI. (Tallinn himself is closely aligned with the EA movement.)


In 2021, Amodei and six other former employees of OpenAI founded Anthropic, a competitor of both DeepMind and OpenAI. Where did Anthropic get its money? In part from Tallinn, who donated $25 million and led a $124 million series A fundraising round to help the company get started.

Here we have one of the leading voices in the doomer camp claiming that the AGI race could result in everyone on Earth dying, while simultaneously funding the biggest culprits in this reckless race toward AGI. I'm reminded of something that Noam Chomsky once said in 2002, during the early years of George Bush's misguided War on Terror. "We certainly want to reduce the level of terror," Chomsky declared, referring to the U.S. "There is one easy way to do that: stop participating in it." The same idea applies to the AGI race: if AI doomers are really so worried that the race to build AGI will lead to an existential catastrophe, then why are they participating in it? Why have they funded and, in some cases, founded the very companies responsible for supposedly pushing humanity toward the precipice of total destruction?

In fact, Amodei, Shane Legg, Sam Altman and Elon Musk, all of whom founded or cofounded some of the leading AI companies, have expressed doomer concerns that AGI could annihilate our species in the near term. In an interview with the EA organization 80,000 Hours, Amodei referenced the possibility that an AGI could destroy humanity, saying "I can't see any reason in principle why that couldn't happen." He adds that this is a possible outcome and at the very least as a tail risk we should take it seriously.

Over and over again, the very same people saying that AGI could kill us all have done more than anyone else to launch and accelerate the race toward AGI.

Similarly, DeepMind cofounder Shane Legg wrote on the website LessWrong in 2011 that AGI is his number one risk for this century. That was one year after DeepMind was created. In 2015, the year he co-founded OpenAI with Elon Musk and others, Altman declared that "I think AI will most likely sort of lead to the end of the world," adding on his personal blog that the development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

Then there's Musk, who has consistently identified AGI as the biggest existential threat, and far more dangerous than nukes. In early 2023, Musk signed the open letter from FLI calling for a six-month pause on advanced AI research. Just four months later, he announced that he was starting yet another AI company: xAI.

Over and over again, the very same people saying that AGI could kill us all have done more than anyone else to launch and accelerate the race toward AGI. This is even true of the most famous doomer in the world today, a self-described genius named Eliezer Yudkowsky. In a Time magazine article from last year, Yudkowsky argued that our only hope of survival is to immediately shut down all of the large computer farms where the most powerful AIs are refined. Countries should sign an international treaty to halt AGI research and be willing to engage in military airstrikes against rogue datacenters to enforce this treaty.

Yudkowsky is so worried about the AGI apocalypse that he claims we should be willing to risk an all-out thermonuclear war that kills nearly everyone on Earth to prevent AGI from being built in the near future. He then gave a TED talk in which he reiterated his warnings: if we build AGI without knowing how to make it safe (and we have no idea how to make it safe right now, he claims), then literally everyone on Earth will die.

Yet I doubt that any single individual has promoted the idea of AGI more than Yudkowsky himself. In a very significant way, he put AGI on the map, inspired many people involved in the current AGI race to become interested in the topic, and organized conferences that brought together early AGI researchers to cross-pollinate ideas.

Consider the Singularity Summit, which Yudkowsky co-founded with the Google engineer Ray Kurzweil and tech billionaire Peter Thiel in 2006. This summit, held annually until 2012, focused on the promises and perils of AGI, and included the likes of Tallinn, Hassabis, and Legg on its list of speakers. In fact, both Hassabis and Legg gave talks about AGI-related issues in 2010, shortly before co-founding DeepMind. At the time, DeepMind needed money to get started, so after the Singularity Summit, Hassabis followed Thiel back to his mansion, where Hassabis asked Thiel for financial support to start DeepMind. Thiel obliged, offering Hassabis $1.85 million, and that's how DeepMind was born. (The following year, in 2011, is when Tallinn made his early investment in the company.)

If not for Yudkowsky's Singularity Summit, DeepMind might not have gotten off the ground, or at least not when it did. Similar points could be made about various websites and mailing lists that Yudkowsky created to promote the idea of AGI. For example, AGI has been a major focus of the community blogging website LessWrong, created by Yudkowsky around 2009. This website quickly became the online epicenter for discussions about how to build AGI, the utopian future that a safe or aligned AGI could bring about, and the supposed existential risks associated with AGIs that are unsafe or misaligned. As noted above, it was on the LessWrong website that Legg identified AGI as the number one threat facing humanity, and records show that Legg was active on the website very early on, sometimes commenting directly under articles by Yudkowsky about AGI and related issues.

Or consider the SL4 mailing list that Yudkowsky created in 2001, which described itself as dedicated to advanced topics in transhumanism and the Singularity, including strategies to accelerate the Singularity. The Singularity is a hypothetical future event in which advanced AI begins to redesign itself, leading to a superintelligent AGI system over the course of weeks, days, or perhaps even minutes. Once again, Legg also contributed to the list, which indicates that the connections between Yudkowsky, the world's leading doomer, and Legg, cofounder of one of the biggest AI companies involved in the AGI race, go back more than two decades.

These are just a few reasons that Altman himself wrote on Twitter (now X) last year that Yudkowsky, the world's leading AI doomer, has probably contributed more than anyone to the AGI race. In Altman's words, Yudkowsky "got many of us interested in AGI, helped DeepMind get funding at a time when AGI was extremely outside the Overton window, was critical in the decision to start OpenAI, etc." He then joked that Yudkowsky may deserve the Nobel Peace Prize for this. (These quotes have been lightly edited to improve readability.)

Rationalists and EAs are also some of the main participants and contributors to the very race they believe could precipitate our doom.

Though Altman was partly trolling Yudkowsky for complaining about a situation (the AGI race) that Yudkowsky was instrumental in creating, Altman isn't wrong. As a New York Times article from 2023 notes, "Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind." One could say something similar about Anthropic, as it was Yudkowsky's blog posts that convinced Tallinn that AGI could be existentially risky, and Tallinn later played a crucial role in helping Anthropic get started, which further accelerated the race to build AGI. The connections and overlaps between the doomer movement and the race to build AGI are extensive and deep; the more one scratches the surface, the clearer these links appear.

Indeed, I mentioned the Rationalist and EA movements earlier. Rationalism was founded by Yudkowsky via the LessWrong website, while EA emerged around the same time, in 2009, and could be seen as the sibling of Rationalism. These communities overlap considerably, and both have heavily promoted the idea that AGI poses a profound threat to our continued existence this century.

Yet Rationalists and EAs are also some of the main participants and contributors to the very race they believe could precipitate our doom. As noted above, Dario Amodei (co-founder of Anthropic) is an EA, and Tallinn has given talks at major EA conferences and donated tens of millions of dollars to both movements. Similarly, an Intelligencer article about Altman reports that Altman once embraced EA, and a New York Times profile describes him as the product of "a strange, sprawling online community" that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. "Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI."

Yet another New York Times article notes that the EA movement beat the drum so loudly about the dangers of AGI that many young people became inspired to work on the topic. Consequently, "all of the major AI labs and safety research organizations contain some trace of effective altruism's influence, and many count believers among their staff members." The article then observes that no major AI lab embodies the EA ethos as fully as Anthropic, given that "many of the company's early hires were effective altruists, and much of its start-up funding came from wealthy EA-affiliated tech executives": not just Tallinn, but also Facebook co-founder Dustin Moskovitz, who, like Tallinn, has donated considerably to EA projects.

There is a great deal to say about this topic, but the key point for our purposes is that the doomer narrative largely emerged out of the Rationalist and EA movements, the very movements that have been pivotal in founding, funding and inspiring all the major AI companies now driving the race to build AGI.

Again, one wants to echo Chomsky in saying: if these communities are so worried about the AGI apocalypse, why have they done so much to create the very conditions that enabled the AGI race to get going? The doomers have probably done more to accelerate AGI research than the AI accelerationists that they characterize as recklessly dangerous.

How has this happened? And why? One reason is that many doomers believe that AGI will be built by someone, somewhere, eventually. So it might as well be them who builds the first AGI. After all, many Rationalists and EAs pride themselves on having exceptionally high IQs while claiming to be more rational than ordinary people, or "normies." Hence, they see themselves as the best group to build AGI while ensuring that it is maximally safe and beneficial. The unfortunate consequence is that these Rationalists and EAs have inadvertently initiated a race to build AGI that, at this point, has gained so much momentum that it appears impossible to stop.

Even worse, some of the doomers most responsible for the AGI race are now using this situation to gain even more power by arguing that policymakers should look to them for the solutions. Tallinn, for example, recently joined the United Nations Artificial Intelligence Advisory Body, which focuses on the risks and opportunities of advanced AI, while Yudkowsky has defended an international policy that leaves the door open to military strikes that might trigger a thermonuclear war. These people helped create a huge, complicated mess, then turned around, pointed at that mess, and shouted: "Oh, my! We're in such a dire situation! If only governments and politicians would listen to us, though, we just might be able to dodge the bullet of annihilation."

This looks like a farce. It's like someone drilling a hole in a boat and then declaring: "The only way to avoid drowning is to make me captain."

The lesson is that governments and politicians should not be listening to the very people, or the Rationalist and EA movements to which they belong, that are disproportionately responsible for this mess in the first place. One could even argue, plausibly in my view, that if not for the doomers, there probably wouldn't be an AGI race right now at all.

Though the race to build AGI does pose many dangers, the greatest underlying danger is the Rationalist and EA movements that spawned this unfortunate situation over the past decade and a half. If we really want to bring the madness of the AGI race to a stop, it's time to let someone else have the mic.



Softbank CEO Says AI That’s 10,000X Smarter Than Humans Is Inevitable – Hot Hardware

Softbank CEO Masayoshi Son said during a shareholders' meeting that he believes AI will be 10,000 times smarter than human intelligence 10 years from now. Son also said that he sees Softbank's mission as the evolution of humanity, adding that he had finally discovered his own purpose in life.

During the meeting last week, Son said the company will focus entirely on pairing robots with artificial intelligence for use in mass production, logistics, and autonomous driving. Son acknowledged that the effort will require immense capital and pooled funds from partners, saying Softbank could not finance it on its own.

Son began his speech by talking about artificial general intelligence, or AGI. He added that he believes AI will be at least 10 times smarter than humans within three to five years, even earlier than he had anticipated. However, he went on to say that if AGI is not going to be that much smarter than humans, "then we don't need to change the way of living, we don't need to change the structure of human lifestyle."
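As a rough sanity check of those numbers (my own back-of-the-envelope arithmetic, not anything from the article, and it treats "smarter" as a single scalar multiple that compounds smoothly year over year, which is a big assumption), a short Python sketch shows the annual growth rates the two claims imply:

# Back-of-the-envelope check of the growth rates implied by Son's claims,
# assuming "smarter" is one scalar multiple that compounds smoothly per year.

def implied_annual_multiplier(total_multiple: float, years: float) -> float:
    # Solve x**years == total_multiple for the per-year growth factor x.
    return total_multiple ** (1 / years)

print(implied_annual_multiplier(10_000, 10))  # ~2.51x per year (10,000x in 10 years)
print(implied_annual_multiplier(10, 3))       # ~2.15x per year (10x in 3 years)
print(implied_annual_multiplier(10, 5))       # ~1.58x per year (10x in 5 years)

On those assumptions, both claims imply intelligence multiplying by roughly 1.6x to 2.5x per year, so the 10,000x figure is not a separate, wilder claim but essentially the same growth rate extended over a decade.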

Son and Softbank have not had the best of years in the recent past. While some of Son's investments had good returns, many did not, such as the office-sharing company WeWork. However, Softbank subsidiary Arm, a British chip designer, has prospered more recently as investor attention has focused on anything AI. It is perhaps because of Arm that Son is now willing to make such a bold statement about Softbank's mission and his own vision.

During his speech, Son summed up his vision for himself and Softbank, remarking, "Softbank was founded for what purpose? For what purpose was Masa Son born? It may sound strange, but I think I was born to realize ASI." He concluded, "I am super serious about it."

View original post here:

Softbank CEO Says AI That's 10000X Smarter Than Humans Is Inevitable - Hot Hardware

Artificial General Intelligence, Shrinkflation and Snackable are Some of the Many New, Revised, or Updated Words and … – LJ INFOdocket

We're sure you'll agree that 2024 is turning out to be anything but beige (bland or unremarkable; uninspiring). We're set to see a record-breaking number of elections this year, with 50 countries due to head to the polls before the year is out. Readers with an interest in UK and/or European politics might remember that we added Brexit to the OED back in 2016. Since then, several related words have proven their longevity, and this month, we've added entries for leaver, Brexiter, and Brexiteer (referring to people who supported, campaigned, or voted for the United Kingdom's withdrawal from the European Union), as well as remainer and Remoaner (words referring to those who did the same on the other side, wanting the UK to stay in the EU).

If you find yourself befuddled (bewildered, confused) by current political debates, take refuge in the enjoyability (the fact or quality of being enjoyable; congeniality, pleasurableness) of the following lighter offering. Have you found the third series of Netflix's glamorous Bridgerton binge-worthy? Taken note of the hunkiness (qualities or characteristics considered to be hunky, especially rugged good looks or sexual attractiveness) of its male stars? Then it may interest you to know that it was not until the early 1900s that the word glamour came to be associated with attractiveness and luxury. In the eighteenth and nineteenth centuries, glamour was all about enchantment: to cast a glamour over someone meant putting them quite literally under your spell.

The word only became closely associated with visual opulence, physical attraction, and charisma in the later twentieth century, perhaps as a result of the rise of cinema and the Golden Age of Hollywood. In the 1970s, the advent of glam rock, the style of rock music in which performers such as David Bowie made flamboyant clothes and make-up a feature of their onstage performances and personas, sealed this linguistic shift. Other associated additions include glam rocker, visual kei (the glam rock movement or aesthetic in Japanese rock music), glam up (to make oneself more glamorous), glamour puss (a glamorous or attractive person), glamazon (a tall, glamorous, and powerful woman), and glampsite (the most luxurious location to get your fix of the great outdoors).

Speaking of the great outdoors, wildscape now has its own entry. Meaning an area within which plants and animals have been able to thrive with minimal or no human presence, it conjures up more peaceful scenes than some of our other environment-related additions. Five-alarm (designating a particularly large, fierce, destructive fire, especially one requiring a large-scale response from firefighters) and megadrought (a drought lasting many years, great both in extent and severity) echo other alarming language used in the world of meteorology, such as weather bomb (added in 2015) and blood rain (added in 2012).

Moving back indoors and online, we've added a number of technology-related terms, perhaps most notably artificial general intelligence, or AGI for short. This is a form of AI in which a machine or computer program can (hypothetically) simulate behaviour as intelligent as, or more intelligent than, that of a human being. When it comes to human activity on the internet, we've added freecycle (to give away an unwanted possession, especially when agreed or arranged via an online network) and edgelord (a person who affects a provocative or extreme persona, especially online). Snackable, meanwhile, can be used to describe a video or other item of digital content, especially on social media, that is designed for brief and easy consumption, or to refer to food intended as a snack IRL (in real life, which is not a new addition, but is an enjoyable acronym).

Speaking of snacks, babyccino (a frothy hot milk drink for children, intended to resemble a cappuccino) and the regrettable shrinkflation (a reduction in the size or weight of products with no corresponding reduction in price, a phenomenon first described this way in 2008) can now be found in the OED. Fewer tasty treats for more money? How regrettable. One last food-related anecdote before we sign off: the verb beef has a new first sense. Evidence dating from the early 1800s shows the phrase to cry beef had the meaning to raise the alarm or make an outcry against a person, especially to cry for help to arrest an escaping thief. This seems to be a precursor to the more familiar current senses of beef (and indeed beefing) relating to arguments, fights, and feuds.

Sadly, we can't squeeze another word in edgeways (to contribute something to a conversation, usually with the implication that this is difficult because the other speakers are talking incessantly). T minus three months until the next quarterly update. Join us then.

Learn More

For more insight into the surprising joint linguistic origins of the words glamour and grammar, see this blog post. These new word notes include discussion of the word coruscating (recommended reading), and this piece focuses on updates around Indo-European words. A selection of highlights from the list of new words added, new senses added, and additions to unrevised entries is available too.

View original post here:

Artificial General Intelligence, Shrinkflation and Snackable are Some of the Many New, Revised, or Updated Words and ... - LJ INFOdocket