Category Archives: AI

'Big risks': Obama and tech experts address harms of AI to marginalized communities – NBC News

CHICAGO – More must be done to curb AI's potential for harm or the further marginalization of people of color, a panel of experts weighing the ever-widening reach of AI warned last week.

The warning came during a panel discussion here at the Obama Foundation's Democracy Forum, a yearly event for thought leaders to exchange ideas on how to create a more equitable society. This year's forum was focused on the advances and challenges of AI.

During a panel titled "Weighing AI and Human Progress," Alondra Nelson, a professor of social science at the Institute for Advanced Study, said AI tools can be incorrect and even perpetuate discrimination.

"There's already evidence that the tools sometimes discriminate and sort of amplify and exacerbate bias in life – big problems that we're already trying to grapple with in society," Nelson said.

A 2021 paper published by AI researchers revealed how large language models can reinforce racism and other forms of oppression. People in positions of privilege tend to be overrepresented in the training data for language models, and that data incorporates encoded biases like racism, misogyny and ableism.

Furthermore, in just the last year, multiple Black people have said they were misidentified by facial recognition technology, which is based on AI, leading to unfair criminalization. In Georgia, 28-year-old Randall Reid said he was falsely arrested and jailed in 2022 after Louisiana authorities used facial recognition technology to secure an arrest warrant linking him to three men involved in a theft. Noticeable physical differences, including a mole on his face, prompted a Jefferson Parish sheriff to rescind the warrant.

Porcha Woodruff sued the city of Detroit for a false arrest in February. Her lawsuit accuses authorities of using an unreliable facial recognition match in a photo lineup linking her to a carjacking and robbery. Woodruff, who was eight months pregnant at the time, was charged and released on a $100,000 personal bond. The case was later dropped for insufficient evidence, according to the lawsuit.

In polls, Black people have already expressed skepticism over the technology. In April the Pew Research Center found that 20% of Black adults who see racial bias and unfair treatment in hiring as an issue said they think AI would make it worse, compared to about 1 in 10 white, Asian and Latino adults.

Former President Barack Obama, in the forum's keynote address, said he was encouraged by the Biden administration's recently signed executive order on AI, which established broad federal oversight of and investment in the technology and which Obama provided advice on, but acknowledged that there are "some big risks" associated with it.

During the panel, Hany Farid, a professor at the University of California, Berkeley, said that predictive AI in hiring, in the criminal legal system and even in banking can sometimes perpetuate human biases.

"That predictive AI is based on historical data," Farid said. "So, if your historical data is biased – which it is, against people of color, against women, against the LGBTQ community – well, guess what? Your AI is going to be biased. So, when we push these systems without fully understanding them, all we are doing is repeating history."

Over the past two years, Nelson has been working within the White House Office of Science and Technology Policy, focusing on the equitable innovation of AI to include many people and voices, she said. Under the Biden administration, her team developed a Blueprint for an AI Bill of Rights, a guide to protect people from the threats of automated systems that includes insights from journalists, policymakers, researchers and other experts.

More conversations about AI are happening around the globe, Nelson said, which is "really important," and she hopes that society will seize the opportunity.

"Even if you're not an expert in mathematics, you can have an opinion about this very powerful tool that's going to accomplish a quite significant social transformation," Nelson said. "We have choices to make as a society about what we want our future to look like, and how we want these tools to be used in that future – and it really is going to fall to all of us and all of you to do that work."

Claretta Bellamy is a fellow for NBC News.

Chamath Palihapitiya says there's "a reasonable case to make" that the job of VC "doesn't exist" in a world of AI-powered two-person startups – Fortune

If you accept the argument that today's artificial intelligence boom will lead to dramatic productivity gains, it follows that smaller companies will be able to accomplish things that only larger ones could in the past.

In a world like that, venture capitalists might need to change their approach to funding startups. So believes billionaire investor Chamath Palihapitiya, a former Facebook executive and the CEO of Silicon Valley VC firm Social Capital.

"It seems pretty reasonable and logical that AI productivity gains will lead to tens or hundreds of millions of startups made up of only one or two people," he said on a Friday episode of the All-In Podcast.

"There's a lot of sort of financial engineering that kind of goes away in that world," he said. "I think the job of the venture capitalist changes really profoundly. I think there's a reasonable case to make that it doesn't exist."

Palihapitiya became the face of the SPAC boom-and-bust a few years ago due to his involvement with special purpose acquisition companies. Also known as blank check companies, SPACs are shell corporations listed on a stock exchange that acquire a private company, thereby making it public while skipping the rigors of the IPO process.

At one point, Palihapitiya suggested that he might become his generation's version of Berkshire Hathaway chairman Warren Buffett. "I do want to have a Berkshire-like instrument that is all things, you know, not to sound egotistical, but all things Chamath, all things Social Capital," he said in early 2021.

Buffett's right-hand man at Berkshire, Charlie Munger, recently expressed his disdain for venture capitalists. "You don't want to make money by screwing your investors, and that's what a lot of venture capitalists do," the 99-year-old said on the Acquired podcast, adding, "To hell with them!"

Palihapitiya suggested that VCs might be replaced at some level by an automated system of "capital against objectives" – "you want to be making many, many, many small $100,000 [or] $500,000 bets."

Once a tiny-team startup gets to a certain level, it can go and get "the $100 and $200 million checks," he said, adding, "I don't know how else all of this gets supported financially."

Many Silicon Valley leaders expect AI will lead to some types of jobs going away, but that overall it will result in greater productivity and more jobs. Among them is Jensen Huang, the billionaire CEO of Nvidia, which makes the chips that are in hot demand from companies racing to launch AI services.

"My sense is that it's likely to generate jobs," he recently told the Acquired podcast. "The first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people, because they want to expand into more areas."

He added, "Humans have a lot of ideas."

Here’s what we know about generative AI’s impact on white-collar work – Financial Times

EU’s AI Act negotiations hit the brakes over foundation models – EURACTIV

A technical meeting on the EU's AI regulation broke down on Friday (10 November) after large EU countries asked to retract the proposed approach for foundation models. Unless the deadlock is broken in the coming days, the whole legislation is at risk.

The AI Act is a landmark bill to regulate Artificial Intelligence following a risk-based approach. The file is currently in the last phase of the legislative process, with the main EU institutions gathered in so-called trilogues to hash out the final provisions of the law.

Foundation models have become the sticking point in this late phase of the negotiations. With the rise of ChatGPT, a popular chatbot based on OpenAI's powerful GPT-4 model, EU policymakers have been wondering how best to cover this type of AI in the upcoming law.

At the last political trilogue on 24 October, there seemed to be a consensus to introduce rules for foundation models following a tiered approach – namely, introducing tighter rules for the most powerful models, those bound to have more impact on society.

This approach, which goes along similar lines to the Digital Markets Act (DMA) and Digital Services Act (DSA), was seen as a concession from the side of the European Parliament, which would have preferred horizontal rules for all foundation models.

The point of the tiered approach was to put the harshest obligations on the leading providers that currently are non-European companies. However, this approach has faced mounting opposition from large European countries.

On Sunday, the Spanish presidency circulated a first draft that put the tiered approach in black and white for internal feedback. The European Parliament's co-rapporteurs replied with some modifications on Wednesday, maintaining the overall structure of the provisions.

However, at a meeting of the Telecom Working Party, a technical body of the EU Council of Ministers, on Thursday, representatives from several member states – most notably France, Germany and Italy – pushed against any type of regulation for foundation models.

Leading the charge against any regulation of foundation models in the AI rulebook is Mistral, a French AI start-up that has thrown down the gauntlet to Big Tech. Cédric O, France's former state secretary for digital, is driving Mistral's lobbying efforts, arguing that the AI Act could kill the company.

Meanwhile, Germany is being pressured by its own leading AI company, Aleph Alpha, which Euractiv understands has very high-level connections with the German establishment. All these companies fear the EU regulation might put them on the back foot compared with US and Chinese competitors.

Faced with these strong stances from political heavyweights, and despite its earlier efforts to broker an agreement with the European Parliament, the Spanish presidency proposed a general rethinking of the provisions on foundation models.

Pressed for an hour and a half about the reasons for such a change of direction, the presidency argued, among other things, that the tiered approach would have amounted to "a regulation within the regulation" and that it could jeopardise innovation and the risk-based approach.

The European Commission originally proposed the tiered approach, which would have put the EU executive in the driving seat for enforcing the rules on foundation models. However, the Commission did not defend it during the technical meeting.

The European Parliament's representatives ended the meeting two hours early because there was nothing else to discuss. Euractiv understands that regulating foundation models is a red line for the parliamentarians, without which an agreement cannot be reached.

"The ball is now in the Council's court to come up with a proposal," a parliamentary official told Euractiv on condition of anonymity, stressing that the presidency did not have an alternative solution to the tiered approach.

A second EU official, also speaking anonymously, told Euractiv that the presidency is trying to convince the reluctant member states, which oppose regulating systemic actors at the model level but not at the system level.

At the same time, Euractiv understands that a growing faction inside the most reluctant member states is opposing the AI Act as a whole, considering it overregulation. Indeed, if no solution is found soon, the entire law might be at risk.

The EU policymakers were expected to seal a political agreement at the next trilogue on 6 December, which means that landing zones for the most critical parts should be more or less in sight by the end of November.

If no agreement is reached in December, the outgoing Spanish presidency would have no incentive to continue the work at the technical level, and the upcoming Belgian presidency would have only a few weeks to tie up the loose ends of such a complex file before the European Parliament is dissolved for the EU elections next June.

Moreover, a general rethinking of the approach to foundation models would also require a deep revision of the regulation's governance architecture and of the provisions on responsibilities along the AI value chain, for which there might simply not be enough time.

When the AI Act was proposed in April 2021, the EU had a first-mover advantage in setting the international standard for regulating Artificial Intelligence. As the hype around AI has grown, policymakers in the US, UK and China have become increasingly active.

Failing to agree on the EU's AI rulebook under this mandate would not only mean losing momentum; it would also result in Brussels losing ground to other jurisdictions.

The Telecom Working Party is due to meet again next Tuesday. Another technical meeting is scheduled among the EU co-legislators on the same day. Euractiv understands negotiations have now been escalated to the highest political level to break the deadlock.

"The AI Act is on the line now," a third EU official told Euractiv. "It's now or never."

[Edited by Nathalie Weatherald]

AI robotics' "GPT moment" is near – TechCrunch

It's no secret that foundation models have transformed AI in the digital world. Large language models (LLMs) like ChatGPT, LLaMA, and Bard revolutionized AI for language. While OpenAI's GPT models aren't the only large language models available, they have achieved the most mainstream recognition for taking text and image inputs and delivering human-like responses – even with some tasks requiring complex problem-solving and advanced reasoning.

ChatGPT's viral and widespread adoption has largely shaped how society understands this new moment for artificial intelligence.

The next advancement that will define AI for generations is robotics. Building AI-powered robots that can learn how to interact with the physical world will enhance all forms of repetitive work in sectors ranging from logistics, transportation, and manufacturing to retail, agriculture, and even healthcare. It will also unlock as many efficiencies in the physical world as weve seen in the digital world over the past few decades.

While there is a unique set of problems to solve within robotics compared to language, there are similarities across the core foundational concepts. And some of the brightest minds in AI have made significant progress in building the GPT for robotics.

To understand how to build the GPT for robotics, first look at the core pillars that have enabled the success of LLMs such as GPT.

GPT is an AI model trained on a vast, diverse dataset. Previously, engineers collected data and trained a specific AI for a specific problem; to solve another problem, they would need to collect new data. Another problem? New data yet again. With a foundation model approach, the exact opposite happens.

Instead of building niche AIs for every use case, one model can be used universally, and that one very general model is more successful than every specialized model. The AI in a foundation model performs better even on a specific task: it can leverage learnings from other tasks and generalize to new tasks better because it has learned additional skills from having to perform well across a diverse set of tasks.

To have a generalized AI, you first need access to a vast amount of diverse data. OpenAI obtained the real-world data needed to train the GPT models reasonably efficiently: GPT was trained on a large and diverse dataset collected from across the internet, including books, news articles, social media posts, code, and more.

It's not just the size of the dataset that matters; curating high-quality, high-value data also plays a huge role. The GPT models have achieved unprecedented performance because their high-quality datasets are informed predominantly by the tasks users care about and the most helpful answers.

OpenAI employs reinforcement learning from human feedback (RLHF) to align the model's responses with human preferences (e.g., what's considered beneficial to a user). Pure supervised learning (SL) is not enough, because SL can only approach a problem that has a clear pattern or set of examples; LLMs require the AI to achieve a goal without a unique, correct answer. Enter RLHF.

RLHF allows the algorithm to move toward a goal through trial and error while a human acknowledges correct answers (high reward) or rejects incorrect ones (low reward). The AI finds the reward function that best explains the human preference and then uses RL to learn how to get there. ChatGPT can deliver responses that mirror or exceed human-level capabilities by learning from human feedback.
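
To make that reward-learning step concrete, here is a minimal sketch of the reward-modeling phase of RLHF, assuming a standard Bradley–Terry-style pairwise loss over human preference pairs. The model, dimensions and random tensors are illustrative stand-ins, not OpenAI's actual implementation.

```python
# Minimal sketch of RLHF's reward-modeling step (illustrative, not OpenAI's code).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; a higher score means humans prefer it more."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for embeddings of a human-preferred ("chosen") response and a
# rejected one, from labeled comparison data.
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)

# Bradley-Terry loss: push the chosen score above the rejected score.
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
# A policy is then fine-tuned with RL (e.g., PPO) to maximize this learned reward.
```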

The same core technology that allows GPT to see, think, and even speak also enables machines to see, think, and act. Robots powered by a foundation model can understand their physical surroundings, make informed decisions, and adapt their actions to changing circumstances.

The GPT for robotics is being built the same way GPT was, laying the groundwork for a revolution that will, yet again, redefine AI as we know it.

By taking a foundation model approach, you can also build one AI that works across multiple tasks in the physical world. A few years ago, experts advised making a specialized AI for robots that pick and pack grocery items – a model different from one that sorts various electrical parts, which in turn differs from the model unloading pallets from a truck.

This paradigm shift to a foundation model enables the AI to better respond to edge-case scenarios that frequently exist in unstructured real-world environments and might otherwise stump models with narrower training. Building one generalized AI for all of these scenarios is more successful. It's by training on everything that you get the human-level autonomy we've been missing from previous generations of robots.

Teaching a robot which actions lead to success and which lead to failure is extremely difficult. It requires extensive high-quality data based on real-world physical interactions. Single lab settings or video examples are not reliable or robust enough sources (e.g., YouTube videos fail to capture the details of physical interaction, and academic datasets tend to be limited in scope).

Unlike AI for language or image processing, no preexisting dataset represents how robots should interact with the physical world. Thus, the large, high-quality dataset becomes a more complex challenge to solve in robotics, and deploying a fleet of robots in production is the only way to build a diverse dataset.

Similar to answering text questions with human-level capability, robotic control and manipulation require an agent to seek progress toward a goal that has no single, unique, correct answer (e.g., "What's a successful way to pick up this red onion?"). Once again, more than pure supervised learning is required.

You need a robot running deep reinforcement learning (deep RL) to succeed in robotics. This autonomous, self-learning approach combines RL with deep neural networks to unlock higher levels of performance – the AI will automatically adapt its learning strategies and continue to fine-tune its skills as it experiences new scenarios.
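
As a minimal illustration of the deep RL idea, the sketch below runs a policy-gradient loop (REINFORCE) with a small neural policy; the toy observation, action space and reward function are invented stand-ins for a real robotic task, not any particular lab's method.

```python
# Minimal deep RL sketch (REINFORCE): a neural policy learns by trial and error.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reward(state, action):
    # Hypothetical success signal: the "right" action depends on the state.
    return 1.0 if action == (2 if state[0] > 0 else 0) else 0.0

for step in range(500):
    state = torch.randn(4)                    # observation (e.g., camera features)
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                    # explore by sampling an action
    r = reward(state, action.item())          # environment scores the attempt
    loss = -dist.log_prob(action) * r         # reinforce actions that earned reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```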

In the past few years, some of the worlds brightest AI and robotics experts laid the technical and commercial groundwork for a robotic foundation model revolution that will redefine the future of artificial intelligence.

While these AI models have been built similarly to GPT, achieving human-level autonomy in the physical world is a different scientific challenge for two reasons: there is no preexisting dataset of physical interactions, so diverse training data must be gathered by robot fleets deployed in production; and physical tasks have no single correct answer, so they demand reinforcement learning rather than pure supervised learning.

The growth trajectory of robotic foundation models is accelerating rapidly. Robotic applications, particularly within tasks that require precise object manipulation, are already being deployed in real-world production environments, and we'll see an exponential number of commercially viable robotic applications deployed at scale in 2024.

The World Is Running Out of Data to Feed AI, Experts Warn – ScienceAlert

As artificial intelligence (AI) reaches the peak of its popularity, researchers have warned the industry might be running out of training data – the fuel that runs powerful AI systems.

This could slow down the growth of AI models, especially large language models, and may even alter the trajectory of the AI revolution.

But why is a potential lack of data an issue, considering how much of it there is on the web? And is there a way to address the risk?

We need a lot of data to train powerful, accurate and high-quality AI algorithms. For instance, ChatGPT was trained on 570 gigabytes of text data, or about 300 billion words.

Similarly, the Stable Diffusion algorithm (which is behind many AI image-generating apps such as DALL-E, Lensa and Midjourney) was trained on the LAION-5B dataset, comprising 5.8 billion image-text pairs. If an algorithm is trained on an insufficient amount of data, it will produce inaccurate or low-quality outputs.

The quality of the training data is also important. Low-quality data such as social media posts or blurry photographs are easy to source, but aren't sufficient to train high-performing AI models.

Text taken from social media platforms might be biased or prejudiced, or may include disinformation or illegal content which could be replicated by the model. For example, when Microsoft tried to train its AI bot using Twitter content, it learned to produce racist and misogynistic outputs.

This is why AI developers seek out high-quality content such as text from books, online articles, scientific papers, Wikipedia, and certain filtered web content. The Google Assistant was trained on 11,000 romance novels taken from self-publishing site Smashwords to make it more conversational.

The AI industry has been training AI systems on ever-larger datasets, which is why we now have high-performing models such as ChatGPT or DALL-E 3. At the same time, research shows online data stocks are growing much slower than datasets used to train AI.

In a paper published last year, a group of researchers predicted we will run out of high-quality text data before 2026 if the current AI training trends continue. They also estimated low-quality language data will be exhausted sometime between 2030 and 2050, and low-quality image data between 2030 and 2060.

AI could contribute up to US$15.7 trillion (A$24.1 trillion) to the world economy by 2030, according to accounting and consulting group PwC. But running out of usable data could slow down its development.

While the above points might alarm some AI fans, the situation may not be as bad as it seems. There are many unknowns about how AI models will develop in the future, as well as a few ways to address the risk of data shortages.

One opportunity is for AI developers to improve algorithms so they use the data they already have more efficiently.

It's likely in the coming years they will be able to train high-performing AI systems using less data, and possibly less computational power. This would also help reduce AI's carbon footprint.

Another option is to use AI to create synthetic data to train systems. In other words, developers can simply generate the data they need, curated to suit their particular AI model.
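
As a toy illustration of that idea (not how commercial synthetic-data services actually work), the sketch below fits a simple Gaussian model to a small invented "real" tabular dataset and samples fresh synthetic rows from it; every column name and number here is hypothetical.

```python
# Toy synthetic-data generation: model the real data's distribution, then sample.
import numpy as np

rng = np.random.default_rng(0)
# Invented "real" dataset: 1,000 rows of (age, income).
real = rng.normal(loc=[35.0, 52000.0], scale=[8.0, 9000.0], size=(1000, 2))

# Fit a simple Gaussian model to the real data.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw as many synthetic rows as needed; no real record is copied verbatim.
synthetic = rng.multivariate_normal(mean, cov, size=5000)
print(synthetic[:3])
```

Production generators use far more sophisticated models than a single Gaussian, but the principle is the same: learn the statistics of the data you have, then manufacture as much training data as you need.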

Several projects are already using synthetic content, often sourced from data-generating services such as Mostly AI. This will become more common in the future.

Developers are also searching for content outside the free online space, such as that held by large publishers and offline repositories. Think about the millions of texts published before the internet. Made available digitally, they could provide a new source of data for AI projects.

News Corp, one of the world's largest news content owners (which has much of its content behind a paywall), recently said it was negotiating content deals with AI developers. Such deals would force AI companies to pay for training data, whereas they have mostly scraped it off the internet for free so far.

Content creators have protested against the unauthorised use of their content to train AI models, with some suing companies such as Microsoft, OpenAI and Stability AI. Being remunerated for their work may help restore some of the power imbalance that exists between creatives and AI companies.

Rita Matulionyte, Senior Lecturer in Law, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

VMware, Intel to help businesses build and run AI models on … – CRN Australia

VMware and Intel said they are collaborating to help businesses adopt privacy-minded AI solutions faster by eliminating the guesswork to make the solutions run well on existing infrastructure.

At this week's VMware Explore 2023 event in Barcelona, the virtualisation giant said it has teamed with Intel to develop a validated reference architecture called VMware Private AI with Intel, which consists of VMware Cloud Foundation and its AI computing features as well as Intel's Xeon CPUs, Max Series GPUs and AI software suite.

The reference architecture is set for release by next month, and it will be supported by servers from Dell Technologies, Hewlett Packard Enterprise and Lenovo running fourth-generation Intel Xeon CPUs and Intel Max Series GPUs.

Chris Wolf, vice president of VMware AI Labs, said in a statement to CRN that the reference architecture will create new opportunities for VMware and Intels joint partners.

"Our broad and growing ecosystem of AI apps and services, MLOps tools, AI hardware and data services is creating considerable optionality by which our joint partners can customise and differentiate," he said.

The reference architecture is an alternative to the VMware Private AI Foundation with Nvidia platform, which was unveiled in August and enables businesses to develop and run AI models on Dell, HPE and Lenovo servers powered by Nvidia GPUs, DPUs and SmartNICs.

Intel is keen to challenge Nvidia's dominant position in the AI computing space with not just GPUs but also CPUs with AI acceleration capabilities such as Advanced Matrix Extensions.

Tuning its hardware and software to run AI workloads well on VMware's multi-cloud platform is an important step in giving the semiconductor giant a better fighting chance as it ramps up competition in silicon.

"With the potential of artificial intelligence to unlock powerful new possibilities and improve the life of every person on the planet, Intel and VMware are well equipped to lead enterprises into this new era of AI, powered by silicon and software," said Sandra Rivera, the outgoing executive vice president and general manager of Intel's Data Centre and AI Group, in a statement.

Enabling AI work with emphasis on privacy and compliance

The main purpose of VMware Private AI with Intel is to enable the virtualisation giant's customers to use existing Intel-based infrastructure and open-source software to simplify building and deploying AI models, with an emphasis on practical privacy and compliance needs, according to VMware.

This applies to infrastructure wherever enterprise data is being created, processed and consumed, whether in a public cloud, enterprise data centre or at the edge, the company said.

By tapping into existing infrastructure, businesses can reduce total cost of ownership and address concerns of environmental sustainability, it added.

"When it comes to AI, there is no longer any reason to debate trade-offs in choice, privacy and control. Private AI empowers customers with all three, enabling them to accelerate AI adoption while future-proofing their AI infrastructure," Wolf said.

The AI computing reference architecture covers the crucial steps of building and running AI models, from data preparation and model training to fine-tuning and inferencing.

The use cases are wide open, from accelerating scientific discovery to enriching business and consumer services.

"VMware Private AI with Intel will help our mutual customers dramatically increase worker productivity, ignite transformation across major business functions and drive economic impact," Wolf added.

Intels AI software suite consists of end-to-end open-source software and optional licensing components to enable developers to run full AI pipeline workflows, according to VMware.

This includes Intel's oneAPI framework, which lets developers write code once and target multiple types of processors, as well as Intel's Transformer Extensions and PyTorch Extensions.
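
As a rough sketch of what using one of those components can look like, the snippet below applies Intel Extension for PyTorch (ipex) to a toy model to prepare it for Xeon CPU inference. The package and its optimize() entry point are real, but the exact flags vary by version, and this is an illustrative assumption rather than code drawn from the reference architecture itself.

```python
# Illustrative use of Intel Extension for PyTorch for CPU inference (flags may
# differ by version; assumes the intel_extension_for_pytorch package is installed).
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).eval()

# ipex.optimize applies CPU-oriented graph/operator optimizations and optional
# bfloat16 casts that target Xeon acceleration units such as AMX.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(torch.randn(8, 512))
```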

VMware Cloud Foundation provides complementary features for building and running AI models, such as vSAN Express Storage Architecture for accelerating capabilities such as encryption, vSphere Distributed Resource Scheduler for maximising hardware utilisation for AI models and training, and VMware NSX for micro-segmentation and threat protection capabilities.

The multi-cloud platform also comes with secure boot and Virtual Trusted Platform Module features for enabling model and data confidentiality.

New international consortium formed to create trustworthy and … – Argonne National Laboratory

A global consortium of scientists from federal laboratories, research institutes, academia, and industry has formed to address the challenges of building large-scale artificial intelligence (AI) systems and advancing trustworthy and reliable AI for scientific discovery.

The Trillion Parameter Consortium (TPC) brings together teams of researchers engaged in creating large-scale generative AI models to address key challenges in advancing AI for science. These challenges include developing scalable model architectures and training strategies; organizing and curating scientific data for training models; optimizing AI libraries for current and future exascale computing platforms; and developing deep evaluation platforms to assess progress on scientific task learning, reliability and trust.

Toward these ends, TPC will:

The consortium has formed a dynamic set of foundational work areas addressing three facets of the complexities of building large-scale AI models:

TPC aims to provide the community with a venue in which multiple large model-building initiatives can collaborate to leverage global efforts, with flexibility to accommodate the diverse goals of individual initiatives. TPC includes teams undertaking initiatives to leverage emerging exascale computing platforms to train LLMs or alternative model architectures on scientific research – including papers, scientific codes, and observational and experimental data – to advance innovation and discoveries.

Trillion-parameter models represent the frontier of large-scale AI, with only the largest commercial AI systems currently approaching this scale.

Training LLMs with this many parameters requires exascale-class computing resources, such as those being deployed at several U.S. Department of Energy (DOE) national laboratories and by multiple TPC founding partners in Japan, Europe, and elsewhere. Even with such resources, training a state-of-the-art one-trillion-parameter model will require months of dedicated time – intractable on all but the largest systems. Consequently, such efforts will involve large, multi-disciplinary, multi-institutional teams. TPC is envisioned as a vehicle to support collaboration and cooperative efforts among and within such teams.
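
A rough back-of-envelope shows why, using the common estimate of about 6 × N × D floating-point operations to train a transformer with N parameters on D tokens; the token count and sustained throughput below are illustrative assumptions, not TPC figures.

```python
# Back-of-envelope training cost for a one-trillion-parameter model,
# using the common ~6 * N * D FLOPs rule of thumb for transformer training.
params = 1.0e12                      # N: one trillion parameters
tokens = 1.0e12                      # D: assumed number of training tokens
total_flops = 6 * params * tokens    # ~6e24 floating-point operations

sustained = 1.0e18                   # assumed 1 exaFLOP/s sustained throughput
days = total_flops / sustained / 86400
print(f"~{days:.0f} days of dedicated exascale time")  # ~69 days, i.e. months
```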

"At our laboratory and at a growing number of partner institutions around the world, teams are beginning to develop frontier AI models for scientific use and are preparing enormous collections of previously untapped scientific data for training," said Rick Stevens, associate laboratory director of computing, environment and life sciences at DOE's Argonne National Laboratory and professor of computer science at the University of Chicago. "We collaboratively created TPC to accelerate these initiatives and to rapidly create the knowledge and tools necessary for creating AI models with the ability to not only answer domain-specific questions but to synthesize knowledge across scientific disciplines."

The founding partners of TPC are from the following organizations (listed in organizational alphabetical order, with a point-of-contact):

TPC contact: Charlie Catlett

Learn more at tpc.dev.

Here’s How Violent Extremists Are Exploiting Generative AI Tools – WIRED

"We're going to partner with Microsoft to figure out if there are ways, using our archive of material, to create a sort of gen AI detection system in order to counter the emerging threat that gen AI will be used for terrorist content at scale," Hadley says. "We're confident that gen AI can be used to defend against hostile uses of gen AI."

The partnership was announced today, on the eve of the Christchurch Call Leaders' Summit in Paris; the Christchurch Call is a movement designed to eradicate terrorism and extremist content from the internet.

"The use of digital platforms to spread violent extremist content is an urgent issue with real-world consequences," Brad Smith, vice chair and president at Microsoft, said in a statement. "By combining Tech Against Terrorism's capabilities with AI, we hope to help create a safer world, both online and off."

While companies like Microsoft, Google, and Facebook all have their own AI research divisions and are likely already deploying their own resources to combat this issue, the new initiative will ultimately aid those companies that can't combat these efforts on their own.

"This will be particularly important for smaller platforms that don't have their own AI research centers," Hadley says. "Even now, with the hashing databases, smaller platforms can just become overwhelmed by this content."

The threat of AI-generated content is not limited to extremist groups. Last month, the Internet Watch Foundation, a UK-based nonprofit that works to eradicate child exploitation content from the internet, published a report detailing the growing presence of child sexual abuse material (CSAM) created by AI tools on the dark web.

The researchers found over 20,000 AI-generated images posted to one dark web CSAM forum over the course of just one month, with 11,108 of these images judged most likely to be criminal by the IWF researchers. As the IWF researchers wrote in their report, "These AI images can be so convincing that they are indistinguishable from real images."

Moderna Highlights its Digital and AI Strategy and Progress at … – Moderna Investor Relations

Moderna Highlights its Digital and AI Strategy and Progress at Second Digital Investor Event

The Company demonstrates how its integrated Artificial Intelligence ecosystem accelerates innovation at scale and creates value across the enterprise

Moderna to present case studies on how the organization is building a real-time AI Company

CAMBRIDGE, MA / ACCESSWIRE / November 8, 2023 / Moderna, Inc. (NASDAQ:MRNA) today will unveil its comprehensive AI and digital strategy at its second Digital Investor Event. Building off its Manufacturing and Digital Day hosted in March 2020, the Company will showcase how AI continues to transform the organization and enhance its value creation. Today's presentation will highlight Moderna's leading position in AI-powered innovation, its ability to harness the power of AI to improve efficiency and scalability across the value chain, and its development of an AI-centric culture.

Since its founding, Moderna has been a digital-first company. Building on its strong foundation of more than a decade of data in developing mRNA medicines, combined with its unique platform approach and cloud-native infrastructure, the Company is well-positioned to continue to scale using AI.

"Just as the personal computer changed the way we work and live, AI will completely transform our everyday lives. At Moderna, we are leading the charge of this AI revolution in medicine. It is as much about technology as it is about people and ensuring they have the right skills," said Stphane Bancel, Chief Executive Officer of Moderna. "We were built on the premise that the natural flow of information in life, mRNA, can be used to develop transformative medicines, and by embedding AI into every aspect of how we work, we are accelerating our mission to deliver the greatest possible impact to people through mRNA medicines."

Moderna has already leveraged the impact of AI to increase its speed to market as well as advance the continuous improvement and quality of its products. AI helps optimize each aspect of Moderna's value chain - from drug design to commercial manufacturing. The Company will present a case study on how mRNA-4157 (V940), its individualized neoantigen therapy (INT), leverages a series of fully autonomous, integrated AI algorithms. These proprietary algorithms design the specific therapy for each individual patient. AI algorithms are also used to optimize the timely manufacture and delivery of INT to the patient. Moderna will present a detailed overview of the Company's AI-optimized manufacturing scheduling system, ensuring the timely administration of INT for each patient.

Moderna maintains an AI-centric culture through both educational opportunities for employees and easy-to-implement AI-powered tools. The Company's AI Academy also offers a unique and immersive learning experience to encourage employees to become proficient AI users and enthusiasts. After only two weeks of development, Moderna launched its own generative AI product, mChat, in May 2023. As of last month, nearly 65% of employees are active users, embedding the tool into their specific functions for customized support and meaningful improvements in workflow efficiency and efficacy.

"While 90% of tech executives believe AI is the center of the next tech revolution, only 10% of AI projects make it into production. "We're committed to not only changing this narrative, but to leading by example," said Brad Miller, Moderna's Chief Information Officer. "We know that successful AI implementation means putting our employees at its center, requiring an intentional cultural transformation and a mindset shift around how each employee approaches their work. Rather than adding complexity and viscosity as we grow as a company, we embrace and democratize AI so that every employee can create value measured by efficiency and efficacy."

Webcast Information

Moderna will host a webcast at 8:00 am ET on November 8, 2023. A webcast of the event will be available under "Events and Presentations" in the Investors section of the Moderna website.

Webcast: https://investors.modernatx.com

The archived webcast will be available on Moderna's website and will be available for one year following the call.

About Moderna

In over 10 years since its inception, Moderna has transformed from a research-stage company advancing programs in the field of messenger RNA (mRNA) to an enterprise with a diverse clinical portfolio of vaccines and therapeutics across seven modalities, a broad intellectual property portfolio and integrated manufacturing facilities that allow for rapid clinical and commercial production at scale. Moderna maintains alliances with a broad range of domestic and overseas government and commercial collaborators, which has allowed for the pursuit of both groundbreaking science and rapid scaling of manufacturing. Most recently, Moderna's capabilities have come together to allow the authorized use and approval of one of the earliest and most effective vaccines against the COVID-19 pandemic.

Moderna's mRNA platform builds on continuous advances in basic and applied mRNA science, delivery technology and manufacturing, and has allowed the development of therapeutics and vaccines for infectious diseases, immuno-oncology, rare diseases, cardiovascular diseases and auto-immune diseases. Moderna has been named a top biopharmaceutical employer by Science for the past nine years. To learn more, visit http://www.modernatx.com.

Forward-Looking Statements

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, as amended, including statements regarding: how AI is driving the next technological revolution and will transform our everyday lives; Moderna's investments in digital and AI; and Moderna's ability to harness the power of AI to accelerate innovation and improve efficiency and scalability across the value chain. The forward-looking statements in this press release are neither promises nor guarantees, and you should not place undue reliance on these forward-looking statements because they involve known and unknown risks, uncertainties, and other factors, many of which are beyond Moderna's control and which could cause actual results to differ materially from those expressed or implied by these forward-looking statements. These risks, uncertainties, and other factors include, among others, those risks and uncertainties described under the heading "Risk Factors" in Moderna's Annual Report on Form 10-K for the fiscal year ended December 31, 2022, filed with the U.S. Securities and Exchange Commission (SEC), and in subsequent filings made by Moderna with the SEC, which are available on the SEC's website at http://www.sec.gov. Except as required by law, Moderna disclaims any intention or responsibility for updating or revising any forward-looking statements contained in this press release in the event of new information, future developments or otherwise. These forward-looking statements are based on Moderna's current expectations and speak only as of the date of this press release.

Moderna Contacts

Media:
Kelly Cunningham
Associate Director, Communications & Media
617-899-7321
Kelly.Cunningham@modernatx.com

Investors:
Lavina Talukdar
Senior Vice President & Head of Investor Relations
617-209-5834
Lavina.Talukdar@modernatx.com

SOURCE: Moderna, Inc.

View source version on accesswire.com: https://www.accesswire.com/801019/moderna-highlights-its-digital-and-ai-strategy-and-progress-at-second-digital-investor-event
