Category Archives: Artificial Intelligence

The Florida House passes a bill requiring disclaimers on political ads with AI – WMNF

AI - artificial intelligence graphic by Black Kira via iStock for WMNF News.

© 2024 The News Service of Florida

Political advertisements created using generative artificial intelligence could soon require a disclaimer that makes clear the technology was involved, under a measure passed Wednesday by the Florida House.

House members voted 104-8 to approve the bill (HB 919) amid questions by some Democrats about a criminal penalty included in the measure.

Under the bill, political advertisements using images, video, audio, graphics, or other digital content that are created using artificial intelligence would have to include the following disclaimer: "Created in whole or in part with the use of generative artificial intelligence (AI)."

People who pay for, sponsor or approve political ads found to be in violation could face first-degree misdemeanor charges.

Bill sponsor Alex Rizo, R-Hialeah, pointed to artificial intelligence possibly being used to create misleading images or other content.

"The reason why we wanted to give this (bill) a little more teeth than usual election bills or election laws have, is because now for the first time there is a real concern to really change reality on people," Rizo said.

A similar Senate bill (SB 850) is awaiting consideration by the Senate.

Read more from the original source:
The Florida House passes a bill requiring disclaimers on political ads with AI - WMNF

Artificial intelligence needs a scientific method-driven reset – Nature.com

More here:
Artificial intelligence needs a scientific method-driven reset - Nature.com

We’ve been here before: AI promised humanlike machines in 1958 – The Conversation

A room-size computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the U.S. Navy as saying that the Perceptron would lead to machines that "will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

More than six decades later, similar claims are being made about current artificial intelligence. So, what's changed in the intervening years? In some ways, not much.

The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past and the reasons for them. While optimism drives progress, it's worth paying attention to the history.

The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern-day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.

Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
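
For readers curious what that error-correction loop looks like, here is a minimal, illustrative sketch in Python (using NumPy, on made-up toy data) of the textbook perceptron update rule: predict a label, compare it with the correct one, and nudge the weights whenever the prediction is wrong. It is a simplified software idealization, not a model of Rosenblatt's hardware.

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """X: (n_samples, n_features) array, y: labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if xi @ w + b > 0 else 0
            error = target - prediction      # 0 when correct, +/-1 when wrong
            w += lr * error * xi             # alter the "connections" (weights)
            b += lr * error
    return w, b

# Toy example: two linearly separable clusters of 2-D points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
w, b = train_perceptron(X, y)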

In the decade or so after Rosenblatt unveiled the Mark I Perceptron, experts like Marvin Minsky claimed that the world would have a machine with the general intelligence of an average human being by the mid- to late-1970s. But despite some success, humanlike intelligence was nowhere to be found.

It quickly became apparent that the AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it's nearly impossible to accurately resolve ambiguities present in everyday language, a task humans perform effortlessly. The first AI winter, or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.

However, by 1980, AI was back in business, and the first official AI boom was in full swing. There were new expert systems, AIs designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn't long before the same problems stifled excitement once again. In 1987, the second AI winter hit. Expert systems were failing because they couldn't handle novel information.

The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn't lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.

This time also marked a return to the neural-network-style perceptron, but this version was far more complex, dynamic and, most importantly, digital. The return to the neural network, along with the invention of the web browser and an increase in computing power, made it easier to collect images, mine for data and distribute datasets for machine learning tasks.

Fast forward to today, and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term "artificial general intelligence" is used to describe the activities of LLMs such as those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.

Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today's artificial neural networks. In 2023, Microsoft published a paper saying that GPT-4's performance is "strikingly close to human-level performance."

But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.

For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm, unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.

Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it's a snowplow 97% of the time.

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it's a consideration worth taking seriously in light of how things have gone in the past.

The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.

Read the original here:
We've been here before: AI promised humanlike machines in 1958 - The Conversation

Artificial Intelligence Technology Solutions Inc. (OTCMKTS:AITX) Short Interest Update – AmericanBankingNEWS

Artificial Intelligence Technology Solutions Inc. (OTCMKTS:AITX) saw a large decrease in short interest during the month of February. As of February 15th, there was short interest totalling 263,100 shares, a decrease of 88.9% from the January 31st total of 2,368,600 shares. Based on an average daily trading volume of 170,321,200 shares, the short-interest ratio is presently 0.0 days.
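
For context, the days-to-cover figure quoted here is simply the short interest divided by the average daily trading volume; plugging in the numbers above:

short-interest ratio = 263,100 shares ÷ 170,321,200 shares per day ≈ 0.0015 days, which rounds to the reported 0.0 days.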

Artificial Intelligence Technology Solutions stock opened at $0.00 on Thursday. Artificial Intelligence Technology Solutions has a twelve month low of $0.00 and a twelve month high of $0.01.

See more here:
Artificial Intelligence Technology Solutions Inc. (OTCMKTS:AITX) Short Interest Update - AmericanBankingNEWS

What you need to know about using artificial intelligence on your smartphone – WTOP

Artificial intelligence is the most talked about technology right now and much of the hype is well-deserved. How accessible is it? Is it a fingertip away?

WTOP's Neal Augenstein reports when and how the public could use artificial intelligence on their phones.

Q: Is it possible to use AI on my smartphone and if so, how do I do it?

A: Artificial intelligence is the most talked about technology right now and much of the hype is well-deserved.

Some are predicting AI will be as transformative as electricity was 100 years ago, in that it will impact virtually every industry in our lives.

If your smartphone is relatively new and updated with the latest operating system, you're using AI-powered tools without even knowing it.

All of the virtual assistants (Siri, Google Assistant, etc.) are using various forms of AI and are developing new capabilities regularly.

Another common area enhanced by AI is your phone's camera, both while you're taking pictures and videos and when you're editing them afterwards.

AI is used to detect the scene being photographed to provide real-time enhancements based on whether it's a human face or a landscape, for instance.

Adjusting exposure, contrast and color balance on the fly eliminates the need for the user to make manual adjustments.

AI is also in play for object recognition, such as knowing when the camera is looking at a QR code.

AI-powered editing tools already on your phone may allow you to get rid of items in the background or eliminate wind noise with a few taps of the screen.

If you haven't played with the editing tools associated with your phone's camera, I recommend exploring there first.

AI algorithms are also used to help optimize battery life and enhance the accuracy of biometric security and voice recognition, as well as the predictive text that seems to be everywhere on our phones.

WTOP's Neal Augenstein speaks with Data Doctors' Ken Colburn on using AI on your mobile phone.

A useful way to start using AI on your phone is by using chatbots as an alternative to search engines.

The best analogy I've heard is one that compares the process to going to a library. Traditional search engines are like asking the librarian for a specific piece of information and being told which books may contain what you seek and which shelves to start looking through.

Chatbots, on the other hand, will attempt to provide the specific information directly as if the librarian went to the shelf, picked out a book and found the exact page where the information you seek resides.

Chatbots aren't a replacement for search engines, but they can be exponentially more efficient when you're travel planning or comparing specifications while car shopping, for instance.

OpenAI's ChatGPT is considered the most advanced chatbot, but it can be a bit overwhelming for beginners. Microsoft is a big investor in OpenAI and has incorporated GPT-4 into its Bing app, which allows you to do side-by-side comparisons of search vs. chatbot results.

Once you download the Bing app for Android or iOS, try using it instead of your normal search engine to see if it's more helpful.

Microsoft's chatbot, called Copilot, will appear at the bottom center of the app.

New AI apps are being created almost daily, so if you're in search of a specific AI tool, try asking Copilot for suggestions.

Ken Colburn is founder and CEO of Data Doctors Computer Services. Ask any tech question on Facebook or X.

See the original post:
What you need to know about using artificial intelligence on your smartphone - WTOP

TotalEnergies unlocks the potential of generative artificial intelligence for its employees – Total

Paris, 27 February 2024 – TotalEnergies is among the first organizations to deploy Copilot for Microsoft 365, Microsoft's generative artificial intelligence assistant, for its employees. After making Bing Chat Enterprise, a secure AI chat solution based on internal data, available to employees in August 2023, the Company is pursuing its digital transformation.

In September 2023, TotalEnergies launched a test phase with 300 employees, with positive results. TotalEnergies therefore decided to deploy Copilot for Microsoft 365 for its employees to accelerate its operational transformation. The benefits: improved operational efficiency and greater user comfort.

TotalEnergies will also provide its teams with Microsoft Power Platform licences, a "low code-no code" application development service enabling them to create, on their own, digital applications that turn their ideas into reality. Employees will thus be able to design solutions connected to other TotalEnergies applications and databases, to solve their simple or complex day-to-day problems more quickly and efficiently.

At the same time, TotalEnergies is implementing a program to support and enhance the skills of its employees in order to help them use these new tools and get the most out of them. In 2024, every employee will receive training dedicated to the use of these new AI tools.

"In line with our pioneering spirit, TotalEnergies is committed to digital transformation and supports its employees so that they can make the most of it. The new technologies of generative artificial intelligence and of 'low code no code' will provide them with the simplification and autonomy they need to put their skills and creativity even further at the service of our company's transition strategy," said Patrick Pouyann, CEO of TotalEnergies.

***

About TotalEnergies

TotalEnergies is a global multi-energy company that produces and markets energies: oil and biofuels, natural gas and green gases, renewables and electricity. Our more than 100,000 employees are committed to energy that is ever more affordable, more sustainable, more reliable and accessible to as many people as possible. Active in nearly 130 countries, TotalEnergies puts sustainable development in all its dimensions at the heart of its projects and operations to contribute to the well-being of people.

Cautionary Note: The terms "TotalEnergies", "TotalEnergies company" or "Company" in this document are used to designate TotalEnergies SE and the consolidated entities that are directly or indirectly controlled by TotalEnergies SE. Likewise, the words "we", "us" and "our" may also be used to refer to these entities or to their employees. The entities in which TotalEnergies SE directly or indirectly owns a shareholding are separate legal entities. TotalEnergies SE has no liability for the acts or omissions of these entities. This document may contain forward-looking information and statements that are based on a number of economic data and assumptions made in a given economic, competitive and regulatory environment. They may prove to be inaccurate in the future and are subject to a number of risk factors. Neither TotalEnergies SE nor any of its subsidiaries assumes any obligation to update publicly any forward-looking information or statement, objectives or trends contained in this document whether as a result of new information, future events or otherwise. Information concerning risk factors that may affect TotalEnergies' financial results or activities is provided in the most recent Registration Document, the French-language version of which is filed by TotalEnergies SE with the French securities regulator Autorité des Marchés Financiers (AMF), and in the Form 20-F filed with the United States Securities and Exchange Commission (SEC).

Continued here:
TotalEnergies unlocks the potential of generative artificial intelligence for its employees - Total

Infusion of Artificial Intelligence in Biology – The Scientist

In the early 1990s, protein biologists invested in solving a challenge that had riddled them for decades. The protein folding problem centered on the idea that biologists should be able to predict the three-dimensional structure of a protein based on its amino acid sequence, but they hadn't been able to do so in practice. Researchers knew that the ability to determine protein structure without relying on tedious experiments would unlock a plethora of applications (better drug targets, easy protein function determination, and optimized industrial enzymes), so they persisted.

In 1994, a few researchers led by biophysicist John Moult from the University of Maryland started the biennial Critical Assessment of Protein Structure Prediction (CASP) competition as a large-scale experiment to source solutions from the collective. At every event, the brightest minds in protein biology brought forth their models that predicted structures of a few test proteins chosen by the organizers. The model that yielded structures that most closely resembled experimental data won.

David Baker uses deep learning models to create de novo proteins that are better suited to solving modern problems than natural proteins.

Ian C Haydon

"For the first several years, scientists relied on physical prediction models for these challenges," recalled David Baker, a protein design specialist at the University of Washington and a CASP competition contributor and advisor. "Proteins are made out of amino acid residues, which are made out of atoms, and you try and model all the interactions between the atoms and how they drive the protein to fold up," Baker explained.

In 2018 at CASP13, the attendees witnessed a breakthrough. Demis Hassabis, cofounder and chief executive officer at DeepMind, an artificial intelligence company, and his team challenged the status quo by using a deep learning-based model to predict protein structure. They trained their model, AlphaFold, using the sequences and structures of about 100,000 known proteins to enable it to output pattern-recognition based predictions.1

AlphaFold won the competition that year, and the field progressed rapidly thereafter. By the next CASP meeting, the DeepMind team had significantly improved their model, and AlphaFold predicted the structures of the majority of test proteins with an accuracy comparable to experimental methods.2 Based on AlphaFold's success, protein experts declared that the 50-year-old protein folding problem was largely solved. AlphaFold inspired researchers to pivot towards AI for their protein folding models; Baker and his team soon launched their open-source deep learning-based protein structure predictor RoseTTAFold.3

While these models successfully predicted the structures of almost all existing proteins, Baker was interested in proteins beyond the database, including proteins that did not exist.

AI accelerates protein design

Baker has always been interested in tinkering with proteins and especially in designing new ones. "It wasn't too long after our first successes in structure prediction that we started thinking, well, maybe instead of predicting what structure a sequence would fold up to, we could use these methods to make a completely new structure and then find out what sequence could fold to it," he said.

He and his team developed their first de novo protein, an alpha/beta protein called Top7, using physical modeling methods in 2003.4 Over the years, Baker's team and other researchers steadily expanded the list of de novo proteins.5 Now, with AI tools in their arsenal, researchers could design more complex proteins with a higher success rate, said Baker. Indeed, in the past few years, researchers, including Baker's team, have reported different protein design models.6,7 The team involved in developing one of these models, ProGen, used it to design synthetic enzymes, lysozymes, as a proof of concept.8 Experimental tests revealed that the artificial lysozymes showed catalytic efficiencies matching natural ones, demonstrating the prowess of such models in building utilitarian proteins in the lab.

"The proteins in nature evolved under the constraints of natural selection. So, they solve all the problems that were relevant for natural selection during evolution. But now, we can make proteins specifically for 21st century problems. That is what is really exciting about the field," said Baker.

Using advanced machine learning tools, researchers can create artificial proteins with new functions.

Ian C Haydon

Baker's team is tackling several such needs-of-the-hour projects. He recently developed a de novo coronavirus vaccine in collaboration with Neil King, who specializes in protein design at the University of Washington.9 His team also works on targeted cancer drugs, enzymes that break down plastic, and proteins to fix carbon dioxide.

There is always more work to be done. Proteins in cells are often part of macromolecular complexes. Current AI models work well for protein folding predictions or creating a protein with a specific binding site, but they fall short when it comes to designing more complicated complexes, such as molecular motors. "With the current methods, it's not so obvious how to design machines. That's still a research problem," said Baker.

Building bridges: AI models map cells

According to Trey Ideker, a computational biologist and functional genomics researcher at the University of California, San Diego, the AI-driven progress in protein folding was a huge milestone for biologists. "That impact is still being felt," he said. But it solved just a small part of a complex problem.

With a goal of transforming precision medicine, Trey Ideker develops AI algorithms to analyze tumor genomes.

Trey Ideker

Proteins do not work alone; they interact with other proteins in intricate pathways to enable cellular function and structure. A deeper understanding of cell structure and its determinants will help researchers identify perturbations that indicate diseased states. While cell imaging provides a snapshot of cellular architecture, researchers are far from developing real cell maps and models, according to Ideker.

"How do you AlphaFold a cell?" he questioned. "How would you fold an entire cell for every cell in your body?" Ideker intends to find the answers, and he has just the right resources to do so: a collaborative group of like-minded scientists.

As AI tools become more widespread in biology, many researchers have turned to deep learning models in their projects to improve precision medicine. With data at the crux of these models, it is vital to ensure that researchers have complete datasets to maximize their chances of success. With a goal of coordinating this progress, the NIH launched the Bridge2AI program with a focus on plugging in the key missing datasets that are needed to train future AI models to take them to the next level. "It's not AI yet; it's the bridge to AI," said Ideker.

One focus project under this initiative is the Cell Maps for AI (CM4AI) program, which aims to build spatiotemporal maps of cells and connect genotype to phenotype to get a complete picture of cell health. The scientists involved in this program will achieve this by working on all aspects of cellular biology: genetic perturbations, cell imaging for morphology detection, and protein interaction studies. Ideker leads the functional genomics subgroup in the CM4AI program.

"I'm actually optimistic we're going to get there relatively soon. But a lot of work remains and needs continued innovations in AI and data measurements," said Ideker.

Cellular image analysis: AI has an accurate eye

Maddison Masaeli and her team at Deepcell apply AI models to identify cell morphology aberrations in diseases.

Deepcell

Inferring cell health from structure and morphology is second nature for Maddison Masaeli, an engineer-scientist and chief executive officer at Deepcell. "The way that cells look has been integral to biology since the discovery of cells," she said. "It goes all the way from getting a sense about how cells are doing in a culture (whether they're healthy and living and thriving) all the way to diagnosing and staging cancer in a pathology or cytology setting."

When Masaeli worked as a postdoctoral researcher for Euan Ashley, a cardiovascular expert at Stanford University, she studied cardiomyopathy models. Her work relied heavily on phenotypic analysis to determine cardiomyocyte maturity and function. "The tools that we had available as scientists were extremely limited, even to the degree that we couldn't even measure a basic volume of cells," she said.

She sought to leverage computer vision and deep learning to help tackle those challenges, and after seeing their success, Masaeli cofounded Deepcell in 2017. She and her team developed an AI-based image analysis platform trained on large datasets of about two billion image data points gathered from cells originating from different tissues from both healthy people and patients with diseases.

According to Masaeli, their disease-agnostic platform can detect abnormalities in the morphology of any cell type, which enables a wide range of applications in research and medicine. Some diseases have an obvious connection to cell morphology (for example, tumor cells structurally differ from healthy cells), but finding unexpected connections in other diseases excites Masaeli. For example, in one customer study on aging, the model picked up morphological differences between cells from old patients and those from young patients. After exposing the old-patient cells to drugs being tested to reverse aging, Masaeli noted that the treated old-patient cells resembled the morphology of young-patient cells.

"This is just fascinating [to find] the most non-obvious applications that could be very minute changes in morphology that we didn't have tools to evaluate directly," said Masaeli.

Predictive AI in precision medicine

While AI use cases have sprouted across diverse basic research areas, from single cell studies to neural network models that decode language, most researchers have their eyes on the prize: improving human health.

Nardin Nakhla and her team at Simmunome intend to fix drug discoverys leaky workflow using machine-learning models.

Claudia Grégoire

Nardin Nakhla, a neuroscientist and chief technology officer at Simmunome, intends to fix the leaky drug discovery pipeline. "In the pharma industry, 90 percent of drugs fail, and only 10 percent make it all the way to the market. There's a lot of trial and error," said Nakhla.

A lot of work goes into drug screening and determining the right drug, but sometimes a drug doesn't work because the developers picked the wrong target or causal pathway. Nakhla and her team focus on the early stages of the workflow to minimize downstream losses. They trained their models on how biology works at the molecular level so that the models better understand pathways and can identify causal targets. The team can then simulate the downstream influence of a drug on a pathway and estimate its efficacy in stopping disease progression. "The idea is to provide this tool, so instead of [drug developers] trying five times before they get it right, maybe we can get it right from the first or second time," said Nakhla.

In preliminary tests, the team compared the efficacies of drugs tested in 24 oncology clinical trials with prediction data from their simulations. They found that their models predicted drug efficacies with almost 70 percent accuracy. The Simmunome team intends to conduct more tests in the near future to ensure robust predictions in other disease areas.

Recent breakthroughs in machine learning allow scientists to create protein molecules unlike any found in nature.

Ian C Haydon

While Nakhla hopes to streamline conventional drug discovery processes, Ideker envisions a new world in medicine that includes customized patient therapies. A patient with breast cancer, for instance, may possess up to 50 genetic mutations that alter her response to standard medications. Given that genomic signatures differ between patients, researchers and physicians need the right combination of AI models and genomic data to appropriately treat such a complex perturbation of the system, according to Ideker. His team develops algorithms that could analyze a patients genomic mutations to inform the right treatment course.10

"Essentially, what it's doing is determining or making a prediction on which drugs will produce a response to that patient, and which drugs are likely to not produce a response," said Ideker. In the future, as researchers build more sophisticated AI models, Ideker believes that there will be an armada of clinical trials where patients could avail themselves of personalized medicines catered to their genomes, maximizing the treatment response. "Why is it that Netflix is able to give you recommendations for what movies you're going to like to watch tonight, but your clinician can't get you AI-guided recommendations for therapies for how you should be treated?" questioned Ideker.

AI advances: proceed with caution

Today, there is no dearth of appreciation for AI in biology from researchers, investors, and the public. That was not always the case. Ideker recalled that being an early bird in this field was frustrating due to the uphill climb of peer acceptance. "If you've correctly identified what the gap is, and you are trying to push the field forward, there's always resistance," he said. "It's been hard, but it should be."

Although Ideker is happy that biologists are finally warming up to AI, he thinks that some may have veered too far. The hype has gotten to a point where researchers cannot start a new venture without mentioning AI, he joked.

"Everybody thinks that now they need to solve their problem one way or another with AI. And sometimes those problems might not be a great fit for AI and deep learning," agreed Masaeli, who experienced a similar skepticism-to-optimism journey. According to her, there is a lot that AI could help achieve in certain topics, but she urged researchers working in areas where large datasets aren't available to evaluate existing tools rather than forcing AI-based approaches.

Whether researchers use AI methodologies or any other techniques, they need to possess a deep understanding of their topic to succeed, according to Baker. "People were surprised that we transitioned so quickly from physically based models to deep learning models," he said. This was only possible because the researchers had worked on protein design for several years, understood the limitations and possibilities that came with the territory, and developed an intuition for the system, he explained. "If you understand the scientific problem, then AI is just another tool."

See more here:
Infusion of Artificial Intelligence in Biology - The Scientist

Anthropics launches artificial intelligence powered Zyler Virtual Try-On for Menswear solution Retail Technology … – Retail Technology Innovation…

Alexander Berend, CEO at Anthropics, says: "We believe that Zyler will transform the way men shop for clothes online. By combining advanced artificial intelligence technology with a user-friendly interface, we aim to provide an unparalleled virtual try-on experience.

"This innovative solution not only sets retailers apart in a competitive market but is also a valuable tool for driving customer engagement, satisfaction and loyalty."

"Expanding Zyler Virtual Try-On for menswear from an in-store installation, as, for example, currently used by Larusmiani, an Italian bespoke menswear retailer, to an online offering is a natural move," he adds.

In October of last year, Zyler reported on its work on the fashion rental site of UK retailer John Lewis, which is powered by HURR.

A "Try it On" feature on the site taps the company's technology.

Customers can see how an outfit looks on them online before they rent it. They upload a headshot and sizing information to try the outfit on virtually, from the comfort of their own home.

Key findings: over 30% of sales come from Zyler users; 16% of web visitors engage with Zyler; Average of 52 outfits viewed per user.

"These findings exceeded our expectations and demonstrate a huge customer response to our try-on technology," commented Berend.

There is a significant sales contribution from Zyler users, strong engagement among website visitors, and a high number of outfits viewed per Zyler user.

Danielle Gagola, Innovation Lead at John Lewis, said: "Whatever special event they might be attending, at John Lewis we're always looking for ways to help our customers look and feel their best.

"It has been so exciting to offer styling support in a digital environment using the Zyler technology, and the impressive results we've seen from the first few months shows it's resonating with our customers too.

"As we move into winter, we're looking forward to even more customers using the Virtual Try-On service to find that perfect Christmas party outfit."

Here is the original post:
Anthropics launches artificial intelligence powered Zyler Virtual Try-On for Menswear solution Retail Technology ... - Retail Technology Innovation...

Artificial intelligence: The future is already here, and businesses will have to play catch-up – The Irish Times

"Data is the new oil," said Barry Scannell, an AI law expert with William Fry, "and AI companies are going to be the new refineries."

He was addressing an audience of tech industry professionals in Trinity College at a summit on artificial intelligence.

"What we're doing now will have ripples for the future," Scannell continued, and as he spoke on a panel shared with OpenAI and Logitech, attendees diligently took notes.

Ireland's 300-odd indigenous AI companies, more than half of them based in and around Dublin, and their multinational competitors have seen an explosion in interest in AI over the past 18 months after products such as ChatGPT opened the world's eyes to the potential of the technology.

Tech companies big and small are scrambling to develop the best software to get an edge over rivals, and chip manufacturing companies, such as Nvidia, have seen huge market gains as they struggle to keep up with demand for microchips capable of running AI.

"Generative AI will drive a paradigm shift in our interaction with technology," Google's Sebastian Haire told the Dublin summit, adding that the world was entering a fourth industrial revolution spearheaded by AI. He joked that even his presentation was out of date in the time between designing and delivering it.

If industry representatives are to be believed, virtually every company is either integrating some form of AI into its systems or is beginning an exploratory phase to assess how it can help their staff improve productivity. By 2025, worldwide spend on AI will reach $204 billion (€188 billion), according to Haire. That's next year.

It is expected that trillions will be pumped into the technology in the years to come.

At the Trinity College conference, AI experts mingled, trying each other's branded cupcakes at industry stands, exchanging niceties and whatnot. But below the surface there's a race going on, and one in which nobody wants to fall behind.

"For companies that miss the boat, they're probably going to miss out on some significant productivity gains," said James Croke, business development officer at Version 1.

"To get the most from AI, companies need to get their data right first. They need to migrate to the cloud. It's a challenge that they need to play catch-up on but when you look at the potential impact of AI for SMEs in particular, it could allow them to leapfrog rivals on productivity. This could be a once-in-a-generation chance for SMEs."

Markham Nolan, a former journalist and co-founder of Noan, which helps companies utilise artificial intelligence, said AI was a time machine for small businesses.

"Our users save five to 10 hours a week by using AI," he said. "If you get the prompts right, whatever AI creates will be 80-90 per cent to where it needs to be: you just have to do the polishing."

"The AI has learned [our clients'] brand. It has learned to speak for their brand. So rather than having to sit down and write an email from scratch, they just say, 'I need this email to be about this, to this company,' and five seconds later they'll have a fully fleshed-out email in their voice."

"So many businesses hit a hurdle, a point they can't get over because they don't have the revenue. AI is a bridging technology that allows people to add capacity without cost, and to grow to become the companies they have the potential to be," Nolan said.

Speaking to those attending the conference, you get the sense that AI no longer rests its promise upon pie-in-the-sky transformations in the future, but offers deliverable ones in the short term. Still, there are some innovations that remain difficult to picture in Ireland any time soon.

Matthew Nicholson, a researcher at Trinity's Adapt Centre, was showcasing Swedish company Furhat's social robot. Looking a bit like a bust from Will Smith's I, Robot, with an internal projection displaying various facial expressions, it might act like a robot concierge in a hotel room, Nicholson suggested, though for now it seems a little too bizarre to wake up to.

Other AI applications were more obvious in their benefits for humanity. Unicef's AI lead, Dr Irina Mirkina, suggested that in the future AI will help predict cyclical natural disasters, disease outbreaks and how much aid is needed for emergencies at a pace far faster than any human can calculate.

Chris Hokamp, a scientist with Quantexa, sees a future in which there is likely to be another species of AIs.

Matthew Nicholson, a researcher at Trinity's Adapt Centre, showcasing Swedish company Furhat's social robot. Photograph: Conor Capplis

"AI should remain a tool for humans for as long as possible," he said, adding: "We don't want to give it emotions." There's a curious casualness about sweeping predictions such as Hokamp's. It's an almost unsurprising prediction in these circles nowadays.

He told the conference that humans must ensure that when AI reaches a superhuman level of intelligence, it must act ethically and adhere to regulations. He takes comfort in the knowledge that bad actors are unlikely to unleash a malevolent superintelligence on the world that they themselves would be unable to control.

Regulation was a key theme of the summit and there was much talk of the EU's proposed AI Act, which seeks for the first time to put in place a legal framework within which AI can operate in the bloc.

Onur Korucu, managing partner at GovernID, a privacy firm, said AI must be regulated in the EU not to stifle innovation but to put a frame around the innovation and democratise the use of AI.

Mark Kelly, the founder of AI Ireland, argued that a framework from which to develop AI would encourage companies to green-light new AI projects. His advice for Irish AI start-ups was "not to go competing with the likes of OpenAI with video tools ... but if [a smaller Irish company] can go down and solve a niche industry-specific issue [you will succeed]."

He spoke of one company that documented 20,000 client questions over the last 20 years, and created a language model around it to save time in answering queries. It led to a 25 per cent increase in clients within one year.

Skillnet Ireland's Tracey Donnery said the number of women in AI is small but is increasing, and she appeared optimistic about AI's ability to augment and change jobs rather than solely replacing them. "Hopefully it won't be as dramatic as described by the naysayers," she said.

There is, of course, also a dark side to AI. Dan Purcell, founder of Ceartas, an AI-powered company that takes down deepfakes from the internet, told the conference that sextortion is a growing issue, with young men increasingly accessing technology that uses AI to de-clothe women, an activity that also creates a larger data set for the software to improve its function.

On deepfakes, UCD's Dr Brendan Spillane believes the technology poses a serious risk to society, including the integrity of elections. He said states are using deepfakes to sow social distrust and unrest and that it was becoming more common for states to outsource the service to private companies.

On the same subject, Rigr AI founder Edward Dixon helps law enforcement agencies around the world, including An Garda Síochána, to find the likely location of sensitive media. When police receive media depicting crimes against children, for example, they might receive hundreds of thousands of files.

"Crimes like terrorism are noisy, public and generally have a lot of bystander imagery and accounts," he told The Irish Times. "The kind of crimes we focus on happen quietly."

The company's tools suggest plausible locations based on the photo or video's environment, languages spoken and names mentioned, ultimately saving hundreds of hours for investigators and sparing them the potential psychological damage of examining sensitive media.

It is just one more example of how the long-predicted AI revolution now appears to be here. Whether you are worried, enthused or just baffled, it is hard not to feel as though the AI tide is coming in relentlessly.

Read the original post:
Artificial intelligence: The future is already here, and businesses will have to play catch-up - The Irish Times

I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again – TechRadar

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now under youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them "apps", and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.

He likened GPTs to "bookmarking a prompt" within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers.

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on them to customize, add details, and choose which AI model you want to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). While you don't necessarily have to use a particular model for each task in your app, it might be that, for example, you should be using GPT-3.5 for fast chatbots or that PaLM would be better for math; however, MindStudio cannot, at least yet, recommend which model to use and when.
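
As a rough mental model of what such a block-based workflow amounts to, here is an illustrative Python sketch; it is not MindStudio's actual API, and the Step class, the prompt templates and the call_model() placeholder are all hypothetical. Each block is essentially a prompt template paired with a model choice, and later steps can reference earlier outputs.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    model: str          # e.g. "gpt-3.5-turbo", "palm-2", "llama-2", "gemini-pro"
    prompt_template: str

workflow = [
    Step("outline", "gpt-3.5-turbo", "Write a short outline for a blog post about {topic}."),
    Step("draft",   "gemini-pro",    "Expand this outline into a full draft:\n{outline}"),
]

def call_model(model: str, prompt: str) -> str:
    # Placeholder: a real app would call the chosen provider's API here.
    return f"[{model} output for prompt: {prompt[:40]}...]"

def run(workflow, topic: str) -> str:
    context = {"topic": topic}
    for step in workflow:
        output = call_model(step.model, step.prompt_template.format(**context))
        context[step.name] = output   # later steps can reference earlier outputs
    return context[workflow[-1].name]

print(run(workflow, "mobile photography tips"))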

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files on a single app). MindStudio uses the information to inform the AI, but will not be cutting and pasting information from any of those pages into your app responses.

Most of MindStudio's clients are in business, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
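
For a sense of what those two sliders typically map to under the hood, here is a generic chat-completion call using OpenAI's Python SDK with explicit temperature and length settings. This is an illustration of the kind of parameters involved, not how MindStudio itself is built; it assumes an OPENAI_API_KEY is set in the environment, and note that raw APIs usually cap length in tokens rather than characters.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Give me one tip for better phone photos."}],
    temperature=1.2,   # higher = more random and creative, lower = more predictable
    max_tokens=300,    # caps the length of the reply (in tokens, not characters)
)

print(response.choices[0].message.content)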

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes all you get with Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.

See the original post here:
I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again - TechRadar