Category Archives: Artificial Intelligence

New artificial intelligence software has worrisome implications – The Ticker

Art produced by artificial intelligence is popping up more and more in people's feeds, often without their knowledge.

This art can range from simple etchings to surrealist imagery. It can look like a bowl of soup or a monster or cats playing chess on a beach.

While a boom in AI with the capacity to create art has been electrifying the high-tech world, these new developments carry many worrisome implications.

Despite positive uses, newer AI systems have the potential to serve as tools of misinformation, embed bias and undervalue artists' skills.

At the beginning of 2021, advances in AI produced deep-learning models that can generate images simply by being fed a description of what the user is imagining.

These include OpenAI's DALL-E 2, Midjourney, Hugging Face's Craiyon, Meta's Make-A-Scene, Google's Imagen and many others.

With the help of skillful language and creative ideation, these tools marked a huge cultural shift and reduced the need for technical human labor.

Last year, OpenAI, a San Francisco-based AI company, launched DALL-E, a system that can create digital images simply by being fed a description of what the user wants to see. The name pays homage to WALL-E, the 2008 animated movie, and Salvador Dalí, the surrealist painter.

However, it didn't immediately capture the public's interest.

It was only when OpenAI introduced DALL-E 2, an improved version of DALL-E, that the technology began to gain traction.

DALL-E 2 was marketed as a tool for graphic artists, allowing them shortcuts for creating and editing digital images.

At the same time, restrictive measures were added to the software to prevent its misuse.

The tool is not yet available to everyone. It currently has 100,000 users globally, and the company hopes to make it accessible to at least 1 million in the near future.

"We hope people love the tool and find it useful. For me, it's the most delightful thing to play with we've created so far. I find it to be creativity-enhancing, helpful for many different situations, and fun in a way I haven't felt from technology in a while," OpenAI CEO Sam Altman wrote.

However, the new technology has many alarming implications. Experts say that if this sort of technology were to improve, it could be used to spread misinformation, as well as generate pornography or hate speech.

Similarly, AI systems might show bias against women and people of color, because their training data is pulled from image pools and online text that exhibit similar biases.

"You could use it for good things, but certainly you could use it for all sorts of other crazy, worrying applications, and that includes deep fakes," Professor Subbarao Kambhampati told The New York Times. Kambhampati teaches computer science at Arizona State University.

The company's content policy prohibits harassment, bullying, violence and the generation of sexual and political content. However, users who have access can still create any sort of imagery from the data set.

"It's going to be very hard to ensure that people don't use them to make images that people find offensive," AI researcher Toby Walsh told The Guardian.

Walsh warned that the public should generally be more wary of the things they see and read online, as fake or misleading images are currently flooding the internet.

The developers of DALL-E are actively trying to fight against the misuse of their technology.

For instance, researchers are attempting to mitigate potentially dangerous content in the training dataset, particularly imagery that might be harmful toward women.

However, this cleansing process also results in the generation of fewer images of women, contributing to an erasure of women from the model's output.

"Bias is a huge industry-wide problem that no one has a great, foolproof answer to," Miles Brundage, head of policy research at OpenAI, said. "So a lot of the work right now is just being transparent and upfront with users about the remaining limitations."

However, OpenAI is not the only company with the potential to wreak havoc in cyberspace.

While OpenAI did not disclose its code for DALL-E 2, a London technology startup, Stability AI, shared the code for a similar image-generating model for anyone to use, rebuilding the program with fewer restrictions.

The company's founder and CEO, Emad Mostaque, told The Washington Post he believes making this sort of technology public is necessary, regardless of the potential dangers. "I believe control of these models should not be determined by a bunch of self-appointed people in Palo Alto," he said. "I believe they should be open."

Mostaque is displaying an innately reckless strain of logic. Allowing these powerful AI tools to fall into the hands of just anyone will undoubtedly result in drastic, wide-scale consequences.

Technology, particularly software like DALL-E 2, can easily be misused to spread hate and misinformation, and therefore needs to be regulated before it's too late.


Mobile phone app accurately detects COVID-19 infection in people’s voices with the help of artificial intelligence – EurekAlert

[Image: Early identification of COPD exacerbations can be managed via the myCOPD mobile app. Credit: my mHealth Ltd]

Artificial intelligence (AI) can be used to detect COVID-19 infection in people's voices by means of a mobile phone app, according to research to be presented on Monday at the European Respiratory Society International Congress in Barcelona, Spain [1].

The AI model used in this research is more accurate than lateral flow/rapid antigen tests and is cheap, quick and easy to use, which means it can be used in low-income countries where PCR tests are expensive and/or difficult to distribute.

Ms Wafaa Aljbawi, a researcher at the Institute of Data Science, Maastricht University, The Netherlands, told the congress that the AI model was accurate 89% of the time, whereas the accuracy of lateral flow tests varied widely depending on the brand. Also, lateral flow tests were considerably less accurate at detecting COVID infection in people who showed no symptoms.

"These promising results suggest that simple voice recordings and fine-tuned AI algorithms can potentially achieve high precision in determining which patients have COVID-19 infection," she said. "Such tests can be provided at no cost and are simple to interpret. Moreover, they enable remote, virtual testing and have a turnaround time of less than a minute. They could be used, for example, at the entry points for large gatherings, enabling rapid screening of the population."

COVID-19 infection usually affects the upper respiratory tract and vocal cords, leading to changes in a person's voice. Ms Aljbawi and her supervisors, Dr Sami Simons, pulmonologist at Maastricht University Medical Centre, and Dr Visara Urovi, also from the Institute of Data Science, decided to investigate whether it was possible to use AI to analyse voices in order to detect COVID-19.

They used data from the University of Cambridge's crowd-sourced COVID-19 Sounds App, which contains 893 audio samples from 4,352 healthy and non-healthy participants, 308 of whom had tested positive for COVID-19. The app is installed on the user's mobile phone; participants report some basic information about demographics, medical history and smoking status, and are then asked to record some respiratory sounds. These include coughing three times, breathing deeply through their mouth three to five times, and reading a short sentence on the screen three times.

The researchers used a voice analysis technique called Mel-spectrogram analysis, which identifies different voice features such as loudness, power and variation over time.
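The mel scale behind such spectrograms maps frequency in hertz onto a perceptual pitch axis. As a rough sketch (the study's exact spectrogram parameters are not given in the article, so the commonly used O'Shaughnessy formula is assumed here):

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Map frequency in Hz to mels: mel = 2595 * log10(1 + f/700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    """Inverse mapping, mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# The scale is roughly linear below 1 kHz and logarithmic above it,
# mirroring human pitch perception; 1000 Hz lands near 1000 mels.
print(round(hz_to_mel(1000)))            # 1000
print(round(mel_to_hz(hz_to_mel(440))))  # 440
```

A mel-spectrogram applies a bank of triangular filters spaced evenly on this scale to each short-time power spectrum, which is why it captures loudness and its variation over time in a perceptually meaningful way.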

"In this way we can decompose the many properties of the participants' voices," said Ms Aljbawi. "In order to distinguish the voices of COVID-19 patients from those who did not have the disease, we built different artificial intelligence models and evaluated which one worked best at classifying the COVID-19 cases."

They found that one model, called Long Short-Term Memory (LSTM), outperformed the others. LSTM is based on neural networks, which mimic the way the human brain operates and recognises underlying relationships in data. It works with sequences, which makes it suitable for modelling signals collected over time, such as the voice, because of its ability to store data in its memory.
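The study's trained network is not public, but the LSTM mechanism described above, a memory cell updated through input, forget and output gates at each time step, can be sketched in plain NumPy. The weights and dimensions below are random placeholders, not the researchers' parameters:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. x is the current input frame (e.g. one
    mel-spectrogram column); h_prev and c_prev are the previous hidden
    and cell states. W, U, b stack the four gates' parameters."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b       # pre-activations for all four gates
    i = sigmoid(z[0 * n:1 * n])      # input gate: admit new information
    f = sigmoid(z[1 * n:2 * n])      # forget gate: decay old memory
    g = np.tanh(z[2 * n:3 * n])      # candidate cell update
    o = sigmoid(z[3 * n:4 * n])      # output gate: expose memory
    c = f * c_prev + i * g           # cell state carries long-term memory
    h = o * np.tanh(c)               # hidden state passed to the next step
    return h, c

rng = np.random.default_rng(0)
d_in, d_hid = 64, 32                 # e.g. 64 mel bins, 32 hidden units
W = rng.normal(0.0, 0.1, (4 * d_hid, d_in))
U = rng.normal(0.0, 0.1, (4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)

h = np.zeros(d_hid)
c = np.zeros(d_hid)
for frame in rng.normal(size=(10, d_in)):   # ten frames of toy "audio"
    h, c = lstm_step(frame, h, c, W, U, b)
print(h.shape)  # (32,)
```

A real classifier would feed the final hidden state into a small dense layer with a sigmoid output to score COVID-positive versus negative.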

Its overall accuracy was 89%, its ability to correctly detect positive cases (the true positive rate or sensitivity) was 89%, and its ability to correctly identify negative cases (the true negative rate or specificity) was 83%.

"These results show a significant improvement in the accuracy of diagnosing COVID-19 compared to state-of-the-art tests such as the lateral flow test," said Ms Aljbawi. "The lateral flow test has a sensitivity of only 56%, but a higher specificity rate of 99.5%. This is important as it signifies that the lateral flow test is misclassifying infected people as COVID-19 negative more often than our test. In other words, with the AI LSTM model, we could miss 11 out of 100 cases who would go on to spread the infection, while the lateral flow test would miss 44 out of 100 cases."

The high specificity of the lateral flow test means that only one in 100 people would be wrongly told they were COVID-19 positive when, in fact, they were not infected, while the LSTM test would wrongly diagnose 17 in 100 non-infected people as positive. However, since this test is virtually free, it is possible to invite people for PCR tests if the LSTM tests show they are positive.
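The miss rates in these two paragraphs follow directly from the definitions of sensitivity and specificity. A quick check, using a hypothetical cohort of 100 infected and 100 healthy people scaled to the article's figures:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# LSTM model: of 100 infected, 89 flagged (TP) and 11 missed (FN);
# of 100 healthy, 83 cleared (TN) and 17 wrongly flagged (FP).
lstm_sens, lstm_spec = sens_spec(tp=89, fn=11, tn=83, fp=17)

# Lateral flow test: misses 44 of 100 infected, wrongly flags 0.5 of 100.
lft_sens, lft_spec = sens_spec(tp=56, fn=44, tn=99.5, fp=0.5)

print(lstm_sens, lstm_spec)  # 0.89 0.83
print(lft_sens, lft_spec)    # 0.56 0.995
```

This makes the trade-off concrete: the AI model misses fewer infections but raises more false alarms, which is why the article suggests confirming positives with PCR.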

The researchers say their results need to be validated on larger numbers of participants. Since the start of this project, 53,449 audio samples from 36,116 participants have been collected and can be used to improve and validate the accuracy of the model. They are also carrying out further analysis to understand which parameters in the voice influence the AI model.

In a second study, Mr Henry Glyde, a PhD student in the faculty of engineering at the University of Bristol, showed that AI could be harnessed via an app called myCOPD to predict when patients with chronic obstructive pulmonary disease (COPD) might suffer a flare-up of their disease, sometimes called acute exacerbation. COPD exacerbations can be very serious and are associated with increased risk of hospitalisation. Symptoms include shortness of breath, coughing and producing more phlegm (mucus).

"Acute exacerbations of COPD have poor outcomes. We know that early identification and treatment of exacerbations can improve these outcomes, and so we wanted to determine the predictive ability of a widely used COPD app," he said.

The myCOPD app is a cloud-based interactive app, developed by patients and clinicians, and is available in the UK's National Health Service. It was established in 2016 and so far has over 15,000 COPD patients using it to help them manage their disease.

The researchers collected 45,636 records from 183 patients between August 2017 and December 2021 [3]. Of these, 45,007 were records of stable disease and 629 were exacerbations. Exacerbation predictions were generated one to eight days before a self-reported exacerbation event. Mr Glyde and colleagues trained AI models on 70% of these data and tested them on the remaining 30%.

The patients were high engagers, who had been using the app weekly over months or even years to record their symptoms and other health information, record medication, set reminders, and have access to up-to-date health and lifestyle information. Doctors can assess the data via a clinician dashboard, enabling them to provide oversight, co-management and remote monitoring.

"The most recent AI model we developed has a sensitivity of 32% and a specificity of 95%. This means that the model is very good at telling patients when they are not about to experience an exacerbation, which may help them to avoid unnecessary treatment. It is less good at telling them when they are about to experience one. Improving this will be the focus of the next phase of our research," said Mr Glyde.

Speaking before the congress, Dr James Dodd, Associate Professor in respiratory medicine at the University of Bristol and project lead, said: "To our knowledge, this study is the first of its kind to model real world data from COPD patients, extracted from a widely deployed therapeutic app. As a result, exacerbation predictive models generated from this study have the potential to be deployed to thousands more COPD patients after further safety and efficacy testing. It would empower patients to have more autonomy and control over their health. This is also a significant benefit for their doctors, as such a system would likely reduce patient reliance on primary care. In addition, better-managed exacerbations could prevent hospitalisation and alleviate the burden on the healthcare system. Further study is required into patient engagement to determine what level of accuracy is acceptable and how an exacerbation alert system would work in practice. The introduction of sensing technologies may further enhance monitoring and improve the predictive performance of models."

One of the limitations of the study is the small number of frequent users of the app. The current model requires a patient to input a COPD assessment test score, fill out their medication diary and then accurately report an exacerbation days later. Usually, only patients who are highly engaged with the app, using it daily or weekly, can provide the amount of data needed for the AI modelling. In addition, because there are far more days on which users are stable than days on which they are having an exacerbation, there is a significant imbalance between the exacerbation and non-exacerbation data available. This makes it even harder for the models to correctly predict events after training on such imbalanced data.
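A standard remedy for this kind of imbalance (one common option, not necessarily what the Bristol team used) is inverse-frequency class weighting, which makes the 629 exacerbation records contribute as much total training loss as the 45,007 stable records:

```python
def inverse_frequency_weights(counts):
    """Weight each class by n_total / (n_classes * n_class), so every
    class contributes equally to the overall training loss."""
    total = sum(counts.values())
    k = len(counts)
    return {label: total / (k * n) for label, n in counts.items()}

# Record counts reported in the study.
weights = inverse_frequency_weights({"stable": 45007, "exacerbation": 629})
print(round(weights["stable"], 3))        # 0.507: common class down-weighted
print(round(weights["exacerbation"], 2))  # 36.28: rare class up-weighted ~72x
```

Each rare exacerbation record then counts roughly 72 times as much as a stable record, so the total contribution of the two classes is balanced.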

"A recent partnership between patients, clinicians and carers to set research priorities in COPD found that the highest-rated question was how to identify better ways to prevent exacerbations. We have focused on this question, and we will be working closely with patients to design and implement the system," concluded Mr Glyde. [4]

Chair of the ERS Science Council, Professor Chris Brightling, is the National Institute for Health and Care Research (NIHR) Senior Investigator at the University of Leicester, UK, and was not involved with the research. He commented: "These two studies show the potential of artificial intelligence and apps on mobile phones and other digital devices to make a difference in how diseases are managed. Having more data available for training these artificial intelligence models, including appropriate control groups, as well as validation in multiple studies, will improve their accuracy and reliability. Digital health using AI models presents an exciting opportunity and is likely to impact future health care."

(ends)

[1] Abstract no: OA1626, Developing a multivariate prediction model for the detection of COVID-19 from crowd-sourced respiratory voice data, presented by Wafaa Aljbawi in Digital medicine for COVID-19 session, 08.15-09.30 hrs CEST on Monday 5 September 2022, https://k4.ersnet.org/prod/v2/Front/Program/Session?e=377&session=14843

Also available as a pre-print paper at https://arxiv.org/ from 5 September: Developing a multi-variate prediction model for the detection of COVID-19 from crowd-sourced respiratory voice data, by Wafaa Aljbawi, Sami O. Simons and Visara Urovi.

[2] Abstract no. PA2728, Exacerbation predictive modelling using real-world data from the myCOPD app, presented by Henry Glyde, thematic poster Digital health interventions in respiratory practice, 13.00-14.00 hrs CEST on Monday 5 September 2022, https://k4.ersnet.org/prod/v2/Front/Program/Session?e=377&session=14775

[3] Data have been updated after the time of abstract submission. Please use the data in this release as they are the most recent.

[4] Research priorities for exacerbations of COPD. The Lancet Respiratory Medicine. 2021;9(8):824-826. DOI: https://doi.org/10.1016/S2213-2600(21)00227-7

Observational study

People

5-Sep-2022


Three Keys to Implementing Artificial Intelligence in Drug Discovery – Pharmacy Times

AI-based technologies are increasingly being used for things such as virtual screening, physics-based biological activity assessment, and drug crystal-structure prediction.

Despite the buzz around artificial intelligence (AI), most industry insiders know that the use of machine learning (ML) in drug discovery is nothing new. For more than a decade, researchers have used computational techniques for many purposes, such as finding hits, modeling drug-protein interactions, and predicting reaction rates.

What is new is the hype. As AI has taken off in other industries, countless start-ups have emerged promising to transform drug discovery and design with AI-based technologies for things such as virtual screening, physics-based biological activity assessment, and drug crystal-structure prediction.

Investors have made huge bets that these start-ups will succeed. Investment reached $13.8 billion in 2020, and more than one-third of large-pharma executives report using AI technologies.

Although a few AI-native candidates are in clinical trials, around 90% remain in discovery or preclinical development, so it will take years to see if the bets pay off.

Artificial Expectations

Along with big investments come high expectations: drug the undruggable, drastically shorten timelines, virtually eliminate wet-lab work. Insider Intelligence projects that discovery costs could be reduced by as much as 70% with AI.

Unfortunately, it's just not that easy. The complexity of human biology precludes AI from becoming a magic bullet. On top of this, data must be plentiful and clean enough to use.

Models must be reliable, prospective compounds need to be synthesizable, and drugs have to pass real-life safety and efficacy tests. Although this harsh reality hasn't slowed investment, it has led to fewer companies receiving funding, to devaluations, and to the discontinuation of some loftier programs, such as IBM's Watson AI for drug discovery.

This begs the question: Is AI for drug discovery more hype than hope? Absolutely not.

Do we need to adjust our expectations and position for success? Absolutely, yes. But how?

Three Keys to Implementing AI in Drug Discovery

Implementing AI in drug discovery requires reasonable expectations, clean data, and collaboration. Let's take a closer look.

1. Reasonable Expectations

AI can be a valuable part of a company's larger drug discovery program. But, for now, it's best thought of as one option in a box of tools. Clarifying when, why, and how AI is used is crucial, albeit challenging.

Interestingly, investment has largely gone to companies developing small molecules, which lend themselves to AI because they're relatively simple compared to biologics, and because there are decades of data upon which to build models. There is also great variance in the ease of applying AI across discovery, with models for early screening and physical-property prediction seemingly easier to implement than those for target prediction and toxicity assessment.

Although the potential impact of AI is incredible, we should remember that good things take time. Pharmaceutical Technology recently asked its readers to project how long it might take for AI to reach its peak in drug discovery, and by far the most common answer was more than nine years.

2. Clean Data

"The main challenge to creating accurate and applicable AI models is that the available experimental data is heterogeneous, noisy, and sparse, so appropriate data curation and data collection is of the utmost importance."

This quote, from a 2021 Expert Opinion on Drug Discovery article, speaks wonderfully to the importance of collecting clean data. While it refers to ADMET and activity prediction models, the assertion also holds true in general: AI requires good data, and lots of it.

But good data are hard to come by. Publicly available data can be inadequate, forcing companies to rely on their own experimental data and domain knowledge.

Unfortunately, many companies struggle to capture, federate, mine, and prepare their data, perhaps due to skyrocketing data volumes, outdated software, incompatible lab systems, or disconnected research teams. Success with AI will likely elude these companies until they implement technology and workflow processes that address these gaps.

3. Collaboration

Companies hoping to leverage AI need a full view of all their data, not just bits and pieces. This demands a research infrastructure that lets computational and experimental teams collaborate, uniting workflows and sharing data across domains and locations. Careful process and methodology standardization is also needed to ensure that results obtained with the help of AI are repeatable.

Beyond collaboration within organizations, key industry players are also collaborating to help AI reach its full potential, making security and confidentiality key concerns. For example, many large pharma companies have partnered with start-ups to help drive their AI efforts.

Collaborative initiatives, such as the MELLODDY Project, have formed to help companies leverage pooled data to improve AI models, and vendors such as Dotmatics are building AI models using customers' collective experimental data.

About the Author

Haydn Boehm is Director of Product Marketing at Dotmatics, a leader in R&D scientific software connecting science, data, and decision-making. Its enterprise R&D platform and scientists' favorite applications drive efficiency and accelerate innovation.


The pharma industry found it easier to fill artificial intelligence vacancies in Q2 2022 – Pharmaceutical Technology

Artificial intelligence related jobs that were closed during Q2 2022 had been online for an average of 30 days when they were taken offline.

This was a decrease compared to the equivalent figure a year earlier, indicating that the required skillset for these roles has become easier to find in the past year.

Artificial intelligence is one of the topics that GlobalData, our parent company and the source of the data for this article, has identified as a key disruptive technology force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

On a regional level, these roles were hardest to fill in North America, with related jobs that were taken offline in Q2 2022 having been online for an average of 34 days.

The next most difficult place to fill these roles was found to be the Middle East and Africa, while Europe was in third place.

At the opposite end of the scale, jobs were filled fastest in Asia-Pacific, with adverts taken offline after 18.1 days on average.

While the pharmaceutical industry found it easier to fill these roles in the latest quarter, these companies also filled artificial intelligence vacancies faster than the wider market, with ads online for 25% less time on average than similar jobs across the entire jobs market.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.



Equality watchdog takes action to address discrimination in use of artificial intelligence – PoliticsHome

The use of artificial intelligence by public bodies is to be monitored by Britain's equality regulator for the first time to ensure technologies are not discriminating against people.

There is emerging evidence that bias built into algorithms can lead to less favourable treatment of people with protected characteristics such as race and sex.

The Equality and Human Rights Commission has made tackling discrimination in AI a major strand of its new three-year strategy.

It is today publishing new guidance to help organisations avoid breaches of equality law, including the public sector equality duty (PSED). The guidance gives practical examples of how AI systems may be causing discriminatory outcomes.

From October, the Commission will work with a cross-section of 30 local authorities and other public bodies in England and Scotland to understand how they are using AI to deliver essential services, such as benefits payments, amid concerns that automated systems are inappropriately flagging certain families as a fraud risk.

The EHRC is also exploring how best to use its powers to examine how organisations are using facial recognition technology, following concerns that the software may be disproportionately affecting people from ethnic minorities.

These interventions will improve how organisations use AI and encourage public bodies to take action to address any negative equality and human rights impacts.

Marcial Boo, chief executive of the EHRC, said:

"While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.

"Many organisations may not know they could be breaking equality law, and people may not know how AI is used to make decisions about them.

"It's vital for organisations to understand these potential biases and to address any equality and human rights impacts.

"As part of this, we are monitoring how public bodies use technology to make sure they are meeting their legal responsibilities, in line with our guidance published today. The EHRC is committed to working with partners across sectors to make sure technology benefits everyone, regardless of their background."

The monitoring projects will last several months and will report initial findings early next year.

The Artificial Intelligence in Public Services guidance advises organisations to consider how the PSED applies to automated processes, to be transparent about how the technology is used, and to keep systems under constant review.

In the private sector, the EHRC is currently supporting a taxi driver in a race discrimination claim regarding Uber's use of facial recognition technology for identification purposes.


Save the date: Artificial Intelligence and Emerging Technologies Partnership meeting #2 on September 22 – United States Patent and Trademark Office

Published on: 08/31/2022, 15:03


The Artificial Intelligence (AI) and Emerging Technologies (ET) Partnership Series will hold its next meeting, AI/ET Partnership Series #2: AI & Biotech, virtually and in person at the United States Patent and Trademark Office's (USPTO) Silicon Valley Regional Office on September 22, 2022, from 9:30 a.m. to noon PT. During this meeting, panelists from industry and the USPTO will explore various patent policy issues with respect to the biotech industry, including:

A full agenda with speakers will be posted prior to the event. This event is free and open to the public, so register early to attend in person or virtually.



From Google Home to Alexa, Artificial Intelligence to play large in trading of cryptocurrencies – The Financial Express

From Google Home to Alexa, the role of artificial intelligence (AI) seems to have grown over the years. It is now believed that AI will play a greater role when it comes to crypto being traded. "As a greater number of financial institutions start offering crypto-assets as a wealth management offering, AI-supported trading will become more popular. There are over 4,000 cryptocurrencies, and even the oldest coins show large fluctuations in their prices. Likewise, Bitcoin's 30-day volatility index is twice its value from 2016 (as per data published on buybitcoinworldwide)," Saurav Raaj, founder and director of Wize, a non-fungible token (NFT) infrastructure-for-businesses company, told FE Digital Currency.

As per industry observers, AI is used in intelligent trading systems for stock market prediction and currency price prediction. As per a report in IEEE Access, Generalised Autoregressive Conditional Heteroskedasticity (GARCH) is a time-series statistical model used for understanding volatility. AI is also applied to market sentiment analysis. "Unlike traditional stocks, discussions among trading communities and social media reports can drive trading decisions. AI with natural language processing (NLP) can analyse market and community sentiments and provide valuable insights to the traders," Raaj added.
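As a hedged sketch of the GARCH family mentioned above, a GARCH(1,1) model updates each day's conditional variance from the previous day's squared return and variance. The parameters and returns below are illustrative placeholders, not fitted values:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) recursion: sigma2[t] = omega + alpha * r[t-1]**2
    + beta * sigma2[t-1]. With alpha + beta < 1 the process is
    mean-reverting toward the long-run level omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)          # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(42)
r = rng.normal(0.0, 0.02, 500)           # placeholder daily returns
sig2 = garch11_variance(r, omega=1e-6, alpha=0.1, beta=0.85)
print(np.all(sig2 > 0))  # True: variance estimates stay positive
```

Because alpha weights recent shocks and beta carries persistence, the model captures the volatility clustering that makes crypto prices so much choppier than most equities.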


It is believed that trading decisions are often based on behavioural biases that cause traders to act on emotion, which can lead to mistakes while processing information. "AI-guided crypto trading is unlikely to get rid of emotional factors; it is likely to amplify them via machine learning. A deliberate fix in AI programmes to avoid trading at large corrections and surges may help. Still, it is also likely to slow the usual stop-loss or take-profit exercise," said Liquing Yu, the Economist Intelligence Unit's (EIU) analyst on India, Indonesia, and Singapore.

Furthermore, industry experts noted that, if properly implemented and trained, AI can help eliminate human bias. According to Vikram Pandya, director of Fintech at SP Jain, "it definitely helps make scientific decisions backed by data and not by impulse."

According to a Business Insider report from June 2019, there are three areas where AI is used in banking: conversational banking, anti-fraud detection and risk assessment, and credit underwriting. "AI-based systems can help to process trading data, which can assist traders to make better investment decisions. AI with machine learning (ML) can provide safeguards against such attacks and reduce damages in real time. In extreme cases, it can be utilised to trigger circuit breakers and even stop trading," added Raaj.




The future of AI in music is now. Artificial Intelligence was in the music industry long before FN Meka. – Grid

Music has forever been moved by technology: from the invention of the phonograph, to Bob Dylan pivoting from acoustic to electric guitar, to the ubiquity of streaming platforms and, most recently, an ambitious attempt at crossing AI with commercial music.

FN Meka, introduced in 2021 as a virtual rapper whose lyrics and beats were constructed with proprietary AI technology, had a promising rise.

But just days after he signed with Capitol Records (the label that carried The Beatles, Nat King Cole and The Beach Boys) and released his debut track, "Florida Water," the record company dropped him. His pink slip was a response, in part, to fans and activists widely criticizing his image (a digital avatar with face tattoos, green braids and a golden grill) and decrying his blend of stereotypes and slur-infused lyrics.

The AI artist, voiced by a real person and created by a company called Factory New, was not, technologically, a groundbreaking experiment. But it was a needle-mover for a discussion that is imminent within the industry: How AI will continue to shape how we experience music.

In 1984, classical trombonist George Lewis used three Apple II computers to program Yamaha digital synthesizers to improvise along with a live quartet. The resulting record, a syrupy and spacey co-creation of computer and human musicians, was titled "Rainbow Family" and is considered by many the first instance of artificially intelligent music.

In the years since, advances in mixing boards popularized the practice of sampling and interpolation, igniting debates about remixing old songs to make new ones (art form or cheap trick?), and Auto-Tune became a central tool in singers' recorded and onstage performances.

FN Meka isn't the only AI artist out there. Some have been introduced, and lasted, with less commercial backing. YONA, a virtual singer-songwriter and AI poet made by Ash Koosha, has performed live at music festivals around the globe, including MUTEK in Montreal, Rewire in the Netherlands and Barbican in the U.K.

In fact, the most crucial and successful partnerships between AI and music have been under the hood, said Patricia Alessandrini, a composer, sound artist and researcher at Stanford University's Center for Computer Research in Music and Acoustics.

During the pandemic, the music world leaned heavily on digital tools to overcome challenges of sharing and playing music while remote, Alessandrini said. JackTrip Virtual Studio, for example, was an online platform used to teach university music lessons while students were remote. It minimized time delay, making audiovisual synchronicity much easier, and was born from machine learning sound research.

And for producers who deal with large music files and digital compression, AI can play a role in signal processing, Alessandrini said. This is important for sound engineers and musicians alike, saving time and helping them more smoothly create, or export, big records.

There are beneficial applications for technology and music to intersect when it comes to accessibility, she said. Instruments have been made using AI to require less strength or pressure in order to generate sound, for example allowing those with injuries or disabilities to play with eye movements alone.

Alessandrini's own projects include the Piano Machine (which uses computers and voltages as fingers to create new sounds) and Harp Fingers, a technology that allows users to play a harp without physically touching it.

On a meta level, algorithms are the ubiquitous drivers of online streaming platforms: Spotify, Apple Music, SoundCloud, YouTube and others are constantly using machine learning, in less transparent ways, to personalize playlists, releases, lists of nearby concerts and music recommendations.
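At the core of much of that personalization is a measure of similarity between listening profiles. A minimal sketch of collaborative filtering by cosine similarity, with made-up play-count vectors (real platforms use far richer models and signals):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length play-count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_similar(target, profiles):
    """Order (name, vector) listener profiles by similarity to the target;
    a system would then recommend the nearest neighbours' top tracks."""
    return sorted(profiles, key=lambda p: -cosine(target, p[1]))
```

The "less transparent" part comes from how the vectors are built, from skips, replays, time of day and much more, rather than from the ranking step itself.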

Less agreed upon is the concept of an AI artist itself. Reactions have been split among those loyal to the humanity of art; some who argued that if certain artists were indistinguishable from AI, then they deserved to be replaced; others who invited the newness; and many whose feelings fall somewhere in between.

"With any cultural form, part of what you're dealing with are people's expectations for what things sound like or what an artist looks like," Oliver Wang, a music writer and sociology professor at California State University, Long Beach, told Grid.

Some experts argue that those questions leave out a critical point: Whatever the technology, there is always a human behind the work and that should count.

"Sometimes people don't know or see how much human work is behind artificial intelligence," said Adriana Amaral, a professor at UNISINOS in Brazil and an expert in pop culture, influencers and fan studies. "It's a team of people: developers, programmers, designers, people from production and marketing."

But this misunderstanding isn't always the fault of the public, said Alessandrini. It often comes down to marketing. "It's more exciting to say that something's made entirely by AI," Alessandrini said. This was how FN Meka was marketed and promoted online: as an AI artist. But while his lyrics, sound and beats were AI-generated, they were then performed by a human and animated, cartoon-style.

If it sounds strange that one would become a dedicated fan of a virtual persona, it shouldnt, Amaral said. The world of competitive video gaming, which is nothing without its on-screen characters, is a multibillion-dollar industry that sells out arenas worldwide.

Still, music purists, audiophiles and anyone who appreciates music as an experience rather than just entertainment may well resist AI musicians. In particular, Alessandrini said, AI is better at generating content quickly and copying genres, though it is unable to innovate new ones, a result of its models being trained largely on whatever music already exists.

"When a rap artist has these different influences and their own specific cultural experience, then that's the kind of magical thing that they use to create," Alessandrini said. "You can say that Bobby Shmurda is one of the first Brooklyn drill artists because of a particular song. So that's a [distinctly] human capacity, compared to AI."

Alessandrini likens this artistic experience to the advancements of AI in medicine: robotic technologies used during surgeries that are more efficient and mitigate the risk of human error. But, she said, there are some things that humans do better: caring for a patient, understanding their suffering.

It's hard to imagine AI vocals ever reaching the emotional and beautifully human depths of, say, a Nina Simone or Ann Peebles, or channeling the authentic camaraderie and bounce of a group like OutKast.

In 2017, the French government commissioned mathematician and politician Cédric Villani to lay ambitious groundwork for the country's artificially intelligent (AI) future.

His strategy, one that considered economics, ethics and education, foremost straddled the thinning line between creation and consumption.

"The division between the noncreative machine and the creative human is ever less clear-cut," he wrote. Creativity, he went on to say, was no longer just an artist's skill; it was a necessary tool for a world of co-inhabitance, machine and human together.

Is that what is happening?

One can't talk about music on grand scales without also talking about money. Though FN Meka was a failure, AI has strong ties to the music sphere that won't be broken because one AI rapper got cut from a label. And it feels inevitable that another big record company or music festival will give it a go.

Why? It might all come down to cost, say experts and music listeners who run the cynicism gamut.

Wang said he has a sneaking suspicion that record companies and executives see AI musicians as a way to save money on royalty payments and travel costs moving forward.

Beyond the money-hungry music industry, there is also room for a lot of good moving forward with AI, said Amaral. She hopes FN Meka's image, and how he was received, was a wake-up call for whatever AI artist inevitably comes next. She also mentioned YONA, which she saw in concert in Japan, as a thin, white, able-bodied pop star not unlike many who dominate the music scene today.

"We have all the technological tools to make someone who could be green, or fat, or any way we like, and we still are stuck on these patterns," she said.

"What will the landscape look like five or 10 or 15 years from now?" Wang asks. Pop music, despite people's cynicism, rarely stays static. It's constantly changing, and perhaps these computer-based attempts at creating artists will be part of that change.

Thanks to Dave Tepps for copy editing this article.

See more here:
The future of AI in music is now. Artificial Intelligence was in the music industry long before FN Meka. - Grid

Artificial Intelligence-powered (AI) Spatial Biology Market to Record an Exponential CAGR by 2030 – Exclusive Report by InsightAce Analytic -…

JERSEY CITY, N.J., Aug. 30, 2022 /PRNewswire/ -- InsightAce Analytic Pvt. Ltd. announces the release of market assessment report on "Global Artificial Intelligence-powered (AI) Spatial Biology Market By Data Analyzed (DNA, RNA, and Protein) By Application (Translation Research, Drug Discovery and Development, Single Cell Analysis, Cell Biology, Clinical Diagnostics, and Other Applications) Technology Trends, Industry Competition Analysis, Revenue and Forecast Till 2030"


According to the latest research by InsightAce Analytic, the global artificial intelligence-powered (AI) spatial biology market is expected to record a promising CAGR of 16.4% over the period 2022-2030. By region, North America dominates the global market, contributing the largest share of revenue.
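For readers who want to sanity-check such projections, CAGR is simply (end / start)^(1/years) − 1. A small Python sketch; the 100.0 starting value is hypothetical, since the release does not disclose base-year revenue:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value, rate, years):
    """Forward-project a value growing at a constant annual rate."""
    return start_value * (1 + rate) ** years

# At 16.4% per year, a market grows roughly 3.4-fold over the
# eight years from 2022 to 2030.
```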

Request for Sample Pages: https://www.insightaceanalytic.com/request-sample/1358

In recent years, enormous advances in biological research and automated molecular biology have been made using artificial intelligence (AI). AI can effectively assist in specific areas of biology, which may enable novel biotechnology-derived medicines and facilitate the deployment of precision medicine approaches. It is predicted that applying AI to cell-by-cell maps of gene or protein activity will lead to major inventions in spatial biology. The next significant step in the comprehension of biology might be achieved by incorporating spatially resolved data. When applied to gene expression, spatial transcriptomics (spRNA-Seq) combines the strengths of conventional histopathology with those of single-cell gene expression profiling. Mapping specific disease pathologies is made possible by linking the spatial arrangement of molecules in cells and tissues with their gene expression state. Machine learning can generate images of gene transcripts at sub-cellular resolution and decipher molecular proximities from sequencing data.

Artificial intelligence in spatial biology has seen rapid development in sequencing and analysis, drug discovery, and disease diagnosis. Increased interest can be attributed to the widespread use of similar technologies in other sectors and the growing adoption of AI generally. Market expansion can also be attributed to government spending on research around the world. The increasing demand for novel analytical tools and subsequent funding has resulted in the market launch of high-throughput technology. However, despite the availability of new high-complexity spatial imaging methods, it is still challenging and labour-intensive to extract, analyze, and interpret biological information from these images.


In 2021, the market was led by North America. Technological developments, the existence of a well-established research infrastructure and key players, and increased spending in drug discovery R&D are all factors contributing to the expansion of the regional market. Due to the region's large and growing demand from research and the pharmaceutical industry, North America is currently the largest market for artificial intelligence applications in spatial omics.

The major players operating in the artificial intelligence-powered (AI) spatial biology market are Nucleai, Inc., Reveal Biosciences, Inc., Alpenglow Biosciences, SpIntellx, Inc., ONCOHOST, Pathr.ai, Phenomic AI, BioTuring Inc., Indica Labs, Rebus Biosystems, Inc., Genoskin, Algorithmic Biologics, Castle Biosciences, Inc. (TissueCypher), and other prominent players. The leading spatial omics solution providers are focusing on strategies such as investments in innovation, partnerships, collaborations, mergers, and agreements with AI-based service providers.

Curious about the full report? Get Report Details @ https://www.insightaceanalytic.com/enquiry-before-buying/1358

Key Developments In The Market

In Aug 2022, SpIntellx, Inc. and iCura Diagnostics announced a collaboration to revolutionize precision oncology by unlocking the power of genomic, proteomic, and transcriptomic data through advanced spatial analytics. The new alliance combines SpIntellx's software-as-a-service (SaaS) solutions for precision pathology applications, which leverage unbiased spatial analytics and explainable AI, with iCura Diagnostics' technical CRO expertise in accelerating immunotherapy and targeted therapy development.

In July 2022, Nucleai announced a partnership with Sirona DX, a US-based contract research company. The alliance intends to further the AI-driven identification of novel spatial biomarkers: indications of solid tumour recurrence, treatment response, and prognosis. Nucleai is developing an artificial intelligence-based precision oncology platform for research and therapy decisions.

In May 2022, OncoHost announced the completion of a Series C investment round worth $35 million. The financing will be used to expand OncoHost's ongoing multicenter PROPHETIC trial utilizing PROphet, the company's machine learning-based host response profiling technology, and to support the upcoming commercial launch of the precision oncology diagnostic solution in the United States.

In March 2022, CellCarta announced the release of imageDx PRISM to significantly improve the spatial biology data that can be obtained from Akoya Bioscience's multiplex immunofluorescence (mIF) tests. imageDx PRISM from Reveal Biosciences applies the most recent AI advancements to novel pattern discovery and spatial biomarker characterization, providing patient insights that were previously difficult to discover.

In Jan 2022, Single Cell Discoveries and BioTuring announced a partnership to advance the field of single-cell sequencing. Single Cell Discoveries will integrate BioTuring's single-cell data processing solution into its single-cell sequencing services as part of the agreement. The cooperation intends to bridge the gap between wet-lab services for single-cell sequencing and solutions for single-cell data analysis.

For More Customization @ https://www.insightaceanalytic.com/customisation/1358

Market Segments

Global Artificial Intelligence-powered (AI) Spatial Biology Market, by Data Analyzed, 2022-2030 (Value US$ Mn)

Global Artificial Intelligence-powered (AI) Spatial Biology Market, by Application, 2022-2030 (Value US$ Mn)

Global Artificial Intelligence-powered (AI) Spatial Biology Market, by Region, 2022-2030 (Value US$ Mn)

North America

Europe

Asia Pacific

Latin America

Middle East & Africa

North America Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Europe Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Germany

France

Italy

Spain

Russia

Rest of Europe

Asia Pacific Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

India

China

Japan

South Korea

Australia & New Zealand

Latin America Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Brazil

Mexico

Rest of Latin America

Middle East & Africa Artificial Intelligence-powered (AI) Spatial Biology Market, by Country, 2022-2030 (Value US$ Mn)

Why you should buy this report:

To receive a comprehensive analysis of the prospects for global artificial intelligence-powered (AI) spatial biology market

To receive industry overview and future trends of global artificial intelligence-powered (AI) spatial biology market

To analyse the artificial intelligence-powered (AI) spatial biology market drivers and challenges

To get information on artificial intelligence-powered (AI) spatial biology market size value (US$ Mn) forecast till 2030

To get information on major Investments, Mergers & Acquisition in global artificial intelligence-powered (AI) spatial biology market industry

For More Customization @ https://www.insightaceanalytic.com/customisation/1358

Other Related Reports Published by InsightAce Analytic:

Global Spatial Omics Solutions Market

Global Proteome Profiling Services Market

Global Single-Cell Bioinformatics Software and Services Market

Global Oligonucleotide Synthesis, Modification, and Purification Services Market

Global Circulating Cell-Free DNA (ccfDNA) Diagnostics Market

About Us:

InsightAce Analytic is a market research and consulting firm that enables clients to make strategic decisions. Our qualitative and quantitative market intelligence solutions inform the need for market and competitive intelligence to expand businesses. We help clients gain competitive advantage by identifying untapped markets, exploring new and competing technologies, segmenting potential markets and repositioning products. Our expertise is in providing syndicated and custom market intelligence reports with in-depth analysis and key market insights in a timely and cost-effective manner.

Contact Us:

InsightAce Analytic Pvt. Ltd.
Tel.: +1 551 226 6109
Email: info@insightaceanalytic.com
Site Visit: www.insightaceanalytic.com
Follow Us on LinkedIn @ bit.ly/2tBXsgS
Follow Us on Facebook @ bit.ly/2H9jnDZ

Logo: https://mma.prnewswire.com/media/1729637/InsightAce_Analytic_Logo.jpg


View original content:https://www.prnewswire.com/news-releases/artificial-intelligence-powered-ai-spatial-biology-market-market-to-record-an-exponential-cagr-by-2030---exclusive-report-by-insightace-analytic-301614607.html

SOURCE InsightAce Analytic Pvt. Ltd.

Read more from the original source:
Artificial Intelligence-powered (AI) Spatial Biology Market Market to Record an Exponential CAGR by 2030 - Exclusive Report by InsightAce Analytic -...

Companies increasingly rely on technology-based solutions such as artificial intelligence, robots or mobile applications to fill workforce shortage -…

The staff policies of companies around the world increasingly rely on technology to fill the workforce shortage, with almost 60% of them estimating an increase in the use of artificial intelligence (AI), robots or chatbots, while 37% foresee a more intensive collaboration with mobile app developers and providers over the next two years, according to the study Orchestrating Workforce Ecosystems, conducted by Deloitte and MIT Sloan Management Review.

Moreover, most companies consider it beneficial to organize their workforce as an ecosystem, defined as a structure relying on both internal and external collaborators, between whom multiple relationships of interdependence and complementarity are established, in order to generate added value for the organization.

Almost all the companies participating in the study (93%) claim that the so-called external employees, such as service providers, management consultants or communication agencies, fixed-term or project-based employees, including developers and technology solution providers, are already part of the organization. On the other hand, however, only 30% of companies are ready to manage a mixed structure of the workforce.

The main reasons behind the decision to turn to external labour resources are the desire to reduce costs (62%), the intention to migrate to an on-demand work model based on a variable staffing scheme (41%) or the need to attract more employees with basic skills (40%).

The results of the study indicate that the workforce can no longer be defined strictly in terms of permanent, full-time employees. The need for flexibility, increasingly evident lately, amid events that have disrupted the global economy, such as the COVID-19 pandemic or the war in Ukraine, has led companies to look for ways to add to the workforce other solutions, especially in markets where it is deficient. But employers who want to go further in this direction need to make sure that they comply with the labour laws applicable in their jurisdiction, which, from case to case, may be more permissive or more restrictive. In the particular case of Europe, attention and consideration to the new trends in the field of workforce orchestration within a company are still required as the legal framework has yet to catch up with the challenges such new practices bring, said Raluca Bontas, Partner, Global Employer Services, Deloitte Romania.

Almost half of the companies (49%) consider that the optimal staffing structure should include both internal and external collaborators, provided that the first category is dominant. At the same time, 74% of the surveyed directors believe that the effective management of external collaborators is essential for the success of their organization.

At the same time, 89% are convinced that it is important for the external workforce to be integrated into the internal one, in order to create high-performing teams. On the other hand, 83% consider that the two categories have different expectations that require distinct offers in terms of benefits, rewards or flexibility in the way of working.

The responsibility for the workforce strategy lies with the entire top management team, mainly with the CEO (45% of respondents) and the human resources director (41%), but also with the COO, the CFO, and the strategy and legal directors, according to the study.

The Orchestrating Workforce Ecosystems study was conducted by Deloitte and the MIT Sloan Management Review among more than 4,000 respondents, executives working in 29 industries, from 129 countries across all continents.

More:
Companies increasingly rely on technology-based solutions such as artificial intelligence, robots or mobile applications to fill workforce shortage -...