
Patient Finding Is One of the Most Common Uses of Machine Learning Within Commercial Operations in Biopharma – Business Wire

WALTHAM, Mass.--(BUSINESS WIRE)--Trinity Life Sciences, a leader in global life sciences commercialization solutions, finds that 45 percent of all machine learning use cases within biopharma companies are for finding patients. According to the new TGaS report entitled AIML Use Case Landscape Report, within patient finding applications, 75 percent are designed to enhance health care provider (HCP) targeting or develop HCP alerts.

"If a life sciences company is not doing patient finding alerts/targeting with machine learning, they are probably behind the industry," said Steve Laux, Vice President, Artificial Intelligence & Machine Learning at TGaS Advisors, a division of Trinity Life Sciences. "It is clear that patient finding can have a big impact. Biopharma executives may be surprised to learn that it is more feasible and practical than they think."

The report also includes separate use cases on machine learning for:

Trinity is hosting an upcoming webinar on the topic, entitled "Everything You Wanted to Know About Patient Finding but Were Afraid to Ask: Focus on Commercial Applications," on September 29 at 1 p.m. ET. Key topics that will be addressed include:

With a roster of large, emerging, and precommercial life sciences companies, TGaS Advisors, a division of Trinity, provides robust comparative intelligence and collaborative network membership services.

Media interested in receiving a copy of the report should contact Elizabeth Marshall.

About Trinity Life Sciences

Trinity Life Sciences is a trusted strategic commercialization partner, providing evidence-based solutions for the life sciences. With 25 years of experience, Trinity is committed to solving clients' most challenging problems through exceptional levels of service, powerful tools, and data-driven insights. Trinity's range of products and solutions includes industry-leading benchmarking solutions, powered by TGaS Advisors. To learn more about how Trinity is elevating life sciences and driving evidence to action, visit the company's website.


Machine Learning Tools for COVID-19 Patient Screening and Improved Lab Test Management to Be Discussed at the 2021 AACC Annual Scientific Meeting

ATLANTA, Sept. 27, 2021 /PRNewswire/ -- Scientists have created a new machine learning tool that could help healthcare workers to quickly screen and direct the flow of COVID-19 patients arriving at hospitals. Results from an evaluation of this algorithm, along with an artificial intelligence method that improves test utilization and reimbursement, were presented today at the 2021 AACC Annual Scientific Meeting & Clinical Lab Expo.


Streamlining Hospital Admission of COVID-19 Patients

It is important for clinicians to quickly diagnose COVID-19 patients when they arrive at hospitals, both to triage them and to separate them from other vulnerable patients who may be immunocompromised or have pre-existing medical conditions. This can be difficult, however, because COVID-19 shares many symptoms with other viral infections, and the most accurate PCR-based tests for COVID-19 can take several days to yield results.

A team of researchers led by Rana Zeeshan Haider, PhD, and Tahir Sultan Shamsi, FRCP, of the National Institute of Blood Disease in Karachi, Pakistan, has therefore created a machine learning algorithm to help healthcare workers efficiently screen incoming COVID-19 patients. The scientists extracted routine diagnostic and demographic data from the records of 21,672 patients presenting at hospitals and applied several statistical techniques to develop this algorithm, which is a predictive model that differentiates between COVID-19 and non-COVID-19 patients. During validation experiments, the model performed with an accuracy of up to 92.5% when tested with an independent dataset and showed a negative predictive value of up to 96.9%. The latter means that the model is particularly reliable when identifying patients who don't have COVID-19.
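As a refresher on the two metrics cited, here is the confusion-matrix arithmetic behind accuracy and negative predictive value. The counts below are illustrative only, not taken from the study:

```python
# Confusion-matrix arithmetic for a binary COVID-19 screen.
# These counts are hypothetical, not the study's actual data.
tp, fp = 850, 60     # predicted positive: correct / incorrect
tn, fn = 1050, 40    # predicted negative: correct / incorrect

accuracy = (tp + tn) / (tp + fp + tn + fn)
# NPV: of everyone the model calls negative, the fraction who truly are.
npv = tn / (tn + fn)

print(f"accuracy = {accuracy:.3f}, NPV = {npv:.3f}")  # accuracy = 0.950, NPV = 0.963
```

A high NPV is exactly what makes such a model useful as a front-door screen: patients it labels negative can be routed away from COVID-19 wards with relatively high confidence.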

"The true negative labeling efficiency of our research advocates its utility as a screening test for rapid expulsion of SARS-CoV-2 from emergency departments, aiding prompt care decisions, directing patient-case flow, and fulfilling the role of a 'pre-test' concerning orderly RT-PCR testing where it is not handy," said Haider. "We propose this test to accept the challenge of critical diagnostic needs in resource constrained settings where molecular testing is not under the flag of routine testing panels."


Optimizing Lab Test Selection and Reimbursement

Of the 5 billion lab orders submitted each year, at least 20% are considered inappropriate. These inappropriate tests can lead to slower or incorrect diagnoses for patients. Such tests may also not be covered by Medicare if they weren't meant to be used for particular medical conditions or if they were ordered with the wrong ICD-10 diagnostic codes, which in turn raises health costs.

Rojeet Shrestha, PhD, of Patients Choice Laboratories in Indianapolis, set out along with colleagues to determine if an automated test management system known as the Laboratory Decision System (LDS) could help improve test ordering. The LDS scores potential tests based on medical necessity and testing indication, helping providers minimize test misutilization and select the best tests for a given medical condition.
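The scoring-and-ranking idea can be sketched as follows; the scoring function, field names, and ICD-10 codes here are invented illustrations, not the actual LDS product logic:

```python
# Hypothetical sketch of ranking candidate lab tests for an ICD-10 code.
# Field names and weights are invented for illustration.
def rank_tests(candidates, icd10_code):
    def score(test):
        covered = icd10_code in test["covered_codes"]  # payer-policy match
        # Coverage dominates; the medical-necessity weight breaks ties.
        return (1.0 if covered else 0.0) + test["necessity"]
    return sorted(candidates, key=score, reverse=True)

tests = [
    {"name": "Lipid panel", "necessity": 0.4, "covered_codes": {"E78.5"}},
    {"name": "HbA1c", "necessity": 0.9, "covered_codes": {"E11.9"}},
]
print([t["name"] for t in rank_tests(tests, "E11.9")])  # HbA1c ranks first
```

The point of a system like this is that a mismatched order (say, an HbA1c filed under a lipid-disorder code) scores poorly, so a better-matched alternative can be suggested before the claim is submitted.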

Using LDS, the researchers re-evaluated a total of 374,423 test orders from a reference laboratory, 48,049 of which had not met the criteria for coverage under Medicare. For 96.4% of the first 10,000 test claims, the LDS ranking system recommended alternative tests that better matched the medical necessity or had a more appropriate ICD-10 code. Of these recommendations, 80.5% would also meet Medicare policies. All of this indicates that the LDS could help correct mistaken or inappropriate lab orders.

"Our study implies that use of the automated test ordering system LDS would be extremely helpful for providers, laboratories, and payers," said Shrestha. "Use of this algorithm-based testing selection and ordering database, which rates and scores potential tests for any given disease based on clinical relevance, medical necessity, and testing indication, would eventually help providers to select and order the right test and reduce over- and under-utilization of tests."

Abstract Information

AACC Annual Scientific Meeting registration is free for members of the media. Reporters can register online.

Abstract A-226, "Machine-learning based decipherment of cell population data; a promising hospital front-door screening tool for COVID-19," will be presented during:

Student Poster Presentation: Monday, September 27, 9 a.m. - 5 p.m.
Scientific Poster Session: Tuesday, September 28, 9:30 a.m. - 5 p.m. (presenting author in attendance from 1:30 - 2:30 p.m.)

Abstract B-011, "Use of artificial intelligence for effective test utilization and to increase reimbursement," will be presented during:

Scientific Poster Session: Wednesday, September 29, 9:30 a.m. - 5 p.m. (presenting author in attendance from 1:30 - 2:30 p.m.)

All sessions will take place in the Poster Hall, which is located in Exhibit Hall C of the Georgia World Congress Center in Atlanta.

About the 2021 AACC Annual Scientific Meeting & Clinical Lab Expo

The AACC Annual Scientific Meeting offers five days packed with opportunities to learn about exciting science from September 26-30. Plenary sessions explore COVID-19 vaccines and virus evolution, research lessons learned from the pandemic, artificial intelligence in the clinic, miniaturization of diagnostic platforms, and improvements to treatments for cystic fibrosis.

At the AACC Clinical Lab Expo, more than 400 exhibitors will fill the show floor of the Georgia World Congress Center in Atlanta with displays of the latest diagnostic technology, including but not limited to COVID-19 testing, artificial intelligence, mobile health, molecular diagnostics, mass spectrometry, point-of-care, and automation.

About AACC

Dedicated to achieving better health through laboratory medicine, AACC brings together more than 50,000 clinical laboratory professionals, physicians, research scientists, and business leaders from around the world focused on clinical chemistry, molecular diagnostics, mass spectrometry, translational medicine, lab management, and other areas of advancing laboratory science. Since 1948, AACC has worked to advance the common interests of the field, providing programs that advance scientific collaboration, knowledge, expertise, and innovation. For more information, visit the AACC website.

Christine DeLong, AACC Senior Manager, Communications & PR

Molly Polen, AACC Senior Director, Communications & PR, 202.420.7612





Machine Learning: 6 Practical Applications of the Budding Technology – News Anyway


Machine learning is a novel and innovative technology that promises to revolutionize industrial and professional processes across various niches. It is a subset of artificial intelligence (AI) that uses statistical and computational techniques to develop intelligent digital systems that learn from data.

Machine learning programs allow businesses to analyze large datasets quickly and develop beneficial strategies and solutions. They can also adjust to new conditions, since machine learning algorithms adapt from data rather than relying on explicitly programmed rules.

There are various applications for this technology, from reading emails to predicting future trends and market opportunities. Here are some of the most exciting practical applications of the technology being used today.

Virtual Personal Assistants

We all know of Siri, Google Now, and Alexa as some of the most popular virtual assistants on the market. These assistants help their users find information, set reminders, and schedule their day-to-day lives using voice-based commands. These programs will search for relevant information, recall previous queries you have made, and control other resources to fulfill your command.

Machine learning plays an important role in this technology: these programs refine the information they provide you based on previous interactions. Your usage data can then be used to further improve the service, delivering results that are better tailored to your requirements.

Ultimately, virtual personal assistants are becoming increasingly important in our day-to-day lives. We rely on them more than ever before, and the integration of machine learning is helping these systems to become more valuable to us with each passing day.

Video Surveillance

The conventional way to monitor video feeds is for a single person to attempt to watch multiple screens simultaneously. This is a challenging task, and it is also exceptionally boring. However, with the advent of machine learning, it is now possible to train computer programs for this purpose.

Many video surveillance systems these days are powered by AI. Once a far-fetched Sci-Fi fantasy in films like Minority Report, machine learning can now identify crimes before they happen in some cases. AI video surveillance can track unusual behavior on video feeds. For example, if an individual has been standing still for a long time and acting suspiciously, the system can automatically flag this as potentially dangerous behavior.

Human attendants can then review the feed that has been flagged, ultimately helping to avoid mishaps and prevent crime. Additionally, the more the system successfully flags situations where there is genuinely suspicious activity, the more accurate it will become in the future. This is thanks to backend machine learning underlying the system.

Spam and Malware Filtering

One of the more annoying aspects of the digital age is the seemingly relentless spam email that clogs up our inboxes. Thankfully, machine learning provides solutions to this issue.

People who send out spam emails are constantly evolving and adapting their techniques to bypass the rule-based spam filters that individuals set up in their inboxes. Machine learning enables new and more effective methods for catching these potentially harmful emails.

Using past data, machine learning algorithms can filter out potentially harmful emails while adding scammers' newer techniques to their databases. This helps email filters stay up to date with current trends and react to them, ultimately keeping you safer.

More than 450,000 new forms of malware are detected each day, but these tend to share between 90 and 98% of their code with previous iterations. Machine learning algorithms can recognize these coding patterns and detect new malware that resembles the code of older forms. Therefore, using machine learning in email and malware filters can significantly boost cybersecurity.
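Code overlap of that kind can be measured with simple set similarity. This is a toy illustration of the idea, with invented sample bytes, not a production malware detector:

```python
# Toy variant detection: two samples that share most of their code have a
# high Jaccard similarity over byte n-grams. Sample bytes are invented.
def shingles(data: bytes, n: int = 4):
    """All overlapping n-byte substrings of the sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity: shared shingles over all shingles."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

known = b"push ebp; mov ebp, esp; call decrypt_payload; jmp start"
variant = known.replace(b"decrypt_payload", b"decrypt_payload2")
unrelated = b"totally unrelated program bytes with no shared routines"

print(similarity(known, variant) > similarity(known, unrelated))  # True
```

A real detector would hash shingles and compare against millions of known samples, but the principle is the same: near-identical variants score high, unrelated code scores low.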

Search Engine Result Refinement

Google and other search engines have begun to implement machine learning to refine the results shown for your queries. Every time you search the internet using Google, a backend algorithm watches and analyzes how you react to your results.

For example, if you open the top result and then stay on the ensuing website for an extended period, the search engine will assume that this website displayed information relevant to your query. On the other hand, if you get to the second or third page of results, the search engine will assume that the first results were irrelevant to the query. This helps search engines display more helpful information to users, streamlining our experience on the internet.
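As a toy illustration of that implicit feedback loop (not Google's actual ranking algorithm), dwell time can be turned into a relevance score that reorders future results:

```python
# Toy dwell-time feedback: results that keep users on the page gain score
# for future queries; quick bounces count against them. All names invented.
from collections import defaultdict

scores = defaultdict(float)

def record_click(query: str, url: str, dwell_seconds: float):
    # A long dwell is an implicit relevance vote; a bounce is a small penalty.
    scores[(query, url)] += 1.0 if dwell_seconds >= 30 else -0.5

def rank(query: str, urls):
    return sorted(urls, key=lambda u: scores[(query, u)], reverse=True)

record_click("ml tutorial", "a.example", 120)  # user read for two minutes
record_click("ml tutorial", "b.example", 5)    # user bounced immediately
print(rank("ml tutorial", ["b.example", "a.example"]))  # a.example first
```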

Customer Service Chatbots

Many websites nowadays offer customers the opportunity to chat with a customer service representative while navigating the site itself. However, it is not always a real-life person that answers your queries; most of the time, when you use these services, you are actually talking to a chatbot.

These automated chatbots extract the relevant details from the messages you send them and then point the customer to the relevant information on the website.

With every passing moment, these chatbots become more advanced thanks to machine learning. They can understand user queries better and deliver better results thanks to the ever-evolving nature of machine learning algorithms.

Online Fraud Detection

Another way machine learning promises to make using online services safer is in its applications for tracking internet fraud.

For example, PayPal, one of the world's largest money exchange platforms, is integrating machine learning into its services to protect against money laundering. The company is implementing a set of tools that includes machine learning algorithms to analyze and compare millions of transactions. Using past data, the systems can distinguish between legitimate and illegitimate transactions between buyers and sellers.

Suspicious transactions can then be flagged for investigation. If there are any illegitimate reasons for these transactions, such as moving ill-gotten money from place to place to launder it, PayPal can notify the relevant authorities.
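A minimal sketch of that flagging step, assuming a simple per-account outlier rule rather than PayPal's actual models:

```python
# Flag a transaction whose amount sits far outside an account's history.
# Illustrative z-score thresholding, not any payment platform's real system.
from statistics import mean, stdev

def flag_suspicious(history, new_amount, z_cutoff=3.0):
    """Return True when the new amount is an extreme outlier for this account."""
    mu, sigma = mean(history), stdev(history)
    z = (new_amount - mu) / sigma if sigma else float("inf")
    return z > z_cutoff

past = [25.0, 40.0, 32.0, 28.0, 35.0]   # an account's recent amounts
print(flag_suspicious(past, 30.0))      # False: typical amount
print(flag_suspicious(past, 5000.0))    # True: extreme outlier, flag for review
```

Production systems learn far richer features (counterparties, geography, timing), but the shape is the same: score each transaction against learned behavior and queue the outliers for human investigation.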

How You Can Capitalize on Machine Learning

As stated above, there are myriad uses for machine learning across many different industries and markets. Therefore, you might be wondering how you can harness machine learning to improve your day-to-day operations.

One way to enhance your understanding of the technology is to gain a machine learning certification. Taking a machine learning course online can help you understand the benefits of practical machine learning, and such courses are valuable for gaining knowledge of the technology's processes. This can be a valuable resource for integrating things like machine learning analytics and automation into your operations.

Ultimately, taking a short course on the benefits and intricacies of machine learning can significantly boost productivity and efficiency within your operations, and with the advent of online teaching it is now possible to earn a machine learning certificate entirely online. Such a course can help you identify areas that can be streamlined by technology, freeing up time for more pressing matters.

The Takeaway

Overall, machine learning and AI are here to stay in the modern world, and their applications could change the world as we know it. From optimizing processes to identifying certain forms of cancer, machine learning has been touted as a powerful tool for the progression of humanity in the 21st century.

While some of its applications are still in their infancy, the technology is widely used across other facets of life. The use cases outlined above represent some of the practical applications of machine learning presently in use, but expect more developments in this field in the coming months and years.


Machine Learning Uncovers Genes of Importance in Agriculture and Medicine – NYU News

Machine learning can pinpoint genes of importance that help crops to grow with less fertilizer, according to a new study published in Nature Communications. It can also predict additional traits in plants and disease outcomes in animals, illustrating its applications beyond agriculture.

Using genomic data to predict outcomes in agriculture and medicine is both a promise and a challenge for systems biology. Researchers have been working to determine how best to use the vast amount of genomic data available to predict how organisms respond to changes in nutrition, toxins, and pathogen exposure, which in turn would inform crop improvement, disease prognosis, epidemiology, and public health. However, accurately predicting such complex outcomes in agriculture and medicine from genome-scale information remains a significant challenge.

In the Nature Communications study, NYU researchers and collaborators in the U.S. and Taiwan tackled this challenge using machine learning, a type of artificial intelligence used to detect patterns in data.

"We show that focusing on genes whose expression patterns are evolutionarily conserved across species enhances our ability to learn and predict genes of importance to growth performance for staple crops, as well as disease outcomes in animals," explained Gloria Coruzzi, Carroll & Milton Petrie Professor in NYU's Department of Biology and Center for Genomics and Systems Biology and the paper's senior author.

"Our approach exploits the natural variation of genome-wide expression and related phenotypes within or across species," added Chia-Yi Cheng of NYU's Center for Genomics and Systems Biology and National Taiwan University, the lead author of the study. "We show that paring down our genomic input to genes whose expression patterns are conserved within and across species is a biologically principled way to reduce the dimensionality of the genomic data, which significantly improves the ability of our machine learning models to identify which genes are important to a trait."
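The "paring down" step described above can be sketched in a few lines; the gene names, expression values, and the conserved set here are invented for illustration:

```python
# Keep only genes whose expression response is conserved across species,
# then hand the reduced matrix to a machine learning model.
# Gene names and values are invented for illustration.
conserved = {"G1", "G3"}  # assumed output of a cross-species conservation test

samples = [
    {"G1": 0.9, "G2": 0.1, "G3": 0.7},   # expression profile, plant A
    {"G1": 0.2, "G2": 0.8, "G3": 0.1},   # expression profile, plant B
]

# Dimensionality reduction: drop every non-conserved gene before training.
reduced = [{g: s[g] for g in conserved} for s in samples]
print(sorted(reduced[0]))  # ['G1', 'G3']
```

Filtering features on biological grounds, rather than purely statistical ones, is the "biologically principled" dimensionality reduction the authors describe.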

As a proof of concept, the researchers demonstrated that genes whose responsiveness to nitrogen is evolutionarily conserved between two diverse plant species (Arabidopsis, a small flowering plant widely used as a model organism in plant biology, and varieties of corn, America's largest crop) significantly improved the ability of machine learning models to predict genes of importance for how efficiently plants use nitrogen. Nitrogen is a crucial nutrient for plants and the main component of fertilizer; crops that use nitrogen more efficiently grow better and require less fertilizer, which has economic and environmental benefits.

The researchers conducted experiments that validated eight master transcription factors as genes of importance to nitrogen use efficiency. They showed that altered gene expression in Arabidopsis or corn could increase plant growth in low nitrogen soils, which they tested both in the lab at NYU and in cornfields at the University of Illinois.


Career Twist: Top Machine Learning Jobs to Apply this Weekend – Analytics Insight

Before scientists introduced artificial intelligence technologies to us, sci-fi movies did. Famous Hollywood films like 2001: A Space Odyssey portrayed machines and technology without a clear vision of what was approaching in the future; somehow, much of what they showed has turned out to be reality today. Artificial intelligence is a broad subject, accelerating every industry by constantly unleashing new developments, applications, and solutions. One subset of AI, machine learning, is advancing the capabilities of technology using mathematics and software programming.

So far, machine learning is greatly admired in the business sector for its prominence and benefits. Companies that routinely engage with customers employ machine learning professionals to streamline repetitive work. For example, the Netflix recommendation system, Facebook's suggestions, and traffic alerts on Google are powered by machine learning technology. Owing to this increasing usage, the demand for machine learning jobs has also gone up: according to one job website, machine learning jobs are ranked #1 among the top jobs in the US, citing a 344% growth rate and an average salary of US$140,000 per year. Landing a machine learning job is not easy, however; it requires special skills in programming and system design, along with other basics. Analytics Insight has listed the top machine learning jobs that you should apply to this weekend to gear up your career.

Location(s): Gurgaon, Bengaluru

Roles and Responsibilities: As a machine learning and automation expert (data science) at Genpact, the candidate is expected to focus on three main areas: consulting, solutioning, and pre-sales. He/she should translate the customer's business needs into a techno-analytic problem and work with the appropriate statistical and technology teams to bring large-scale analytic solutions to fruition. They should also participate in assessments of automation and predictive analytics opportunities and engagements, and actively collaborate with other team members to develop solutions in the AI, ML, and BI areas that address business problems.


Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: IBM expects the Data Engineer: Machine Learning to help transform the company's clients' data into tangible business value by analyzing information, communicating outcomes, and collaborating on product development. The candidate should have the data expertise to manipulate and integrate big data and different data types, such as videos, images, documents, and other structured data. He/she should be able to define problems and opportunities in a complex business area and develop suitable advanced analytics products.


Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: As a machine learning compiler engineer at Qualcomm Technologies, the candidate should research, develop, design, enhance, and implement the different compiler components that address the machine-learning-based performance and code-size needs of customer workloads and benchmarks. He/she should analyze software requirements, determine the feasibility of a design within the given constraints, consult with architecture and hardware engineers, and implement the software solutions best suited for Qualcomm's SoCs. They should also analyze and identify system-level integration issues and interface with the software development, integration, and test teams.


Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: The machine learning engineer at Intel is expected to profile various algorithms and platforms, using simulation or platform systems, for various internal and external XPUs and architectures. He/she should be proficient in performance and functional model development in Python and should influence IP architecture development through profiling and modeling results. They should also continually validate performance on pre-silicon, FPGA, and silicon systems.


Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: By joining as a Senior Machine Learning Engineer at Microsoft, the candidate will become part of an interactive, fast-paced environment where they can drive impact and innovation and apply state-of-the-art techniques to solve problems at global scale. He/she should work closely with the engineering, product management, analytics and transformation, and data science teams from across MCDS to deliver outstanding value to stakeholders and the company's products. They should build and enhance the frameworks and tools that Microsoft uses in its MLOps platforms, and build out services for impact assessment, back-testing, model performance monitoring, data drift, model drift, concept drift, explainability, experimentation, performance testing, reproducibility checks, and more.


Apply here for the job.


Explainable AI Is the Future of AI: Here Is Why – CMSWire


Artificial intelligence is going mainstream. If you're using Google Docs, Ink for All, or any number of digital tools, AI is being baked in. AI is already making decisions in the workplace, around hiring, customer service, and more. However, a recurring issue with AI is that it can be a bit of a "black box": a mystery as to how it arrived at its decisions. Enter explainable AI.

Explainable artificial intelligence, or XAI, is similar to a normal AI application except that the processes and results of an XAI algorithm can be explained in terms humans understand. The complex nature of artificial intelligence means that AI makes decisions in real time based on the insights it has discovered in the data it has been fed. When we do not fully understand how AI makes these decisions, we cannot fully optimize the AI application to be all that it is capable of. XAI enables people to understand how AI and machine learning (ML) are being used to make decisions, predictions, and insights. Explainable AI allows brands to be transparent in their use of AI applications, which increases user trust and the overall acceptance of AI.

There is a valid need for XAI if AI is going to be used across industries. According to a report by FICO, 65% of surveyed employees could not explain how AI model decisions or predictions are determined. The benefits of XAI are beginning to be well recognized, and not just by scientists and data engineers. The European Union's draft AI regulations specify XAI as a prerequisite for the eventual normalization of machine learning in society. Standardization organizations including the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) also recognize the importance of XAI to the acceptance and trust of AI in the future.

Philip Pilgerstorfer, data scientist and XAI specialist at QuantumBlack, has indicated that the benefits of XAI include:

This is because most AI/ML operates in what is referred to as a black box: a system that provides no discernible insight into how it comes to its decisions. Many AI/ML applications are moderately benign decision engines, such as online retail recommender systems, for which transparency or explainability is not absolutely necessary. For other, riskier decision processes, such as medical diagnoses in healthcare, investment decisions in the financial industry, and safety-critical systems in autonomous automobiles, the stakes are much higher. As such, the AI used in those systems should be explainable, transparent, and understandable in order to be trusted, reliable, and consistent.

When brands are better able to understand potential weaknesses and failures in an application, they are better prepared to maximize performance and improve the AI app. Explainable AI enables brands to more easily detect flaws in the data model, as well as biases in the data itself. It can also be used for improving data models, verifying predictions, and gaining additional insights into what is working, and what is not.

"Explainable AI has the benefits of allowing us to understand what has gone wrong and where it has gone wrong in an AI pipeline when the whole AI system makes an erroneous classification or prediction," said Marios Savvides, Bossa Nova Robotics Professor of Artificial Intelligence, Electrical and Computer Engineering and Director of the CyLab Biometrics Center at Carnegie Mellon University. "These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze and more difficult to pinpoint exactly where and why an error has occurred."

Many businesses today use AI/ML applications to automate the decision-making process, as well as to gain analytical insights. Data models can be trained so that they are able to predict sales based on variable data, while an explainable AI model would enable a brand to increase revenue by determining the true drivers of sales.
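One common way to surface those "true drivers" is permutation importance: shuffle one feature's values and measure how much the model's error grows. The toy "sales" model and features below are invented for illustration; they are not from any vendor's system:

```python
# Permutation importance: the error increase after shuffling one feature
# column reveals how much the model actually relies on that feature.
import random

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Error increase after shuffling one feature column."""
    rng = random.Random(seed)
    col = [row[feature_idx] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return mse(model, X_perm, y) - mse(model, X, y)

# Toy "sales" model that truly depends only on feature 0 (say, ad spend);
# feature 1 (say, day of week) is noise the model ignores.
def sales_model(x):
    return 3.0 * x[0]

X = [[float(i), float(i % 7)] for i in range(20)]
y = [sales_model(row) for row in X]

drivers = [permutation_importance(sales_model, X, y, j) for j in (0, 1)]
print(drivers[0] > drivers[1])  # True: feature 0 is the true driver
```

Shuffling the ignored feature leaves the error unchanged, while shuffling the real driver degrades it sharply; ranking features by this gap is one simple, model-agnostic route to the explanations the article describes.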

Kevin Hall, CTO and co-founder of Ripcord, an organization that provides robotics, AI and machine learning solutions, explained that although AI-enabled technologies have proliferated throughout enterprise businesses, there are still complexities that exist that are preventing widespread adoption, largely that AI is still mysterious and complicated for most people. "In the case of intelligent document processing (IDP), machine learning (ML) is an incredibly powerful technology that enables higher accuracy and increased automation for document-based business processes around the world," said Hall. "Yet the performance and continuous improvement of these models is often limited by a complexity barrier between technology platforms and critical knowledge workers or end-users. By making the results of ML models more easily understood, Explainable AI will allow for the right stakeholders to more directly interact with and improve the performance of business processes."

Related Article: What Is Explainable AI (XAI)?

It's a fact that unconscious or algorithmic biases are built into AI applications. That's because no matter how advanced or smart an AI app is, or whether it uses ML or deep learning, it was developed by human beings, each of whom has their own unconscious biases, and a biased data set may have been used to train the algorithm. "Explainable AI systems can be architected in a way to minimize bias dependencies on different types of data, which is one of the leading issues when complete black box solutions introduce biases and make errors," explained Professor Savvides.

A recent CMSWire article on unconscious biases reflected on Amazon's failed use of AI for job application vetting. Although the shopping giant did not use prejudiced algorithms on purpose, its data set looked at hiring trends over the last decade and suggested hiring similar job applicants for positions with the company. Unfortunately, the data revealed that the majority of those hired were white males, a fact that itself reveals the biases within the IT industry. Eventually, Amazon gave up on the use of AI for its hiring practices and went back to relying upon human decision-making. Many other biases can sneak into AI applications, including racial bias, name bias, beauty bias, age bias, and affinity bias.

Fortunately, XAI can be used to reduce unconscious biases within AI data sets. Several AI organizations, including OpenAI and the Future of Life Institute, are working with other businesses to ensure that AI applications are ethical and equitable for all of humanity.

Being able to explain why a person was not selected for a loan or a job will go a long way toward improving public trust in AI algorithms and machine learning processes. "Whether these models are clearly detailing the reason why a loan was rejected or why an invoice was flagged for fraud review, the ability to explain the model results will greatly improve the quality and efficiency of many document processes, which will lead to cost savings and greater customer satisfaction," said Hall.

Related Article: Ethics and Transparency: How We Can Reach Trusted AI

Along with the unconscious biases we previously discussed, XAI has other challenges to conquer, including:

Professor Savvides said that XAI systems need to be architected into different sub-task modules whose performance can be analyzed individually. "The challenge is that these different AI/ML components need compute resources and require a data pipeline, so in general they can be more costly than an end-to-end system from a computational perspective."

There is also the issue of additional errors in an XAI algorithm, but there is a tradeoff because errors in an XAI algorithm are easier to track down. "Additionally, there may be cases where a black-box approach may give fewer performance errors than an XAI system," he said. "However, there is no insight into the failure of the traditional AI approach other than trying to collect these cases and re-train, whereas the XAI system may be able to pinpoint the root cause of the error."

As AI applications become smarter and are used in more industries to solve bigger and bigger problems, the need for a human element in AI becomes more vital. XAI can help do just that.

The next frontier of AI is the growth and improvement that will happen in Explainable AI technologies. They will become more agile, flexible and intelligent when deployed across a variety of new industries. "XAI is becoming more human-centric in its coding and design," reflected AJ Abdallat, CEO of Beyond Limits, an enterprise AI software solutions provider. "We've moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems: those problems without historical data or references. Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit their knowledge base even after it's been deployed. As it learns by interacting with more problems, data and domain experts, the systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless."

Related Article: Make Responsible AI Part of Your Company's DNA

Artificial intelligence is being used across many industries to provide everything from personalization, automation and recommendations to financial decisioning and healthcare support. For AI to be trusted and accepted, people must be able to understand how it works and why it makes the decisions it does. XAI represents the evolution of AI and offers industries the opportunity to create AI applications that are trusted, transparent, unbiased and justified.

Read the rest here:
Explainable AI Is the Future of AI: Here Is Why - CMSWire

Discovery Education Collaborates With AWS to Enhance Recommendation Engine – Yahoo Finance

SILVER SPRING, Md., September 27, 2021 /3BL Media/ -- Discovery Education, a worldwide edtech leader supporting learning wherever it takes place, today announced that it has enhanced its K-12 learning platform with Amazon Web Services (AWS) machine learning capabilities. The pioneering use of machine learning within the Discovery Education platform helps educators spend less time searching for digital resources and more time teaching.

Connecting educators to a vast collection of high-quality, standards-aligned content, ready-to-use digital lessons, intuitive quiz and activity creation tools, and professional learning resources, Discovery Education's award-winning learning platform facilitates engaging, daily instruction in any learning environment. Several months of planning and deep collaboration with AWS enabled Discovery Education to innovatively integrate Amazon Personalize technology into the Just For You area of its K-12 platform. The Just For You row connects educators to a unique, personalized set of resources based on the grade level taught, preferences, and assets used in the past.
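For readers curious what an Amazon Personalize integration looks like in practice, the sketch below shows how a backend might request per-user recommendations via the AWS SDK for Python. The campaign ARN, user id, and helper names are illustrative assumptions; the details of Discovery Education's actual integration are not public.

```python
def build_recommendation_request(campaign_arn, user_id, num_results=10):
    """Assemble parameters for an Amazon Personalize runtime call.

    The campaign ARN and user id are hypothetical placeholders, not
    Discovery Education's real resources.
    """
    return {
        "campaignArn": campaign_arn,
        "userId": user_id,
        "numResults": num_results,
    }


def fetch_just_for_you(campaign_arn, educator_id):
    """Return a list of recommended item ids for one educator."""
    import boto3  # AWS SDK for Python; requires configured credentials

    client = boto3.client("personalize-runtime")
    params = build_recommendation_request(campaign_arn, educator_id)
    # get_recommendations returns ranked items for the given user.
    response = client.get_recommendations(**params)
    return [item["itemId"] for item in response["itemList"]]
```

Personalize ranks items using the user's past interactions, which is how a row like Just For You can adapt to the grade level taught and assets used before.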

As the ability to deliver more sophisticated digital experiences has evolved over time, so has the expectation and demand from teachers seeking a more personalized user experience similar to that which they receive through interactions with brands in the retail, media, and entertainment spaces. Today's tech-savvy teachers expect real-time, curated experiences across the digital resources they use daily, and the integration of Amazon Personalize technology into Discovery Education's digital resources for the first time delivers that experience to users in the K-12 education space.

"For some time, educators have desired more resources to help personalize teaching and learning. ML technology is already being used to curate our entertainment experiences, help with workforce productivity, and more, and its exciting to see this innovation is being integrated into classrooms, said Alec Chalmers, Director, EdTech and GovTech Markets at AWS. Amazon Personalize creates high quality recommendations that better respond to the specific needs and preferences of todays learners, which ultimately improves engagement in teaching and learning. AWS is proud to be collaborating with Discovery Education to support the educators and students they serve.

Discovery Education's team is continuously adding, contextualizing, and organizing exciting new content and timely, relevant resources to the platform in response to current events and the ever-evolving needs of educators. These resources, sourced from trusted partners, are aligned to state and national standards and help educators bring the outside world into teaching and learning every day. They are the centerpiece of the Just For You row, which adapts and changes with user behavior and preferences over time.

The K-12 learning platform is designed to work within school systems' existing infrastructure and workflows and provides safe, secure, simple access methods for educators and students. Through expanded, lasting partnerships with Brightspace, Clever, and others, integrating Discovery Education's K-12 learning platform into existing IT architecture is easier than ever.

"Continuous improvement is a core value at Discovery Education, and as such, we are constantly seeking innovative new ways to improve our resources and save educators time," said Pete Weir, Discovery Education's Chief Product Officer. "Integrating AWS's robust machine learning technology into our K-12 platform's recommendation engine helps improve educators' productivity by providing the digital content they want and need even faster than before. We are incredibly proud to be collaborating with AWS and pioneering how education technology can personalize teaching and learning. The success of this collaboration to date encourages my team to look for even more places within our services to integrate machine learning technology and improve our services' ability to adapt to our users."

For more information about Discovery Education's digital resources and professional learning services, visit and stay connected with Discovery Education on social media through Twitter and LinkedIn.


About Discovery Education

Discovery Education is the worldwide edtech leader whose state-of-the-art digital platform supports learning wherever it takes place. Through its award-winning multimedia content, instructional supports, and innovative classroom tools, Discovery Education helps educators deliver equitable learning experiences engaging all students and supporting higher academic achievement on a global scale. Discovery Education serves approximately 4.5 million educators and 45 million students worldwide, and its resources are accessed in over 140 countries and territories. Inspired by the global media company Discovery, Inc., Discovery Education partners with districts, states, and trusted organizations to empower teachers with leading edtech solutions that support the success of all learners. Explore the future of education at

Stephen Wakefield
Discovery Education
Phone:

Continue reading here:
Discovery Education Collaborates With AWS to Enhance Recommendation Engine - Yahoo Finance

YC-backed Malloc wants to take the sting out of mobile spyware – TechCrunch

Mobile spyware is one of the most invasive and targeted kinds of unregulated surveillance, since it can be used to track where you go, who you see and what you talk about. And because of its stealthy nature, mobile spyware can be nearly impossible to detect.

But now one Y Combinator-backed startup is building an app with the aim of helping anyone identify potential mobile spyware on their phones.

Malloc, a Cyprus-based early-stage company, made its debut with Antistalker, an app that monitors the sensors and apps running on a phone (initially Android only) to detect if the microphone or camera is quietly activated or data is transmitted without the user's knowledge. That's often a hallmark of consumer-grade spyware, which can also steal messages, photos, web browsing history and real-time location data from a victim's phone without their permission.

The rising threat of spyware has prompted both Apple and Google to introduce indicators when a device's microphone or camera is used. But some of the more elusive and more capable spyware (the kind typically used by governments and nation-states) can slip past the hardened defenses built into iOS and Android.

That's where Malloc says Antistalker comes in. Malloc's co-founders Maria Terzi, Artemis Kontou and Liza Charalambous built the app around a machine learning (ML) model, which allows the app to detect and block device activity that could be construed as spyware recording or sending data.

Malloc co-founders Liza Charalambous (left), Maria Terzi (middle), Artemis Kontou (right). Image Credits: Malloc/supplied

Terzi, who specializes in ML, told TechCrunch that the startup trained its ML model using known stalkerware apps to help simulate real-world surveillance. Machine learning helps to improve the app's ability to detect a broad range of new and previously unknown threats over time, rather than relying on the more traditional methods of scanning for signatures of known spyware apps.

"We already know applications that are spyware. Why don't we use their behavior to train a machine learning model that will then be able to recognize new spyware?" Terzi told TechCrunch.

The ML model runs on the device to be more privacy-preserving than sending data to the cloud. Malloc said it collects some anonymized data to improve the ML model over time, helping the app detect more threats as they emerge on users' devices.

The app also looks for anomalous app activity, like bursts of data sent by apps that haven't been used for days, and allows the user to look at which apps have accessed the microphone and camera and when.
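The "bursts of data from an app unused for days" signal described above can be caricatured as a simple rule over per-app telemetry. This is a minimal sketch for illustration only: the field names, thresholds, and data shapes are assumptions, and Malloc's production system uses a trained ML model rather than hand-set rules.

```python
from datetime import datetime, timedelta


def flag_suspicious_apps(app_stats, now, idle_days=3, burst_bytes=5_000_000):
    """Flag apps that sent a large burst of data despite not being
    opened by the user for several days.

    app_stats: list of dicts with hypothetical fields 'name',
    'last_opened' (datetime), and 'bytes_sent_last_hour' (int).
    """
    flagged = []
    for app in app_stats:
        idle = now - app["last_opened"] >= timedelta(days=idle_days)
        bursty = app["bytes_sent_last_hour"] >= burst_bytes
        if idle and bursty:
            flagged.append(app["name"])
    return flagged
```

A learned model generalizes this idea: instead of two fixed thresholds, it weighs many behavioral features at once, which is how previously unseen spyware can still be caught.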

It's a bet that's already catching the eye of investors, with the startup securing close to $2 million from Y Combinator and the Urban Innovation Fund.

Terzi said the company has more than 80,000 monthly active users, and growing, since launching earlier this year, and plans an enterprise offering to help companies protect their employees from surveillance threats. The company is also planning to launch an iOS app in the near future.

Excerpt from:
YC-backed Malloc wants to take the sting out of mobile spyware - TechCrunch

Harnessing machine learning to help patients with ALS – The Irish Times

What inspired your interest in using machine learning in healthcare?

I studied computer science as an undergraduate in Athens, where I grew up, and I went on to do a master's degree in biostatistics in Glasgow. I liked that biostatistics applies to real-world problems, and my research there used machine learning to look at data from patients who had heart failure.

What prompted you to move to Ireland?

My partner and I moved to Dublin, and I got a PhD position at University College Dublin and FutureNeuro with Dr Catherine Mooney, to work more on how machine learning can analyse healthcare records.

The idea is that machine learning might be able to find less linear links between patient data and their needs, and this could help to support clinicians when they are planning care for the patient.

Tell us about the project you have been working on.

My project has been looking at patients with ALS, or motor neurone disease. Over the years, Prof Orla Hardiman and her team at Trinity College Dublin have worked with groups across Europe, and have gathered data about ALS patients with their consent.

With funding from the Health Research Board and other agencies I was able to interrogate these anonymised data, and additional information that the team was able to provide from consenting caregivers and patients, to explore what factors could be likely to affect their quality of life.

What did you find, using this machine learning approach?

There were some aspects for the patients, like the timing of when the disease symptoms started and whether they have issues with breathing when lying down, that could reduce their quality of life. Also for primary caregivers, how they view their role and purpose seemed to be linked to their quality of life.

How might the technology be used to help people with ALS?

The models that we made can be used as part of a clinical decision support system, which could automatically flag up to a nurse or doctor a pattern of patient or caregiver characteristics that suggests the patient or caregiver might be at risk of greater psychological stress or a lower quality of life. This would help them to build a personalised plan to support the patient and caregiver.

What has kept you going through the research?

The human side of it. I was able to visit an MND clinic and observe some of the sessions with the consent of those attending, which gave me an important context: these data aren't just numbers I was working with on the computer, we are talking about real-world conditions and interactions.

Also we did a user study on a prototype clinical decision support system with clinicians, to see whether and how clinicians would use such a system, and it was encouraging to see our research being translated into a real-world context.

You recently wrote up your thesis. How did you find that?

It has been quite rewarding to see everything fitting together. I was also able to move back to Athens and I will defend my thesis online, which is easier for all the examiners than travelling.

And finally, how do you like to take a break?

I like to do creative things and work with my hands, to get a break from the computer. During the lockdown in Ireland I made and decorated cakes and I also did embroidery. I find its a good balance to sitting looking at a computer screen.

See the original post:
Harnessing machine learning to help patients with ALS - The Irish Times

Cellino is using AI and machine learning to scale production of stem cell therapies – TechCrunch

Cellino, a company developing a platform to automate stem cell production, presented today at TechCrunch Disrupt 2021's Startup Battlefield to detail how its system, which combines AI technology, machine learning, hardware, software and, yes, lasers, could eventually democratize access to cell therapies. It aims to bring down costs associated with the manufacturing of human cells, while also increasing yields.

Founded by a team whose backgrounds include physics, stem cell biology and machine learning, Cellino operates in the regenerative medicine industry. This space is currently undergoing a revolution, where new developments in gene and cell therapies could lead to breakthrough cures for a number of leading diseases. For example, personalized human retinal cells could be transplanted to halt or reverse age-related macular degeneration, which can cause blindness. But today, such cell therapies are out of reach for most people because the process of cell production hasn't been automated or made scalable and efficient.

Instead, the human cells used now in these clinical trials are mostly made by hand by scientists who look at cells and evaluate, using their many years of training and expertise, which cells are low quality and need to be removed. They then scrape away those unwanted cells with a pipette tip. The process, as you can imagine, is time-consuming and produces only a small yield. In this manual process, you'd see a yield of about 10% to 20% of cells that would be able to pass the final quality assurance tests required for human transplant.

Cellino is working to improve this process in order to produce more cells of higher quality. Its goal is to push the yield to at least 80% over the next three years.

To do so, Cellinos system is automating all the human steps in the production process using machine learning techniques.

To identify which cells are high or low quality, the company is collecting large training data sets with which it teaches algorithms to make determinations about cell quality based on a variety of factors. This includes cell morphology, meaning the shape, size and density of cells. Fluorescence-based surface markers can also be used to identify other factors of importance to the line of cells being produced, like the location of proteins on the cell, for example.

By using machine learning and AI to do the identification, based on standard and well-accepted biological assays used by the FDA, the system could move away from human annotation and the variability it introduces into the process of human cell production.
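The morphology-based screening described above can be caricatured as a quality gate over a few measured features. The sketch below is illustrative only: every feature name and threshold is an assumption, and Cellino's actual classifiers are learned from large labeled data sets rather than hand-set rules like these.

```python
def classify_cell(cell, area_range=(80.0, 220.0), min_circularity=0.7,
                  min_marker_intensity=0.4):
    """Toy quality gate over hypothetical cell morphology features.

    cell: dict with assumed fields 'area_um2' (size), 'circularity'
    (shape regularity, 0-1), and 'marker_intensity' (normalized
    fluorescence surface-marker signal, 0-1).
    """
    lo, hi = area_range
    if not (lo <= cell["area_um2"] <= hi):
        return "low"        # abnormal size
    if cell["circularity"] < min_circularity:
        return "low"        # irregular shape
    if cell["marker_intensity"] < min_marker_intensity:
        return "low"        # weak fluorescence marker signal
    return "high"
```

In the real pipeline, cells the model labels low quality become laser targets; a learned model replaces the fixed thresholds so the gate can weigh many morphology and marker features jointly.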

After Cellino's software has identified which low-quality cells need to be removed, it uses a laser to target them. The laser creates cavitation bubbles large enough to kill the cell, but in a highly localized way, so neighboring cells are not harmed, as the thermal heat does not dissipate to nearby cells. This is also a more precise technique than the manual method. (Cellino's system has a 5-micron resolution, while cells are 10-15 microns in size.) This results in a throughput of about 5,000 cells per minute, which is highly efficient compared with manual techniques.

Over time, this automation and efficiency could bring the cost down from nearly a million dollars per patient, which is what clinicians have to pay today to run a clinical trial, when outsourcing cell production. Cellino aims to get the cost down into the tens of thousands of dollars over time.

By scaling cell production, personalized cell therapies could also help a broader range of patients compared with other techniques relying on banks of stem cells. These aren't always genetically diverse samples, leaving smaller ethnic groups out of the progress being made in this space. Banked cells also require recipients to take immunosuppressants, as the cells aren't your own and the body may reject them.

The use of lasers is an idea developed by Cellino co-founder and CEO Nabiha Saklayen, who patented an invention in cellular laser editing while at Harvard earning her PhD in Physics. She was encouraged to turn the technology into a startup by her collaborators, who included leading biologists like George Church and David Scadden.

"Not all scientists become entrepreneurs, and I became an entrepreneur because I had an amazing support network around me," notes Saklayen of the push to join the startup space. She immediately recruited Marinna Madrid, an applied physicist she had worked with for years on the co-invention of laser-based intracellular delivery techniques, as her other co-founder. To gain more mentorship on growing a startup, Saklayen turned to the Boston-area startup ecosystem.

"I didn't know anything about startups. I wanted to work with people who knew how to build companies, how to commercialize technology, how to build instruments, and the Boston ecosystem is fantastic in that way. So I started connecting with lots of people in those early weeks, anybody that was in the biotech realm or Harvard Business School," Saklayen explains.

This led her to Cellino co-founder and CTO Mattias Wagner, who had built companies before in the optics and instrumentation space.

That's how the founding team came together. "It was very complementary because Marinna and I were co-inventors of the original technology that inspired the platform and Mattias brought this tremendous background in semiconductors and optical instrumentation," says Saklayen.

Since its 2017 founding, Cellino has gone on to raise $16 million in seed funding in a round co-led by The Engine and Khosla Ventures, with participation from Humboldt Fund and 8VC.

The company is now collaborating with the NIH on compatibility studies. Currently, that means Cellino is making stem cells on its system, which it then compares with the ones made at the NIH that are already being tested in humans for personalized cell therapies for retinal diseases. Cellino later hopes to use its system to address areas like Parkinson's, muscle disorders and skin grafts, among others.

The company wanted to present at TechCrunch Disrupt to share more about what its building and to source new talent.

"For me, it's about talking about this idea around the democratization and industrialization of cell therapies. I really want to get that message out because that is the movement we need to drive over the next decade for all of these cell therapies to be accessible to all patients," says Saklayen.

"Cellino's angle is also very unique in the sense that, because we have this automated system to manufacture human cells, our system could make cells for every human being in this country, in the world," she continues. "And there are a lot of cell therapy approaches that are looking to use off-the-shelf cells and off-the-shelf therapies, which will only work for certain parts of the population. As the U.S. becomes more diverse, ethnically, we need personalized solutions for everybody."

View post:
Cellino is using AI and machine learning to scale production of stem cell therapies - TechCrunch