Category Archives: Machine Learning

Announcing the ORBIT dataset: Advancing real-world few-shot learning using teachable object recognition – Microsoft

Object recognition systems have made spectacular advances in recent years, but they rely on training datasets with thousands of high-quality, labelled examples per object category. Learning new objects from only a few examples could open the door to many new applications. For example, robotics manufacturing requires a system to quickly learn new parts, while assistive technologies need to be adapted to the unique needs and abilities of every individual.

Few-shot learning aims to reduce these demands by training models that can recognize completely novel objects from only a few examples, say 1 to 10. In particular, meta-learning algorithms, which learn to learn using episodic training, are a promising approach to significantly reduce the number of training examples needed to train a model. However, most research in few-shot learning has been driven by benchmark datasets that lack the high variation that applications face when deployed in the real world.

To close this gap, in partnership with City, University of London, we introduce the ORBIT dataset and few-shot benchmark for learning new objects from only a few, high-variation examples. The dataset and benchmark set a new standard for evaluating machine learning models in few-shot, high-variation learning scenarios, which will help to train models for higher performance in real-world scenarios. This work is done in collaboration with a multi-disciplinary team, including Simone Stumpf, Lida Theodorou, and Matthew Tobias Harris from City, University of London and Luisa Zintgraf from the University of Oxford. The work was funded by Microsoft AI for Accessibility. You can read more about the ORBIT research project and its goal to make AI more inclusive of people with disabilities in this AI Blog post.

You can learn more about the work in our research papers: ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition, published at the International Conference on Computer Vision (ICCV 2021), and Disability-first Dataset Creation: Lessons from Constructing a Dataset for Teachable Object Recognition with Blind and Low Vision Data Collectors, published at the 23rd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2021).

You're also invited to join Senior Researcher Daniela Massiceti for a talk about the ORBIT benchmark dataset and harnessing few-shot learning for teachable AI at the first Microsoft Research Summit. Massiceti will be presenting "Bucket of me: Using few-shot learning to realize teachable AI systems" as part of the Responsible AI track on October 19. To view the presentation on demand, register at the Research Summit event page.

The ORBIT benchmark dataset contains 3,822 videos of 486 objects recorded by 77 people who are blind or low vision using their mobile phones, a total of 2,687,934 frames. Code for loading the dataset, computing benchmark metrics, and running baselines is available at the ORBIT dataset GitHub page.

The ORBIT dataset and benchmark are inspired by a real-world application for the blind and low-vision community: teachable object recognizers. These allow a person to teach a system to recognize objects that are important to them by capturing just a few short videos of those objects. These videos are then used to train a personalized object recognizer. This would allow a person who is blind to teach the object recognizer their house keys or favorite shirt, and then recognize them with a phone. Such objects cannot be identified by typical object recognizers because they are not included in common object recognition training datasets.

Teachable object recognition is an excellent example of a few-shot, high-variation scenario. It's few-shot because people can only capture a handful of short videos to teach a new object, whereas most current machine learning models for object recognition require thousands of images to train. It's not feasible to have people submit videos at that scale, which is why few-shot learning is so important when people are teaching object recognizers from their own videos. It's high-variation because each person has only a few objects, and the videos they capture of these objects will vary in quality, blur, centrality of object, and other factors, as shown in Figure 2.

While datasets are fundamental for driving innovation in machine learning, good metrics are just as important in helping researchers evaluate their work in realistic settings. Grounded in this challenging, real-world scenario, we propose a benchmark on the ORBIT dataset. Unlike typical computer vision benchmarks, performance on the teachable object recognition benchmark is measured based on input from each user.

This means that the trained machine learning model is given just the objects and associated videos for a single user, and it is evaluated by how well it can recognize that user's objects. This process is done for each user in a set of test users. The result is a suite of metrics that more closely captures how well a teachable object recognizer would work for a single user in the real world.
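To make that evaluation protocol concrete, below is a minimal Python sketch of per-user few-shot evaluation. The helper names (model_factory, the support/query fields) are illustrative placeholders, not the actual ORBIT codebase API.

```python
# A minimal sketch of per-user few-shot evaluation in the spirit of the ORBIT
# benchmark. The helper names (model_factory, the support/query fields) are
# illustrative placeholders, not the actual ORBIT codebase API.
import numpy as np

def evaluate_per_user(model_factory, users, n_shots=5):
    """Personalize a fresh recognizer to each user's objects, then score it
    on that same user's held-out clips."""
    per_user_accuracy = []
    for user in users:
        model = model_factory()                          # fresh model per user
        model.fit(user["support_clips"][:n_shots],       # a few "teaching" videos...
                  user["support_labels"][:n_shots])
        preds = model.predict(user["query_clips"])       # ...evaluated on unseen clips
        per_user_accuracy.append(
            np.mean(np.asarray(preds) == np.asarray(user["query_labels"]))
        )
    # Report both the mean and the spread, since user-to-user variance matters here
    return float(np.mean(per_user_accuracy)), float(np.std(per_user_accuracy))
```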

Evaluations on highly cited few-shot learning models show that there is significant scope for innovation in high-variation, few-shot learning. Despite saturation of model performance on existing few-shot benchmarks, few-shot models only achieve 50-55% accuracy on the teachable object recognition benchmark. Moreover, there is a high variance between users. These results illustrate the need to make algorithms more robust to high-variation (or noisy) data.

Creating teachable object recognizers presents challenges for machine learning beyond object recognition. One example of a challenge posed by a human-centric task formulation is the need for the model to provide feedback to users about the data they provided when teaching it a new personal object. Is it enough data? Is it good-quality data? Uncertainty quantification is an area of machine learning that can contribute to solving this challenge.

Moreover, the challenges in building teachable object recognition systems go beyond machine learning algorithmic improvements, making it an area ripe for multi-disciplinary teams. Designing the feedback of the model to help users become better teachers requires a great deal of subtlety in user interaction. Supporting the adaptation of models to run on resource-constrained devices such as mobile phones is also a significant engineering task.

In summary, the ORBIT dataset and benchmark provide a rich playground to drive research in approaches that are more robust to few-shot, high-variation conditions, a step beyond existing curated vision datasets and benchmarks. In addition to the ORBIT benchmark, the dataset can be used to explore a wide set of other real-world recognition tasks. We hope that these contributions will not only have real-world impact by shaping the next generation of recognition tools for the blind and low-vision community, but also improve the robustness of computer vision systems across a broad range of other applications.

The rest is here:
Announcing the ORBIT dataset: Advancing real-world few-shot learning using teachable object recognition - Microsoft

Stop Bashing ML Hackathons Already, Because They Are Not Close To Real-World – Analytics India Magazine

For years, people have been comparing machine learning and data science hackathons with real-world work. Yet, ironically, the debates are never-ending and often ambiguous.

Take online hackathon platforms like Kaggle or MachineHack, for instance. These platforms allow users to find and publish datasets, explore and build models in a web-based data-science environment, collaborate with other data scientists and machine learning engineers, and enter competitions to solve data science and machine learning challenges across experience levels, from beginner to intermediate and expert.

Hackathon platforms have been serving as a test bed for data scientists and machine learning professionals. As per Kaggle, more than 55 per cent of data scientists have less than three years of experience, while six per cent have been using machine learning for more than a decade.

There is much more to gain than to lose by participating in hackathons. Some of the benefits include:

In this article, we will talk about the differences between hackathon platforms and real-world machine learning projects and draw a clear comparison between the two.

Before we delve deep into understanding the difference between hackathons and real-world machine learning projects, let's look at the lifecycle of a machine learning project. As explained by Steve Nouri, founder of AI4Diversity, it typically involves:

Many industry experts believe that hackathon platforms might be an amazing way to experiment and learn. Still, they align with only a single stage of the ML lifecycle, i.e., training the model. However, when a data scientist builds a model in the real world and optimises the metric, they need to consider the ROI, inference and re-training costs, and costs in general. That piece of the puzzle is completely missing when working on hackathon platforms.

"To drive the adoption of an ML model within the business stakeholders, it is important we think about interpretability as well," said Sushanth Dasari, data scientist at Trust, stating that it drives a lot of key decisions in each of the steps in the lifecycle, which is never the case with a hackathon.

"In real-world ML projects, 90 per cent of the time is spent on acquiring, cleaning and processing the data, often querying different databases and merging this data. The quality of the input data needs to be carefully assessed and checked for correctness, integrity, and consistency," said Daniele Gadler, data scientist at ONE LOGIC GmbH.

Further, he said that once the ML model has been developed and deployed, a lot of time goes into monitoring the model and re-training it based on newly ingested data (MLOps). "Instead, in hackathons, the data is already provided and is generally cleaner than in real-world projects. Furthermore, there are no concerns about real-world issues such as model stability, maintainability, deployability, etc. You can just focus on developing a super-complex, unmaintainable, huge model with the goal of obtaining the best performance on the data provided for the competition, hoping it will generalise on newly unseen data," said Gadler.

Joseph Wehbe, co-founder and CEO of DAIMLAS.com, said that time is wasted improving accuracy by 0.000001 on hackathon platforms, something you do not do in the real world, and that a hackathon focuses on only one performance metric. "However, in the real world, you focus on scalability, speed, deployment, and cost. You don't learn how to clean raw data. You don't learn understanding the business problem, deployment skills, team skills, interacting with leadership, and analysis to understand what business problem you are trying to solve," he added.

While hackathon platforms like Kaggle, MachineHack, etc., push users to explore new problems, they also help them understand the science part well enough to do real-world work.

Hackathon platforms can be as real as the real world; only the environments are different. What a gym is for athletes, hackathon platforms are for data scientists and machine learning professionals: a great place to practice and learn.

Amit Raja Naik is a senior writer at Analytics India Magazine, where he dives deep into the latest technology innovations. He is also a professional bass player.

Read the original here:
Stop Bashing ML Hackathons Already, Because They Are Not Close To Real-World - Analytics India Magazine

Machine Learning Tools for COVID-19 Patient Screening and Improved Lab Test Management to Be Discussed at the 2021 AACC Annual Scientific Meeting -…

ATLANTA, Sept. 27, 2021 /PRNewswire/ -- Scientists have created a new machine learning tool that could help healthcare workers to quickly screen and direct the flow of COVID-19 patients arriving at hospitals. Results from an evaluation of this algorithm, along with an artificial intelligence method that improves test utilization and reimbursement, were presented today at the 2021 AACC Annual Scientific Meeting & Clinical Lab Expo.


Streamlining Hospital Admission of COVID-19 Patients

It is important for clinicians to quickly diagnose COVID-19 patients when they arrive at hospitals, both to triage them and to separate them from other vulnerable patients who may be immunocompromised or have pre-existing medical conditions. This can be difficult, however, because COVID-19 shares many symptoms with other viral infections, and the most accurate PCR-based tests for COVID-19 can take several days to yield results.

A team of researchers led by Rana Zeeshan Haider, PhD, and Tahir Sultan Shamsi, FRCP, of the National Institute of Blood Disease in Karachi, Pakistan, has therefore created a machine learning algorithm to help healthcare workers efficiently screen incoming COVID-19 patients. The scientists extracted routine diagnostic and demographic data from the records of 21,672 patients presenting at hospitals and applied several statistical techniques to develop this algorithm, which is a predictive model that differentiates between COVID-19 and non-COVID-19 patients. During validation experiments, the model performed with an accuracy of up to 92.5% when tested with an independent dataset and showed a negative predictive value of up to 96.9%. The latter means that the model is particularly reliable when identifying patients who don't have COVID-19.
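As a rough illustration of what those two figures measure, here is a small Python helper relating accuracy and negative predictive value to confusion-matrix counts. The function and the example counts are illustrative, not the authors' evaluation code or data.

```python
# Illustrative only: how accuracy and negative predictive value (NPV) relate to
# confusion-matrix counts for a screening model like the one described above.
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    npv = tn / (tn + fn)          # of all "not COVID-19" calls, how many were right
    sensitivity = tp / (tp + fn)  # often reported alongside NPV for screening tests
    return {"accuracy": accuracy, "npv": npv, "sensitivity": sensitivity}

# Example with made-up counts (not the study's data):
print(screening_metrics(tp=450, fp=60, tn=920, fn=30))
```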

"The true negative labeling efficiency of our research advocates its utility as a screening test for rapid expulsion of SARS-CoV-2 from emergency departments, aiding prompt care decisions, directing patient-case flow, and fulfilling the role of a 'pre-test' concerning orderly RT-PCR testing where it is not handy," said Haider. "We propose this test to accept the challenge of critical diagnostic needs in resource constrained settings where molecular testing is not under the flag of routine testing panels."


Optimizing Lab Test Selection and Reimbursement

Of the 5 billion lab orders submitted each year, at least 20% are considered inappropriate. These inappropriate tests can lead to slower or incorrect diagnoses for patients. Such tests may also not be covered by Medicare if they weren't meant to be used for particular medical conditions or if they were ordered with the wrong ICD-10 diagnostic codes, which in turn raises health costs.

Rojeet Shrestha, PhD, of Patients Choice Laboratories in Indianapolis, set out along with colleagues to determine if an automated test management system known as the Laboratory Decision System (LDS) could help improve test ordering. The LDS scores potential tests based on medical necessity and testing indication, helping providers minimize test misutilization and select the best tests for a given medical condition.

Using LDS, the researchers re-evaluated a total of 374,423 test orders from a reference laboratory, 48,049 of which had not met the criteria for coverage under Medicare. For 96.4% of the first 10,000 test claims, the LDS ranking system recommended alternative tests that better matched the medical necessity or had a more appropriate ICD-10 code. Of these recommendations, 80.5% would also meet Medicare policies. All of this indicates that the LDS could help correct mistaken or inappropriate lab orders.
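For readers unfamiliar with this kind of system, the sketch below shows one simple way a test-ordering check could rank alternative tests against a claim's diagnosis codes. It is purely hypothetical: the coverage table, test names, and scoring rule are invented for illustration and are not how the proprietary LDS works.

```python
# Hypothetical sketch of ranking alternative lab tests by how well a claim's
# ICD-10 codes match an (assumed) coverage policy. Not the actual LDS logic.
COVERAGE_POLICY = {
    "VITAMIN_D_25_OH": {"E55.9", "M81.0"},  # illustrative test -> covered ICD-10 codes
    "HBA1C": {"E11.9", "R73.03"},
}

def rank_alternatives(claim_icd10_codes, candidate_tests):
    """Score each candidate test by how many of the claim's diagnosis codes it is
    (assumed to be) covered for, and return the covered tests, best match first."""
    scored = []
    for test in candidate_tests:
        covered_codes = COVERAGE_POLICY.get(test, set())
        scored.append((len(covered_codes & set(claim_icd10_codes)), test))
    return [test for score, test in sorted(scored, reverse=True) if score > 0]

print(rank_alternatives({"E11.9"}, ["VITAMIN_D_25_OH", "HBA1C"]))  # -> ['HBA1C']
```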

"Our study implies that use of the automated test ordering system LDS would be extremely helpful for providers, laboratories, and payers," said Shrestha. "Use of this algorithm-based testing selection and ordering database, which rates and scores potential tests for any given disease based on clinical relevance, medical necessity, and testing indication, would eventually help providers to select and order the right test and reduce over- and under-utilization of tests."

Abstract Information

AACC Annual Scientific Meeting registration is free for members of the media. Reporters can register online here: https://www.xpressreg.net/register/aacc0921/media/landing.asp

Abstract A-226: Machine-learning based decipherment of cell population data; a promising hospital front-door screening tool for COVID-19 will be presented during:

Student Poster Presentation: Monday, September 27, 9 a.m. - 5 p.m.
Scientific Poster Session: Tuesday, September 28, 9:30 a.m. - 5 p.m. (presenting author in attendance from 1:30 - 2:30 p.m.)

Abstract B-011: Use of artificial intelligence for effective test utilization and to increase reimbursement will be presented during:

Scientific Poster Session: Wednesday, September 29, 9:30 a.m. - 5 p.m. (presenting author in attendance from 1:30 - 2:30 p.m.)

All sessions will take place in the Poster Hall, which is located in Exhibit Hall C of the Georgia World Congress Center in Atlanta.

About the 2021 AACC Annual Scientific Meeting & Clinical Lab Expo

The AACC Annual Scientific Meeting offers five days packed with opportunities to learn about exciting science from September 26-30. Plenary sessions explore COVID-19 vaccines and virus evolution, research lessons learned from the pandemic, artificial intelligence in the clinic, miniaturization of diagnostic platforms, and improvements to treatments for cystic fibrosis.

At the AACC Clinical Lab Expo, more than 400 exhibitors will fill the show floor of the Georgia World Congress Center in Atlanta with displays of the latest diagnostic technology, including but not limited to COVID-19 testing, artificial intelligence, mobile health, molecular diagnostics, mass spectrometry, point-of-care, and automation.

About AACC

Dedicated to achieving better health through laboratory medicine, AACC brings together more than 50,000 clinical laboratory professionals, physicians, research scientists, and business leaders from around the world focused on clinical chemistry, molecular diagnostics, mass spectrometry, translational medicine, lab management, and other areas of progressing laboratory science. Since 1948, AACC has worked to advance the common interests of the field, providing programs that advance scientific collaboration, knowledge, expertise, and innovation. For more information, visit http://www.aacc.org.

Christine DeLong
AACC
Senior Manager, Communications & PR
(p) 202.835.8722
cdelong@aacc.org

Molly Polen
AACC
Senior Director, Communications & PR
(p) 202.420.7612
(c) 703.598.0472
mpolen@aacc.org


View original content to download multimedia: https://www.prnewswire.com/news-releases/machine-learning-tools-for-covid-19-patient-screening-and-improved-lab-test-management-to-be-discussed-at-the-2021-aacc-annual-scientific-meeting-301385691.html

SOURCE AACC

See the original post:
Machine Learning Tools for COVID-19 Patient Screening and Improved Lab Test Management to Be Discussed at the 2021 AACC Annual Scientific Meeting -...

Patient Finding Is One of the Most Common Uses of Machine Learning Within Commercial Operations in Biopharma – Business Wire

WALTHAM, Mass.--(BUSINESS WIRE)--Trinity Life Sciences, a leader in global life sciences commercialization solutions, finds that 45 percent of all machine learning use cases within biopharma companies are for finding patients. According to the new TGaS report entitled "AIML Use Case Landscape Report," within patient-finding applications, 75 percent are designed to enhance health care provider (HCP) targeting or develop HCP alerts.

"If a life sciences company is not doing patient finding alerts/targeting with machine learning, they are probably behind the industry," said Steve Laux, Vice President, Artificial Intelligence & Machine Learning at TGaS Advisors, a division of Trinity Life Sciences. "It is clear that patient finding can have a big impact. Biopharma executives may be surprised to learn that it is more feasible and practical than they think."

The report also includes separate use cases on machine learning for:

Trinity is hosting an upcoming webinar on the topic entitled "Everything You Wanted to Know About Patient Finding but Were Afraid to Ask: Focus on Commercial Applications" on September 29 at 1 p.m. ET. Key topics that will be addressed include:

With a roster of large, emerging, and precommercial life sciences companies, TGaS Advisors, a division of Trinity, provides robust comparative intelligence and collaborative network membership services.

Media interested in receiving a copy of the report should contact Elizabeth Marshall at EMarshall@trinitylifesciences.com.

About Trinity Life Sciences

Trinity Life Sciences is a trusted strategic commercialization partner, providing evidence-based solutions for the life sciences. With 25 years of experience, Trinity is committed to solving clients' most challenging problems through exceptional levels of service, powerful tools, and data-driven insights. Trinity's range of products and solutions includes industry-leading benchmarking solutions, powered by TGaS Advisors. To learn more about how Trinity is elevating life sciences and driving evidence to action, visit trinitylifesciences.com.

Read the original:
Patient Finding Is One of the Most Common Uses of Machine Learning Within Commercial Operations in Biopharma - Business Wire

Career Twist: Top Machine Learning Jobs to Apply this Weekend – Analytics Insight

Before scientists introduced artificial intelligence technologies to us, sci-fi movies did. Famous Hollywood films like 2001: A Space Odyssey portrayed machines and technology without a clear vision of what was approaching in the future. Somehow, whatever they showed in the movie turned out to be a reality today. Artificial intelligence is a broad subject today. It is accelerating every industry by constantly unleashing new developments, applications, and solutions. One of the subsets of AI, called machine learning, is advancing the capabilities of technology using mathematics and software programming. So far, machine learning is greatly admired in the business sector for its prominence and benefits. Companies that routinely engage with customers employ machine learning professionals to streamline repetitive work. For example, the Netflix recommendation system, Facebook's suggestions, and traffic alerts on Google are powered by machine learning technology. Owing to the increasing usage, the demand for top machine learning jobs has also gone up. According to a job website, machine learning jobs are ranked #1 among the top jobs in the US, citing a 344% growth rate and an average salary of US$140,000 per year. Unfortunately, landing a machine learning job is not easy. It requires special skills in programming and system design, along with other basics. Analytics Insight has listed the top machine learning jobs that you should apply for this weekend to gear up your career.

Location(s): Gurgaon, Bengaluru

Roles and Responsibilities: As a machine learning & automation expert - data science at Genpact, the candidate is expected to focus on three main areas of the company, namely consulting, solutioning, and pre-sales. He/she should translate the customer's business needs into a techno-analytic problem and work appropriately with statistical and technology teams to bring large-scale analytic solutions to fruition. They should also participate in assessments of automation and predictive analytics opportunities and engagements. The candidate is expected to actively collaborate with other team members and develop solutions in the AI, ML, and BI areas to stitch together solutions for business problems.

Qualifications:

Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: IBM expects the Data Engineer: Machine Learning to help transform the company's clients' data into tangible business value by analyzing information, communicating outcomes, and collaborating on product development. The candidate should have the data expertise to manipulate and integrate big data and different data types such as videos, images, documents, and other structured data. He/she should define problems and opportunities in a complex business area and develop suitable advanced analytics products.

Qualifications:

Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: As a machine learning compiler engineer at Qualcomm Technologies, the candidate should research, develop, design, enhance, and implement the different components that address the machine learning-based performance and code-size needs of customer workloads and benchmarks. He/she should analyze software requirements, determine the feasibility of design within the given constraints, consult with architecture and HW engineers, and implement software solutions best suited for Qualcomm's SoCs. They should also analyze and identify system-level integration issues and interface with the software development, integration, and test teams.

Qualifications:

Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: The machine learning engineer at Intel is expected to profile various algorithms and platforms using simulation or platform systems for various internal and external XPUs and architectures. He/she should be good at performance and functional model development in Python and should influence system and IP architecture development through profiling and modeling results. They should also constantly validate performance on pre-silicon, FPGA, and silicon.

Qualifications:

Apply here for the job.

Location(s): Bengaluru

Roles and Responsibilities: By joining as a Senior Machine Learning Engineer at Microsoft, the candidate will become part of an interactive and fast-paced environment where they can drive impact and innovation and apply state-of-the-art techniques to solve problems on a global scale. He/she should work closely with engineering, product management, analytics and transformation, and data science teams from across MCDS to deliver outstanding value to stakeholders and the company's products. They should build and enhance the frameworks and tools that Microsoft uses in its MLOps platforms. The candidate should build out services for impact assessment, back-testing, model performance monitoring, data drift, model drift, concept drift, explainability, experimentation, performance testing, reproducibility checks, etc.

Qualifications:

Apply here for the job.

See the rest here:
Career Twist: Top Machine Learning Jobs to Apply this Weekend - Analytics Insight

Machine Learning Uncovers Genes of Importance in Agriculture and Medicine – NYU News

Machine learning can pinpoint genes of importance that help crops to grow with less fertilizer, according to a new study published in Nature Communications. It can also predict additional traits in plants and disease outcomes in animals, illustrating its applications beyond agriculture.

Using genomic data to predict outcomes in agriculture and medicine is both a promise and a challenge for systems biology. Researchers have been working to determine how best to use the vast amount of genomic data available to predict how organisms respond to changes in nutrition, toxins, and pathogen exposure, which in turn would inform crop improvement, disease prognosis, epidemiology, and public health. However, accurately predicting such complex outcomes in agriculture and medicine from genome-scale information remains a significant challenge.

In the Nature Communications study, NYU researchers and collaborators in the U.S. and Taiwan tackled this challenge using machine learning, a type of artificial intelligence used to detect patterns in data.

"We show that focusing on genes whose expression patterns are evolutionarily conserved across species enhances our ability to learn and predict genes of importance to growth performance for staple crops, as well as disease outcomes in animals," explained Gloria Coruzzi, Carroll & Milton Petrie Professor in NYU's Department of Biology and Center for Genomics and Systems Biology and the paper's senior author.

"Our approach exploits the natural variation of genome-wide expression and related phenotypes within or across species," added Chia-Yi Cheng of NYU's Center for Genomics and Systems Biology and National Taiwan University, the lead author of this study. "We show that paring down our genomic input to genes whose expression patterns are conserved within and across species is a biologically principled way to reduce dimensionality of the genomic data, which significantly improves the ability of our machine learning models to identify which genes are important to a trait."

As a proof of concept, the researchers demonstrated that focusing on genes whose responsiveness to nitrogen is evolutionarily conserved between two diverse plant species (Arabidopsis, a small flowering plant widely used as a model organism in plant biology, and varieties of corn, America's largest crop) significantly improved the ability of machine learning models to predict genes of importance for how efficiently plants use nitrogen. Nitrogen is a crucial nutrient for plants and the main component of fertilizer; crops that use nitrogen more efficiently grow better and require less fertilizer, which has economic and environmental benefits.
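To give a concrete sense of the approach, here is a minimal Python sketch of the two steps in the paper's framing: keep only orthologous genes whose expression response is conserved across species, then train a model and rank genes by importance. The data structures, correlation threshold, and choice of random forest are illustrative assumptions, not the study's actual pipeline.

```python
# A minimal sketch of the idea described above: keep only ortholog pairs whose
# expression response is conserved across species, then rank genes by importance
# for a trait such as nitrogen use efficiency. The data structures, threshold,
# and random-forest choice are assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def conserved_genes(expr_species_a, expr_species_b, ortholog_pairs, threshold=0.6):
    """Return genes from species A whose response profiles correlate with their
    orthologs in species B above a chosen threshold."""
    kept = []
    for gene_a, gene_b in ortholog_pairs:
        r = np.corrcoef(expr_species_a[gene_a], expr_species_b[gene_b])[0, 1]
        if r >= threshold:
            kept.append(gene_a)
    return kept

def rank_genes_by_importance(X, y, gene_names):
    """Fit a model on the reduced gene set and rank genes for the measured trait."""
    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1]
    return [gene_names[i] for i in order]
```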

The researchers conducted experiments that validated eight master transcription factors as genes of importance to nitrogen use efficiency. They showed that altered gene expression in Arabidopsis or corn could increase plant growth in low nitrogen soils, which they tested both in the lab at NYU and in cornfields at the University of Illinois.

More here:
Machine Learning Uncovers Genes of Importance in Agriculture and Medicine - NYU News

Discovery Education Collaborates With AWS to Enhance Recommendation Engine – Yahoo Finance


SILVER SPRING, Md., September 27, 2021 /3BL Media/ Discovery Education, a worldwide edtech leader supporting learning wherever it takes place, today announced that it has enhanced its K-12 learning platform with Amazon Web Services (AWS) machine learning capabilities. The pioneering use of machine learning within the Discovery Education platform helps educators spend less time searching for digital resources and more time teaching.

Connecting educators to a vast collection of high-quality, standards-aligned content, ready-to-use digital lessons, intuitive quiz and activity creation tools, and professional learning resources, Discovery Education's award-winning learning platform facilitates engaging, daily instruction in any learning environment. Several months of planning and deep collaboration with AWS enabled Discovery Education to innovatively integrate Amazon Personalize technology into the "Just For You" area of its K-12 platform. The "Just For You" row connects educators to a unique, personalized set of resources based on the grade level taught, preferences, and assets used in the past.
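For readers curious how a backend typically retrieves such recommendations, below is a hedged Python sketch using the standard Amazon Personalize runtime API via boto3. The campaign ARN, region, and user ID are placeholders, and the assumption that Discovery Education's integration resembles this at all is purely illustrative.

```python
# A hedged sketch of fetching personalized recommendations from an Amazon
# Personalize campaign with boto3. The campaign ARN, region, and user ID are
# placeholders; this is not Discovery Education's actual integration code.
import boto3

personalize_runtime = boto3.client("personalize-runtime", region_name="us-east-1")

def just_for_you(user_id: str, num_results: int = 10):
    """Return recommended item IDs for one educator from a trained campaign."""
    response = personalize_runtime.get_recommendations(
        campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/example",  # placeholder
        userId=user_id,
        numResults=num_results,
    )
    return [item["itemId"] for item in response["itemList"]]

# Example call (requires valid AWS credentials and a deployed campaign):
# print(just_for_you("educator-1234"))
```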

As the ability to deliver more sophisticated digital experiences has evolved over time, so has the expectation and demand from teachers seeking a more personalized user experience, similar to what they receive through interactions with brands in the retail, media, and entertainment spaces. Today's tech-savvy teachers expect real-time, curated experiences across the digital resources they use daily, and the integration of Amazon Personalize technology into Discovery Education's digital resources for the first time delivers that experience to users in the K-12 education space.

"For some time, educators have desired more resources to help personalize teaching and learning. ML technology is already being used to curate our entertainment experiences, help with workforce productivity, and more, and its exciting to see this innovation is being integrated into classrooms, said Alec Chalmers, Director, EdTech and GovTech Markets at AWS. Amazon Personalize creates high quality recommendations that better respond to the specific needs and preferences of todays learners, which ultimately improves engagement in teaching and learning. AWS is proud to be collaborating with Discovery Education to support the educators and students they serve.


Discovery Education's team is continuously adding, contextualizing, and organizing exciting new content and timely, relevant resources to the platform in response to current events and the ever-evolving needs of educators. These resources, sourced from trusted partners, are aligned to state and national standards and help educators bring the outside world into teaching and learning every day. These resources are the centerpiece of the "Just For You" row, which adapts and changes with user behavior and preference over time.

The K-12 learning platform is designed to work within school systems' existing infrastructure and workflows and provides safe, secure, simple access methods for educators and students. Through expanded, lasting partnerships with Brightspace, Clever, and others, integrating Discovery Education's K-12 learning platform into existing IT architecture is easier than ever.

"Continuous improvement is a core value at Discovery Education, and as such, we are constantly seeking innovative new ways to improve our resources and save educators time," said Pete Weir, Discovery Education's Chief Product Officer. "Integrating AWS's robust machine learning technology into our K-12 platform's recommendation engine helps improve educators' productivity by providing the digital content they want and need even faster than before. We are incredibly proud to be collaborating with AWS and pioneering how education technology can personalize teaching and learning. The success of this collaboration to date encourages my team to look for even more places within our services to integrate machine learning technology and improve our services' ability to adapt to our users."

For more information about Discovery Education's digital resources and professional learning services, visit http://www.discoveryeducation.com and stay connected with Discovery Education on social media through Twitter and LinkedIn.

###

About Discovery Education

Discovery Education is the worldwide edtech leader whose state-of-the-art digital platform supports learning wherever it takes place. Through its award-winning multimedia content, instructional supports, and innovative classroom tools, Discovery Education helps educators deliver equitable learning experiences engaging all students and supporting higher academic achievement on a global scale. Discovery Education serves approximately 4.5 million educators and 45 million students worldwide, and its resources are accessed in over 140 countries and territories. Inspired by the global media company Discovery, Inc., Discovery Education partners with districts, states, and trusted organizations to empower teachers with leading edtech solutions that support the success of all learners. Explore the future of education at http://www.discoveryeducation.com.

Stephen Wakefield
Discovery Education
Phone: 202-316-6615
swakefield@discoveryed.com

View additional multimedia and more ESG storytelling from Discovery Education on 3blmedia.com

View source version on newsdirect.com: https://newsdirect.com/news/discovery-education-collaborates-with-aws-to-enhance-recommendation-engine-883171463

Continue reading here:
Discovery Education Collaborates With AWS to Enhance Recommendation Engine - Yahoo Finance

Explainable AI Is the Future of AI: Here Is Why – CMSWire


Artificial intelligence is going mainstream. If you're using Google Docs, Ink for All or any number of digital tools, AI is being baked in. AI is already making decisions in the workplace, around hiring, customer service and more. However, a recurring issue with AI is that it can be a bit of a "black box" or mystery as to how it arrives at its decisions. Enter explainable AI.

Explainable Artificial Intelligence, or XAI, is similar to a normal AI application except that the processes and results of an XAI algorithm can be explained so that they are understandable by humans. The complex nature of artificial intelligence means that AI is making decisions in real time based on the insights it has discovered in the data it has been fed. When we do not fully understand how AI is making these decisions, we are not able to fully optimize the AI application to be all that it is capable of. XAI enables people to understand how AI and Machine Learning (ML) are being used to make decisions and predictions and to generate insights. Explainable AI allows brands to be transparent in their use of AI applications, which increases user trust and the overall acceptance of AI.

There is a valid need for XAI if AI is going to be used across industries. According to a report by FICO, 65% of surveyed employees could not explain how AI model decisions or predictions are determined. The benefits of XAI are beginning to be well recognized, and not just by scientists and data engineers. The European Union's draft AI regulations specify XAI as a prerequisite for the eventual normalization of machine learning in society. Standardization organizations including the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) also recognize the importance of XAI in relation to the acceptance and trust of AI in the future.

Philip Pilgerstorfer, data scientist and XAI specialist at QuantumBlack, has indicated that the benefits of XAI include:

This is because the majority of AI with ML operates in what is referred to as a black box, that is, in an area that is unable to provide any discernible insights as to how it comes to make decisions. Many AI/ML applications are moderately benign decision engines that are used with online retail recommender systems, so it is not absolutely necessary to ensure transparency or explainability. For other, more risky decision processes, such as medical diagnoses in healthcare, investment decisions in the financial industry, and safety-critical systems in autonomous automobiles, the stakes are much higher. As such, the AI used in those systems should be explainable, transparent, and understandable in order to be trusted, reliable, and consistent.

When brands are better able to understand potential weaknesses and failures in an application, they are better prepared to maximize performance and improve the AI app. Explainable AI enables brands to more easily detect flaws in the data model, as well as biases in the data itself. It can also be used for improving data models, verifying predictions, and gaining additional insights into what is working, and what is not.
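As an illustration of what such explanations can look like in practice, here is a small Python sketch using SHAP feature attributions on a toy loan-approval classifier. The features, data, and model are invented for illustration and are not tied to any system mentioned in this article.

```python
# A toy Python sketch of one common XAI technique, SHAP feature attribution,
# applied to an invented loan-approval classifier. The features, labels, and
# model are illustrative and unrelated to any system mentioned in this article.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

X = pd.DataFrame({
    "income":         [42_000, 87_000, 55_000, 120_000, 39_000, 95_000],
    "debt_ratio":     [0.45, 0.20, 0.35, 0.10, 0.50, 0.15],
    "years_employed": [1, 6, 3, 12, 2, 8],
})
y = [0, 1, 0, 1, 0, 1]  # toy labels: 0 = loan rejected, 1 = approved

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Per-applicant attributions: positive values push toward approval,
# negative values push toward rejection, feature by feature.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X)
print(pd.DataFrame(attributions, columns=X.columns))
```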

"Explainable AI has the benefits of allowing us to understand what has gone wrong and where it has gone wrong in an AI pipeline when the whole AI system makes an erroneous classification or prediction," said Marios Savvides, Bossa Nova Robotics Professor of Artificial Intelligence, Electrical and Computer Engineering and Director of the CyLab Biometrics Center at Carnegie Mellon University. "These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze and more difficult to pinpoint exactly where and why an error has occurred."

Many businesses today use AI/ML applications to automate the decision-making process, as well as to gain analytical insights. Data models can be trained so that they are able to predict sales based on variable data, while an explainable AI model would enable a brand to increase revenue by determining the true drivers of sales.

Kevin Hall, CTO and co-founder of Ripcord, an organization that provides robotics, AI and machine learning solutions, explained that although AI-enabled technologies have proliferated throughout enterprise businesses, complexities still prevent widespread adoption, largely because AI remains mysterious and complicated for most people. "In the case of intelligent document processing (IDP), machine learning (ML) is an incredibly powerful technology that enables higher accuracy and increased automation for document-based business processes around the world," said Hall. "Yet the performance and continuous improvement of these models is often limited by a complexity barrier between technology platforms and critical knowledge workers or end-users. By making the results of ML models more easily understood, Explainable AI will allow for the right stakeholders to more directly interact with and improve the performance of business processes."

Related Article: What Is Explainable AI (XAI)?

It's a fact that unconscious or algorithmic biases are built into AI applications. That's because no matter how advanced or smart the AI app is, or whether it uses ML or deep learning, it was developed by human beings, each of whom has their own unconscious biases, and a biased data set was used to train the AI algorithm. "Explainable AI systems can be architected in a way to minimize bias dependencies on different types of data, which is one of the leading issues when complete black box solutions introduce biases and make errors," explained Professor Savvides.

A recent CMSWire article on unconscious biases reflected on Amazon's failed use of AI for job application vetting. Although the shopping giant did not use prejudiced algorithms on purpose, its data set looked at hiring trends over the last decade and suggested the hiring of similar job applicants for positions with the company. Unfortunately, the data revealed that the majority of those who were hired were white males, a fact that itself reveals the biases within the IT industry. Eventually, Amazon gave up on the use of AI for its hiring practices and went back to its previous approach, relying upon human decisioning. Many other biases can sneak into AI applications, including racial bias, name bias, beauty bias, age bias, and affinity bias.

Fortunately, XAI can be used to eliminate unconscious biases within AI data sets. Several AI organizations, including OpenAI and the Future of Life Institute, are working with other businesses to ensure that AI applications are ethical and equitable for all of humanity.

Being able to explain why a person was not selected for a loan or a job will go a long way toward improving public trust in AI algorithms and machine learning processes. "Whether these models are clearly detailing the reason why a loan was rejected or why an invoice was flagged for fraud review, the ability to explain the model results will greatly improve the quality and efficiency of many document processes, which will lead to cost savings and greater customer satisfaction," said Hall.

Related Article: Ethics and Transparency: How We Can Reach Trusted AI

Along with the unconscious biases we previously discussed, XAI has other challenges to conquer, including:

Professor Savvides said that XAI systems need architecting into different sub-task modules where sub-module performance can be analyzed. The challenge is that these different AI/ML components need compute resources and require a data pipeline, so in general they can be more costly than an end-to-end system from a computational perspective.

There is also the issue of additional errors for an XAI algorithm, but there is a tradeoff because errors in an XAI algorithm are easier to track down. "Additionally, there may be cases where a black-box approach may give fewer performance errors than an XAI system," he said. "However, there is no insight into the failure of the traditional AI approach other than trying to collect these cases and re-train, whereas the XAI system may be able to pinpoint the root cause of the error."

As AI applications become smarter and are used in more industries to solve bigger and bigger problems, the need for a human element in AI becomes more vital. XAI can help do just that.

"The next frontier of AI is the growth and improvements that will happen in Explainable AI technologies. They will become more agile, flexible, and intelligent when deployed across a variety of new industries. XAI is becoming more human-centric in its coding and design," reflected AJ Abdallat, CEO of Beyond Limits, an enterprise AI software solutions provider. "We've moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems, those problems without historical data or references. Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit their knowledge base even after it's been deployed. As it learns by interacting with more problems, data, and domain experts, the systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless."

Related Article: Make Responsible AI Part of Your Company's DNA

Artificial Intelligence is being used across many industries to provide everything from personalization, automation, financial decisioning, recommendations, and healthcare. For AI to be trusted and accepted, people must be able to understand how AI works and why it comes to make the decisions it makes. XAI represents the evolution of AI, and offers opportunities for industries to create AI applications that are trusted, transparent, unbiased, and justified.

Read the rest here:
Explainable AI Is the Future of AI: Here Is Why - CMSWire

YC-backed Malloc wants to take the sting out of mobile spyware – TechCrunch

Mobile spyware is one of the most invasive and targeted kinds of unregulated surveillance, since it can be used to track where you go, who you see and what you talk about. And because of its stealthy nature, mobile spyware can be nearly impossible to detect.

But now one Y Combinator-backed startup is building an app with the aim of helping anyone identify potential mobile spyware on their phones.

Malloc, a Cyprus-based early-stage company, made its debut with Antistalker, an app that monitors the sensors and apps running on a phone (initially for Android only) to detect if the microphone or camera is quietly activated or data is transmitted without the user's knowledge. That's often a hallmark of consumer-grade spyware, which can also steal messages, photos, web browsing history and real-time location data from a victim's phone without their permission.

The rising threat of spyware has prompted both Apple and Google to introduce indicators when a device's microphone or camera is used. But some of the more elusive and more capable spyware, the kind typically used by governments and nation states, can slip past the hardened defenses built into iOS and Android.

That's where Malloc says Antistalker comes in. Malloc's co-founders Maria Terzi, Artemis Kontou and Liza Charalambous built the app around a machine learning (ML) model, which allows the app to detect and block device activity that could be construed as spyware recording or sending data.

Malloc co-founders Liza Charalambous (left), Maria Terzi (middle), Artemis Kontou (right). Image Credits: Malloc/supplied

Terzi, who specializes in ML, told TechCrunch that the startup trained its ML model using known stalkerware apps to help simulate real-world surveillance. Machine learning helps to improve the apps ability to detect a broad range of new and previously unknown threats over time, rather than relying on the more traditional methods of scanning for signatures of known spyware apps.

"We already know applications that are spyware. Why don't we use their behavior to train a machine learning model that will then be able to recognize new spyware?" Terzi told TechCrunch.

The ML model runs on the device, which is more privacy-preserving than sending data to the cloud. Malloc said it collects some anonymized data to improve the ML model over time, helping the app detect more threats as they emerge on users' devices.

The app also looks for anomalous app activity, like bursts of data sent by apps that haven't been used for days, and allows the user to see which apps have accessed the microphone and camera and when.
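As a rough illustration of this kind of behavioral detection (and not Malloc's actual model or features), here is a minimal Python sketch that flags anomalous app activity from a few simple usage counters.

```python
# An illustrative sketch (not Malloc's actual model or features) of flagging
# anomalous app behavior from a few simple on-device usage counters.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_uploaded_last_hour, mic_seconds_last_hour, hours_since_last_use]
normal_activity = np.array([
    [1_200, 0, 0.1],
    [5_000, 30, 0.5],
    [800, 0, 2.0],
    [15_000, 120, 0.2],
    [2_500, 10, 1.0],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)

# A big upload from an app nobody has opened in four days looks suspicious.
suspicious = np.array([[40_000_000, 600, 96.0]])
print(detector.predict(suspicious))  # -1 means "anomalous", 1 means "normal"
```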

It's a bet that's already catching the eye of investors, with the startup securing close to $2 million from Y Combinator and the Urban Innovation Fund.

Terzi said the company has grown to more than 80,000 monthly active users since it launched earlier this year, and it plans an enterprise offering to help companies protect their employees from surveillance threats. The company is also planning to launch an iOS app in the near future.

Excerpt from:
YC-backed Malloc wants to take the sting out of mobile spyware - TechCrunch

Cellino is using AI and machine learning to scale production of stem cell therapies – TechCrunch

Cellino, a company developing a platform to automate stem cell production, presented today at TechCrunch Disrupt 2021's Startup Battlefield to detail how its system, which combines AI technology, machine learning, hardware, software and, yes, lasers, could eventually democratize access to cell therapies. It aims to bring down the costs associated with the manufacturing of human cells while also increasing yields.

Founded by a team whose backgrounds include physics, stem cell biology and machine learning, Cellino operates in the regenerative medicine industry. This space is currently undergoing a revolution, where new developments in gene and cell therapies could lead to breakthrough cures for a number of leading diseases. For example, personalized human retinal cells could be transplanted to halt or reverse age-related macular degeneration, which can cause blindness. But today, such cell therapies are out of reach for most people because the process of cell production hasn't been automated or made scalable and efficient.

Instead, the human cells being used now in these clinical trials are mostly made by hand by scientists who look at cells and evaluate, using their many years of training and expertise, which cells are low quality and need to be removed. They then scrape away those unwanted cells with a pipette tip. The process, as you can imagine, is time-consuming and produces only a small yield. In this manual process, you'd see a yield of about 10% to 20% of cells that would be able to pass the final quality assurance tests required for human transplant.

Cellino is working to improve this process in order to produce more cells of higher quality. Its goal is to push the yield to at least 80% over the next three years.

To do so, Cellino's system is automating all the human steps in the production process using machine learning techniques.

To identify which cells are high quality or low quality, the company is collecting large training data sets where it is teaching algorithms to make determinations about cell quality based on a variety of factors. This includes the cell morphology, meaning the shape, size and density of cells. Fluorescence-based surface markers can also be used to identify other factors of importance to the line of cells being produced, like the location of proteins on the cell, for example.

By using machine learning and AI to do the identification based on standard and well-accepted biological assays used by the FDA, the system could move away from human annotation and the variability it introduces into the process of human cell production.
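For intuition only, the following Python sketch shows what a simple classifier over morphology and fluorescence features might look like. The feature names, toy data, and random forest are assumptions for illustration; Cellino's actual models and training data are not public.

```python
# For intuition only: a toy classifier over morphology and fluorescence features.
# Feature names, data, and the model choice are assumptions for illustration;
# Cellino's actual models and training data are not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [area_um2, circularity, local_density, marker_intensity]
X_train = np.array([
    [110.0, 0.91, 0.30, 0.82],
    [450.0, 0.40, 0.85, 0.10],
    [130.0, 0.88, 0.35, 0.78],
    [500.0, 0.35, 0.90, 0.05],
    [125.0, 0.93, 0.28, 0.80],
    [480.0, 0.42, 0.88, 0.12],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = keep, 0 = remove with the laser

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

new_cells = np.array([[120.0, 0.90, 0.32, 0.80]])
print(clf.predict(new_cells))  # predicted label for each newly imaged cell
```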

After Cellino's software has identified which low-quality cells need to be removed, it then uses a laser to target them. The laser creates cavitation bubbles large enough to kill the cell, but it does so in a highly localized way that does not harm the neighboring cells, as the thermal heat does not dissipate to nearby cells. This is also a more precise technique than the manual method. (Cellino's system has a 5-micron resolution, while cells are 10-15 microns in size.) This results in a throughput of about 5,000 cells per minute, which is highly efficient compared with manual techniques.

Over time, this automation and efficiency could bring the cost down from nearly a million dollars per patient, which is what clinicians have to pay today to run a clinical trial, when outsourcing cell production. Cellino aims to get the cost down into the tens of thousands of dollars over time.

By scaling cell production, personalized cell therapies could also help a broader range of patients compared with other techniques relying on banks of stem cells. These aren't always genetically diverse samples, leaving smaller ethnic groups out of the progress being made in this space. Banked cells also require recipients to take immunosuppressants, as the cells aren't your own and the body may reject them.

The use of lasers is an idea developed by Cellino co-founder and CEO Nabiha Saklayen, who patented an invention in cellular laser editing while earning her PhD in Physics at Harvard. She was encouraged to turn the technology into a startup by her collaborators, who included leading biologists like George Church and David Scadden.

"Not all scientists become entrepreneurs, and I became an entrepreneur because I had an amazing support network around me," notes Saklayen of the push to join the startup space. She immediately recruited Marinna Madrid, an applied physicist she had worked with for years on the co-invention of laser-based intracellular delivery techniques, as her other co-founder. To gain more mentorship about growing a startup, Saklayen turned to the Boston-area startup ecosystem.

"I didn't know anything about startups. I wanted to work with people who knew how to build companies, how to commercialize technology, how to build instruments, and the Boston ecosystem is fantastic in that way. So I started connecting with lots of people in those early weeks, anybody that was in the biotech realm or Harvard Business School," Saklayen explains.

This led her to Cellino co-founder and CTO Mattias Wagner, who had built companies before in the optics and instrumentation space.

"That's how the founding team came together. It was very complementary because Marinna and I were co-inventors of the original technology that inspired the platform and Mattias brought this tremendous background in semiconductors and optical instrumentation," says Saklayen.

Since its 2017 founding, Cellino has gone on to raise $16 million in seed funding in a round co-led by The Engine and Khosla Ventures, with participation from Humboldt Fund and 8VC.

The company is now collaborating with the NIH on compatibility studies. Currently, that means Cellino is making stem cells on its system, which it is then comparing with the ones made at the NIH that are already being tested in humans for personalized cell therapies for retinal diseases. Cellino later hopes to use its system to address areas like Parkinson's, muscle disorders and skin grafts, among others.

The company wanted to present at TechCrunch Disrupt to share more about what it's building and to source new talent.

"For me, it's about talking about this idea around democratization and industrialization of cell therapies. I really want to get that message out because that is the movement we need to drive over the next decade for all of these cell therapies to be accessible to all patients," says Saklayen.

"Cellino's angle is also very unique in the sense that, because we have this automated system to manufacture human cells, our system could make cells for every human being in this country, in the world," she continues. "And there are a lot of cell therapy approaches that are looking to use off-the-shelf cells and off-the-shelf therapies, which will only work for certain parts of the population. As the U.S. becomes more diverse, ethnically, we need personalized solutions for everybody."

View post:
Cellino is using AI and machine learning to scale production of stem cell therapies - TechCrunch