Category Archives: Machine Learning

Moderna Announced Partnership With Amazon Web Services for Their Analytics and Machine Learning Services – Science Times

The $29 billion biotech company Moderna announced on Wednesday, August 5, that it is partnering with Amazon Web Services, making AWS its preferred cloud provider.

Moderna is currently considered the leading COVID-19 vaccine developer, as it became the first company to reach the third phase of vaccine trials in late July.

Vaccine development can take years of research and lab testing before a vaccine can be administered to people. As one of the leading companies in the race for a COVID-19 vaccine, Moderna last week began dosing 30,000 people with its vaccine candidate, the first to reach phase 3 of testing in the United States.

At present, Moderna uses AWS to run its everyday operations in accounting and inventory management, and also to power its production facility, robotic tools, and engineering systems. According to the company's press release, this allows it to achieve greater efficiency and visibility across its operations.

Moderna CEO Stéphane Bancel said that with AWS, the company's researchers can quickly design and perform experiments and uncover novel insights, producing life-saving treatments faster.

Modernizing IT infrastructure through artificial intelligence is one approach biotech companies such as Moderna are exploring to help them in the race to develop new medicines and treatments.

The race for a COVID-19 vaccine has made biotechnology a sought-after sector. Like AWS, rival Microsoft Azure has recently inked a major cloud and artificial intelligence deal with drugmaker Novartis.

According to biotech analyst Michael Yee, the vaccine test results could be made public in October.

Moderna Therapeutics' co-founder and chairman, Dr. Noubar Afeyan, said that the biotech company is the first US firm to enter Phase 3 of a clinical trial for their candidate COVID-19 vaccine.

The blinded trial will include 30,000 volunteers, half of whom will receive Moderna's vaccine candidate and half a saline placebo. Volunteers must be 18 years old or older.

Afeyan said that the Food and Drug Administration's authorization would depend on how quickly some 150 cases of infection accumulate among trial participants. If the trial is successful, those who received the vaccine should show a disproportionately lower number of cases than those who received the placebo.

Ultimately, the FDA must ensure that the vaccine meets all the necessary safety and efficacy measures. The agency mandates at least 50% efficacy for any vaccine before considering it for authorization.

Moreover, Moderna hopes to have authorization from the FDA by the last quarter of 2020. Afeyan said that they expect to have 500 million to 1 billion doses of their vaccine ready for distribution once they get the FDA authorization.

Read the original post:
Moderna Announced Partnership With Amazon Web Services for Their Analytics and Machine Learning Services - Science Times

Surprisingly Recent Galaxy Discovered Using Machine Learning May Be the Last Generation Galaxy in the Long Cosmic History – SciTechDaily

HSC J1631+4426 broke the record for the lowest oxygen abundance. Credit: NAOJ/Kojima et al.

Breaking the lowest oxygen abundance record.

New results, achieved by combining big data captured by the Subaru Telescope with the power of machine learning, have revealed a galaxy with an extremely low oxygen abundance of 1.6% of the solar value, breaking the previous record for the lowest oxygen abundance. The measured oxygen abundance suggests that most of the stars in this galaxy formed very recently.

To understand galaxy evolution, astronomers need to study galaxies in various stages of formation and evolution. Most of the galaxies in the modern Universe are mature galaxies, but standard cosmology predicts that there may still be a few galaxies in the early formation stage in the modern Universe. Because these early-stage galaxies are rare, an international research team searched for them in wide-field imaging data taken with the Subaru Telescope. "To find the very faint, rare galaxies, deep, wide-field data taken with the Subaru Telescope was indispensable," emphasizes Dr. Takashi Kojima, the leader of the team.

However, it was difficult to find galaxies in the early stage of galaxy formation from the data because the wide-field data includes as many as 40 million objects. So the research team developed a new machine learning method to find such galaxies from the vast amount of data. They had a computer repeatedly learn the galaxy colors expected from theoretical models, and then let the computer select only galaxies in the early stage of galaxy formation.
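The article does not specify which classifier the team used; a minimal sketch of the idea (learn the colors predicted by theoretical models, then keep only objects whose colors match the early-stage templates) might look like the following, where every template and catalog value is invented for illustration:

```python
# Toy sketch: select galaxy candidates whose observed colors best match
# theoretical templates of early-stage (metal-poor) galaxies.
# All template and catalog numbers below are invented, not from the survey.

def nearest_template(colors, templates):
    """Return the label of the template closest (Euclidean) to `colors`."""
    best_label, best_dist = None, float("inf")
    for label, tmpl in templates.items():
        dist = sum((c - t) ** 2 for c, t in zip(colors, tmpl)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical (g-r, r-i) colors predicted by models for two galaxy classes.
templates = {
    "early_stage": (0.1, -0.4),   # strong emission lines skew the colors
    "mature":      (0.6, 0.3),
}

# A toy imaging catalog: object id -> measured colors.
catalog = {
    "obj_001": (0.12, -0.35),
    "obj_002": (0.58, 0.28),
    "obj_003": (0.05, -0.45),
}

candidates = [oid for oid, colors in catalog.items()
              if nearest_template(colors, templates) == "early_stage"]
print(candidates)  # -> ['obj_001', 'obj_003']
```

The real pipeline would train on far richer model grids and score 40 million objects, but the selection logic reduces to the same template-matching idea.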

The research team then performed follow-up observations to determine the elemental abundance ratios of 4 of the 27 candidates selected by the computer. They found that one galaxy (HSC J1631+4426), located 430 million light-years away in the constellation Hercules, has an oxygen abundance only 1.6 percent of that of the Sun. This is the lowest value ever reported for a galaxy. The measured oxygen abundance suggests that most of the stars in this galaxy formed very recently. In other words, this galaxy is in an early stage of galaxy evolution.

"What is surprising is that the stellar mass of the HSC J1631+4426 galaxy is very small, 0.8 million solar masses. This stellar mass is only about 1/100,000 of our Milky Way galaxy, and comparable to the mass of a star cluster in our Milky Way," said Prof. Ouchi of the National Astronomical Observatory of Japan and the University of Tokyo. "This small mass also supports the primordial nature of the HSC J1631+4426 galaxy."

The research team sees two interesting implications in this discovery. First, it is evidence that a galaxy at such an early stage of galaxy evolution exists today. In the framework of standard cosmology, new galaxies are thought to be born in the present universe, and the discovery of HSC J1631+4426 backs up this picture. Second, we may be witnessing a newborn galaxy at the latest epoch of cosmic history. Standard cosmology suggests that the matter density rapidly drops in our universe, whose expansion is accelerating. In such a rapidly expanding future universe, matter no longer assembles under gravity, and new galaxies won't be born. HSC J1631+4426 may be the last-generation galaxy in the long cosmic history.

Read this article:
Surprisingly Recent Galaxy Discovered Using Machine Learning May Be the Last Generation Galaxy in the Long Cosmic History - SciTechDaily

STMicroelectronics Releases STM32 Condition-Monitoring Function Pack Leveraging Tools from Cartesiam for Simplified Machine Learning – ELE Times

STMicroelectronics has released a free STM32 software function pack that lets users quickly build, train, and deploy intelligent edge devices for industrial condition monitoring using a microcontroller Discovery kit.

Developed in conjunction with machine-learning expert and ST Authorized Partner Cartesiam, the FP-AI-NANOEDG1 software pack contains all the necessary drivers, middleware, documentation, and sample code to capture sensor data and to integrate and run Cartesiam's NanoEdge libraries. Users without specialist AI skills can quickly create and export custom machine-learning libraries for their applications using Cartesiam's NanoEdge AI Studio tool running on a Windows 10 or Ubuntu PC. The function pack enables complete prototyping and validation free of charge on STM32 development boards before deploying on customer hardware, where standard Cartesiam fees apply.

The straightforward methodology established with Cartesiam uses industrial-grade sensors on board a Discovery kit such as the STM32L562E-DK to capture vibration data from the monitored equipment, both in normal operating modes and under induced abnormal conditions. Software to configure and acquire sensor data is included in the function pack. NanoEdge AI Studio analyzes the benchmark data and selects pre-compiled algorithms from over 500 million possible combinations to create optimized libraries for training and inference. The function-pack software provides stubs for the libraries that can easily be replaced for simple embedding in the application. Once deployed, the device can learn the normal pattern of the operating mode locally, during the initial installation phase as well as over the lifetime of the equipment, as the function pack permits switching between learning and monitoring modes.

By using the Discovery kit to acquire data and to generate, train, and monitor the solution, leveraging free tools and software and the support of the STM32 ecosystem, developers can quickly create a proof-of-concept model at low cost and easily port the application to other STM32 microcontrollers. As an intelligent edge device, unlike alternatives that rely on AI in the cloud, the solution gives equipment owners greater control over potentially sensitive information by processing machine data on the local device.

The FP-AI-NANOEDG1 function pack is available now at www.st.com, free of charge.

The STM32L562E-DK Discovery kit contains an STM32L562QEI6QU ultra-low-power microcontroller, an iNEMO 3D accelerometer and 3D gyroscope, two MEMS microphones, a 240x240 color TFT-LCD module, and an on-board STLINK-V3E debugger/programmer. The budgetary price for the Discovery kit is $76.00, and it is available from www.st.com or from distributors.

For further information, visit www.st.com.

Visit link:
STMicroelectronics Releases STM32 Condition-Monitoring Function Pack Leveraging Tools from Cartesiam for Simplified Machine Learning - ELE Times

Machine Learning Reveals What Makes People Happy In A Relationship – Forbes

Who you are together is more important than who you are alone.

What makes us happy in a romantic relationship? The question might seem too complex to answer, too varied from couple to couple. But a new study in the Proceedings of the National Academy of Sciences attempts to answer just that, using machine learning.

Previous studies on romantic satisfaction were limited in size. By using machine learning, however, researchers were able to analyze a massive amount of data covering over 11,000 couples from 43 data sets. Individual studies are often limited: recruiting couples is difficult and expensive, and participation is exhausting. Using machine learning to analyze a large amount of data from pre-existing studies bypasses these problems.

The researchers looked at variables that could predict happiness within a relationship. Some of these, such as neuroticism, political orientation, conscientiousness or family history were qualities of the individuals involved. Others, such as appreciation, affection and perceived partner commitment were qualities of the relationship.

Of these, qualities of the relationship, rather than the individuals involved, contributed more to overall satisfaction. The five most important were how much they believed their partner was committed to the relationship, how much they appreciated their partner, sexual satisfaction, how much they believed their partner was happy in the relationship, and not fighting often.
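The study's actual models (random forests over dozens of datasets) are far more elaborate, but the core move of ranking predictors by how strongly they track satisfaction can be sketched with a simple correlation on a tiny invented sample; all the ratings below are hypothetical:

```python
# Toy sketch: rank predictors of relationship satisfaction by the strength
# of their (absolute) Pearson correlation with the outcome.
# All ratings below are invented for illustration, not study data.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented 1-5 ratings for five couples: predictor -> values per couple.
predictors = {
    "perceived_commitment": [5, 4, 2, 5, 1],   # relationship-level quality
    "appreciation":         [4, 5, 2, 4, 2],   # relationship-level quality
    "neuroticism":          [3, 2, 4, 2, 4],   # individual-level quality
}
satisfaction = [5, 4, 2, 5, 1]

ranked = sorted(predictors,
                key=lambda p: abs(pearson(predictors[p], satisfaction)),
                reverse=True)
print(ranked)  # relationship-level qualities rank above the individual one
```

In this toy sample, as in the study, the relationship-level qualities come out as the strongest predictors.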

Appreciation and commitment are key for a fulfilling relationship.

Qualities of the individuals contribute too, but not as much. In fact, 45% of the variability in relationship satisfaction was due to qualities of the relationship, while 21% was due to the individuals themselves. Moreover, once qualities of the relationship were taken into account, the differences due to the individuals mattered much less.

"Experiencing negative affect, depression, or insecure attachment are surely relationship risk factors. But if people nevertheless manage to establish a relationship characterized by appreciation, sexual satisfaction, and a lack of conflict, and they perceive their partner to be committed and responsive, those individual risk factors may matter little," say the authors.

In other words, for a happy relationship, it's more important who you are together than who you are apart.

Originally posted here:
Machine Learning Reveals What Makes People Happy In A Relationship - Forbes

Benefits Of AI And Machine Learning | Expert Panel | Security News – SecurityInformed

The real promise of advancing intelligence through deep learning and other AI-driven technology applied to video is that, in the long term, we're not going to be looking at the video until after something has happened. The intelligence gathered through video could be automated to the point that security operators are no longer required to make the decisions necessary for response. Instead, the intelligence-driven next steps will be communicated automatically to various stakeholders, from on-site guards to local police and fire departments. When security leaders access the video that corresponds to an incident, it will be because they want to see the incident for themselves. And isn't automation, the ability to streamline response, and instantaneous response the goal of an overall, data-rich surveillance strategy? For almost any enterprise, the answer is yes.

More:
Benefits Of AI And Machine Learning | Expert Panel | Security News - SecurityInformed

Preparing new machine learning models used to take weeks Activeloop teams up with NVIDIA to reduce that time to hours – MENAFN.COM

(MENAFN - EIN Presswire)

Activeloop user interface and toolset work with NVIDIA processing to help IntelinAir achieve great ML results

Y Combinator alum achieves better aerial data pipelines for IntelinAir in an industry-leading Agriculture Tech solution

MOUNTAIN VIEW, CA, USA, August 4, 2020 / EINPresswire.com / -- In a case study now available online, Activeloop ( [To enable links contact MENAFN] ), a Y Combinator-backed startup, is announcing a major success in helping an early customer, IntelinAir , improve the efficiency of their AI analysis of aerial footage. Activeloop's software builds plug-and-play data pipelines for unstructured data. The software helps data scientists streamline their data aggregation and preparation, and automates and optimizes their training of machine learning models. Together with NVIDIA , Activeloop has achieved a massive reduction in the time-to-value and cost of machine learning / deep learning efforts. The case study documents a breakthrough in the field of aerial imagery with their joint customer IntelinAir, a leading crop intelligence firm.

Activeloop's solution is becoming available just in time for the exploding artificial intelligence and advanced machine learning market, projected to grow to $281.24 billion by 2026 at a CAGR of 37.95%. This coincides with the massive growth of data available to be analyzed by AI. All data generated by the end of 2020 will total about 40 trillion gigabytes (40 zettabytes), with IBM estimating that 90% of it has been created over the past 2 years. As data gets bigger faster than ever, translating it into actionable insights is becoming increasingly difficult and expensive. As a result, the effort needed to set up a new model and get it running efficiently can be beyond the reach of many teams who could otherwise benefit from machine learning. Existing solutions often carry large cloud storage and processing costs and cannot be made more efficient without radical changes.

"Unstructured data, including text, images, and videos, comprises about 80-90% of the data people generate today," says Davit Buniatyan, Activeloop Founder and CEO. "As it comes in different forms, sizes, and even shapes, analyzing and managing it is an extremely difficult and costly task. In fact, data scientists spend about 50 to 80% of their time setting up their unstructured datasets rather than analyzing them via machine or deep learning. We're changing that by creating a fast, simple platform for building and scaling data pipelines for machine learning."

"We operate in an agile fashion: we want to focus on building high-quality models instead of fighting with data pipelines, infrastructure, and deployment challenges," says Jennifer Hobbs, Director of Machine Learning at IntelinAir. "Thanks to Activeloop, we've been able to deploy new models in a matter of days instead of weeks. With the help of Activeloop's platform and NVIDIA's powerful GPUs, we were able to increase inference speed threefold and improve the accuracy of the trained models at half the cost."

You can read more about the success story here: [To enable links contact MENAFN] .

###

About Activeloop

Activeloop ( [To enable links contact MENAFN] ), is a startup backed by Y Combinator and prominent Silicon Valley investors. The company has already been featured by major outlets including TechCrunch and is now coming out of stealth mode to make its product available to the machine learning community. Formerly named Snark AI, Activeloop aims to optimize the way machine and deep learning models are trained and streamline the huge amounts of data required for this work. Activeloop is a member of NVIDIA's Inception program for AI/ML development.

About IntelinAir

IntelinAir ( [To enable links contact MENAFN] ) is a full-season and full-spectrum crop intelligence company focused on agriculture that delivers actionable intelligence to help farmers make data-driven decisions to improve operational efficiency, yields, and ultimately their profitability.

Mikayel Harutyunyan
Activeloop.ai
+1 415-876-5667
email us here
Visit us on social media: Facebook | Twitter | LinkedIn

Activeloop introduction and demo

Follow this link:
Preparing new machine learning models used to take weeks Activeloop teams up with NVIDIA to reduce that time to hours - MENAFN.COM

IoT automation trend rides the next wave of machine learning, Big Data – Urgent Communications

An array of new methods, along with unexpected new pressures, casts today's IoT automation efforts in an utterly new light.

Progress today in IoT automation is based on fresh methods employing big data, machine learning, asset intelligence and edge computing architecture. It is also enabled by emerging approaches to service orchestration and workflow, and by ITOps efforts that stress better links between IT and operations.

On one end, advances in IoT automation includerobotic process automation(RPA) tools that use sensor data to inform backroom and clerical tasks. On the other end are true robots that maintain the flow of goods onfactory floors.

Meanwhile, nothing has focused business leaders on automation like COVID-19. Automation technologies have gained priority in light of 2020s pandemic, which is spurring use of IoT sensors, robots and software to enable additional remote monitoring. Still, this work was well underway before COVID-19 emerged.

Cybersecurity Drives Advances in IoT Automation

In particular, automated discovery of IoT environments for cybersecurity purposes has been an ongoing driver of IoT automation, simply because there is too much machine information to track manually, according to Lerry Wilson, senior director for innovation and digital ecosystems at Splunk. The target is anomalies found in data stream patterns.

"Anomalous behavior starts to trickle into the environment, and there's too much for humans to do," Wilson said. And while much of this still requires a human somewhere in the loop, the role of automation continues to grow.
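The article doesn't describe how such anomalies are scored; one common, minimal approach is a sliding-window z-score over a sensor stream, flagging readings that deviate from recent history by more than a few standard deviations. The window size, threshold, and readings below are invented for illustration:

```python
# Toy sketch of stream anomaly detection: flag a reading when it lies more
# than `k` standard deviations from the mean of the preceding `window`
# readings. Parameters and data are illustrative, not from any product.
from collections import deque
import math

def detect_anomalies(stream, window=5, k=3.0):
    """Return indices of readings flagged as anomalous."""
    recent = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > k:
                flagged.append(i)
        recent.append(x)   # the anomaly itself joins the history
    return flagged

# A toy vibration-sensor stream with one obvious spike.
readings = [10.0, 10.1, 9.9, 10.2, 10.0, 10.1, 25.0, 10.0]
print(detect_anomalies(readings))  # -> [6]
```

Production systems layer far more on top (seasonality, multivariate models, learned baselines), but this is the shape of the "too much for humans" monitoring loop being automated.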

Wilson said Splunk, which focuses on integrating a breadth of machine data, has worked with partners to ensure incoming data can now kick off useful functions in real time. These kinds of efforts are central to emerging information technology/operations technology (IT/OT) integration. This, along with machine learning (ML), promises increased automation of business workflows.

"Today, we and our partners are creating machine learning that will automatically set up a work order; people don't have to [manually] enter that anymore," he said, adding that what once took the form of analytical reports is now correlated with historic data for immediate execution.

"We moved past reporting to action," Wilson said.

Notable use cases Splunk has encountered include systems that collect signals to monitor and optimize factory floor and campus activity as well as to correlate asset information, Wilson indicated.

To read the complete article, visit IoT World Today.

See original here:
IoT automation trend rides the next wave of machine learning, Big Data - Urgent Communications

Decoding Practical Problems and Business Implications of Machine Learning – Analytics Insight

Machine learning typically is used to solve a host of diverse problems within an organization, extracting predictive knowledge from both structured and unstructured data and using them to deliver value. The technology has already made its way into different aspects of a business ranging from finding data patterns to detect anomalies and making recommendations. Machine learning helps organizations gain a competitive edge by processing a voluminous amount of data and applying complex computations.

With machine learning, companies can develop better applications according to their business requirements. The technology is designed chiefly to make everything programmatic. Applications of ML have the potential to drive business outcomes that can extensively affect a company's bottom line. The rapid evolution of new techniques in recent years has further expanded machine learning applications to nearly boundless possibilities.

Industries relying on massive volumes of data are significantly leveraging ML techniques to process their data and to build models, strategize, and plan.

While implementing the effective application of machine learning enables businesses to grow, gain competitive advantage and prepare for the future, there are some key practical issues in ML and their business implications organizations must consider.

As machine learning relies heavily on data, the occurrence of noisy data can considerably impact any prediction. Data in a dataset often carries extraneous and meaningless information that can significantly affect data analysis, clustering, and association analysis. A lack of quality data can also limit the ability to build ML models. To cope with noise and data quality, businesses need to apply effective machine learning strategies built on data cleansing and thorough data processing.

There is no doubt that machine learning makes it possible to learn directly from data rather than from human knowledge, with a strong emphasis on accuracy. However, the lack of ability to explain or present a model's behavior in terms understandable to a human, often called interpretability, is one of the biggest issues in machine learning. The introduction of possible biases in data has also led to ethical and legal issues with ML models. Interpretability levels of machine learning methods vary significantly: some are human-compatible because they are highly interpretable, while others are too complex to apprehend and thus require ad-hoc methods to gain an interpretation.

In supervised machine learning, an imbalanced dataset is one in which the labels in the training data are unevenly distributed across two or more classes, as happens in many real-world datasets. This imbalance can affect the choice of learning approach, the process of selecting algorithms, and model evaluation and verification. Models can suffer large biases, and learning will not be effective, if the right techniques are not employed. ML algorithms can produce inadequate classifiers when faced with imbalanced datasets: when trying to resolve business challenges with imbalanced data, the classifiers produced by standard ML algorithms might not deliver precise outcomes.

Thus, addressing imbalanced datasets requires strategies such as enhancing classification algorithms or balancing classes in the training data before providing the data as input to machine learning algorithms.
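The class-balancing strategy mentioned above can be sketched minimally as random oversampling: duplicate minority-class examples until every class has the same count as the largest one. The data and labels below are invented for illustration:

```python
# Toy sketch of random oversampling for an imbalanced training set.
# Minority-class examples are duplicated (with replacement) until all
# classes match the majority-class count. Data and labels are invented.
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Return (samples, labels) with minority classes randomly duplicated."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        for _ in range(target - n):
            out_x.append(rng.choice(pool))
            out_y.append(cls)
    return out_x, out_y

# A toy imbalanced set: 5 "ok" examples vs. 3 "fraud" examples.
X = [[0.1], [0.2], [0.3], [0.9], [1.1], [0.15], [0.25], [0.95]]
y = ["ok", "ok", "ok", "fraud", "fraud", "ok", "ok", "fraud"]
bx, by = oversample(X, y)
print(Counter(by))  # both classes now have 5 examples
```

Oversampling is only one option; class-weighted losses or undersampling the majority class achieve the same rebalancing from the other direction.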

Originally posted here:
Decoding Practical Problems and Business Implications of Machine Learning - Analytics Insight

Artificial Intelligence and Machine Learning Industry 2020 Market Manufacturers Analysis, Share, Size, Growth, Trends and Research Report 2026 -…

The Artificial Intelligence and Machine Learning Market 2020-2026 industry research report covers the market landscape and its growth prospects over the coming years. The report also deals with the product life cycle, comparing it to relevant products across industries that have already been commercialized, details the potential for various applications, discusses recent product innovations, and gives an overview of potential regional markets.

Get Sample Copy of this Report https://www.orianresearch.com/request-sample/1432622

The report includes an executive summary, global economic outlook, and overview section that together provide a coherent analysis of the Artificial Intelligence and Machine Learning market. In the market overview section, the report delineates PLC and PESTLE analyses. The overview section further delves into Porter's Five Forces analysis, which helps reveal the probable competitive scenario of the market.

Analysis of Artificial Intelligence and Machine Learning Market Key Manufacturers: IBM Corporation, BigML, Inc., SAP SE, SAS Institute Inc., Fair Isaac Corporation, Microsoft Corporation, Google, Inc., Baidu, Inc., Amazon Web Services Inc., Intel Corporation and Hewlett Packard Enterprise Development LP (HPE)

The report provides a detailed overview of the industry including both qualitative and quantitative information. It provides overview and forecast of the global Artificial Intelligence and Machine Learning market based on various segments. It also provides market size and forecast estimates from year 2020 to 2026 with respect to five major regions, namely; North America, Europe, Asia-Pacific (APAC), Middle East and Africa (MEA) and South & Central America. The Artificial Intelligence and Machine Learning market by each region is later sub-segmented by respective countries and segments. The report covers analysis and forecast of countries globally along with current trend and opportunities prevailing in the region.

Global Artificial Intelligence and Machine Learning Industry 2020 Market Research Report is spread across 101 pages and provides exclusive vital statistics, data, information, trends and competitive landscape details in this niche sector.

With tables and figures helping analyze worldwide Global Artificial Intelligence and Machine Learning Market, this research provides key statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the market.

At the same time, we classify different Artificial Intelligence and Machine Learning based on their definitions. Upstream raw materials, equipment and downstream consumers analysis is also carried out. What is more, the Artificial Intelligence and Machine Learning industry development trends and marketing channels are analyzed.

Market Segment by Type:

Hardware

Software

Services

Market Segment by Application:

BFSI

Healthcare and Life Sciences

Retail

Telecommunication

Government and Defense

Manufacturing

Energy and Utilities

Others

The report strongly emphasizes prominent participants of the Artificial Intelligence and Machine Learning Industry to provide a valuable source of guidance and direction to companies, executive officials, and potential investors interested in this market. The study focuses on significant factors relevant to industry participants such as manufacturing technology, latest advancements, product description, manufacturing capacities, sources of raw material, and profound business strategies.

Order a copy of Global Artificial Intelligence and Machine Learning Market Report 2020 @https://www.orianresearch.com/checkout/1432622

Finally, the feasibility of new investment projects is assessed, and overall research conclusions are offered. In a word, the report provides major statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the market.

Global Artificial Intelligence and Machine Learning 2020 to 2026 includes:

Trends in Artificial Intelligence and Machine Learning deal making in the industry

Analysis of Artificial Intelligence and Machine Learning deal structure

Access to headline, upfront, milestone and royalty data

Access to hundreds of Artificial Intelligence and Machine Learning contract documents

Comprehensive access to Artificial Intelligence and Machine Learning records

TOC of Artificial Intelligence and Machine Learning Market Report Includes:

1 Artificial Intelligence and Machine Learning Market Overview

2 Company Profiles

3 Global Artificial Intelligence and Machine Learning Market Competition, by Players

4 Global Artificial Intelligence and Machine Learning Market Size by Regions

5 North America Artificial Intelligence and Machine Learning Revenue by Countries

6 Europe Artificial Intelligence and Machine Learning Revenue by Countries

7 Asia-Pacific Artificial Intelligence and Machine Learning Revenue by Countries

8 South America Artificial Intelligence and Machine Learning Revenue by Countries

9 Middle East and Africa Revenue Artificial Intelligence and Machine Learning by Countries

10 Global Artificial Intelligence and Machine Learning Market Segment by Type

11 Global Artificial Intelligence and Machine Learning Market Segment by Application

12 Global Artificial Intelligence and Machine Learning Market Size Forecast (2020-2026)

13 Research Findings and Conclusion

14 Appendix

Customization Service of the Report:

Orian Research provides customization of reports as per your needs. This report can be personalized to meet your requirements. Get in touch with our sales team, who will ensure you get a report that suits your needs.

About Us:

Orian Research is one of the most comprehensive collections of market intelligence reports on the World Wide Web. Our reports repository boasts over 500,000 industry and country research reports from over 100 top publishers. We continuously update our repository so as to provide our clients easy access to the world's most complete and current database of expert insights on global industries, companies, and products. We also specialize in custom research in situations where our syndicated research offerings do not meet the specific requirements of our esteemed clients.

Contact Us:

Ruwin Mendez

Vice President Global Sales & Partner Relations

Orian Research Consultants

US: +1 (832) 380-8827 | UK: +44 0161-818-8027

Read the original here:
Artificial Intelligence and Machine Learning Industry 2020 Market Manufacturers Analysis, Share, Size, Growth, Trends and Research Report 2026 -...

Could this software help users trust machine learning decisions? – C4ISRNet

WASHINGTON - New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims.

BAE Systems said in a July 14 announcement that it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency. Developed in collaboration with the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems (artificial intelligence algorithms that learn and change over time as they are fed ever more data) by auditing them to provide insights into how they reached their decisions.

"The technology that underpins machine learning and artificial intelligence applications is rapidly advancing, and now it's time to ensure these systems can be integrated, utilized, and ultimately trusted in the field," said Chris Eisenbies, product line director of the company's Autonomy, Control, and Estimation group. "The MindfuL system stores relevant data in order to compare the current environment to past experiences and deliver findings that are easy to understand."

While machine learning algorithms show promise for DoD systems, determining how much users can trust their output remains a challenge. Intelligence officials have repeatedly noted that analysts cannot rely on black-box artificial intelligence systems that simply produce a decision or piece of intelligence; they need to understand how the system came to that decision and what unseen biases (in the training data or otherwise) might be influencing it.

MindfuL is designed to help close that gap by providing more context around those outputs. For instance, the company says its program will issue statements such as: "The machine learning system has navigated obstacles in sunny, dry environments 1,000 times and completed the task with greater than 99 percent accuracy under similar conditions," or "The machine learning system has only navigated obstacles in rain 100 times with 80 percent accuracy in similar conditions; manual override recommended." Such statements help users gauge how much confidence to place in any individual decision produced by the system.
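The quoted statements suggest a simple underlying shape: log each run's operating condition and outcome, then report per-condition experience and accuracy, recommending manual override below some threshold. The sketch below is an assumption-laden illustration of that idea, not BAE's implementation; the history format and the 95% threshold are invented:

```python
# Toy sketch of competency-aware reporting: summarize past performance in
# the current condition and recommend manual override when accuracy is low.
# The history format and 95% threshold are invented for illustration.

def competency_report(history, condition, threshold=0.95):
    """history: list of (condition, succeeded) pairs from past runs."""
    runs = [ok for cond, ok in history if cond == condition]
    if not runs:
        return (f"No experience in '{condition}' conditions; "
                f"manual override recommended.")
    acc = sum(runs) / len(runs)
    msg = (f"The system has operated in '{condition}' conditions "
           f"{len(runs)} times with {acc:.0%} accuracy.")
    if acc < threshold:
        msg += " Manual override recommended."
    return msg

# Invented run log: 100 sunny runs (99 successes), 10 rainy runs (8 successes).
history = ([("sunny", True)] * 99 + [("sunny", False)]
           + [("rain", True)] * 8 + [("rain", False)] * 2)

print(competency_report(history, "sunny"))
print(competency_report(history, "rain"))   # triggers the override warning
```

The interesting design question such a system faces is what counts as "similar conditions"; the real program presumably compares richer environment features than a single label.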

This is the first release of the MindfuL software under a $5 million, three-year contract within DARPA's Competency-Aware Machine Learning (CAML) program. BAE Systems plans to demonstrate the software in both simulation and prototype hardware later this year.

Continue reading here:
Could this software help users trust machine learning decisions? - C4ISRNet