Category Archives: Machine Learning

Boosting Weather Prediction with Machine Learning – Eos

Today, predictions of the next several days' weather can be remarkably accurate, thanks to decades of development of equations that closely capture atmospheric processes. However, they are not perfect. Data-driven approaches that use machine learning and other artificial intelligence tools to learn from past weather patterns might provide even better forecasts, at lower computing cost.

Although there has been progress in developing machine learning approaches for weather forecasting, an easy method for comparing these approaches has been lacking. Now Rasp et al. present WeatherBench, a new data resource meant to serve as the first standard benchmark for such comparisons. WeatherBench provides a larger volume, greater diversity, and higher resolution of data than previous models have used.

These data are pulled from global weather estimates and observations captured over the past 40 years. The researchers have processed these data with an eye toward making them convenient for use in training, validating, and testing machine learning-based weather models. They have also proposed a standard metric for WeatherBench users to compare the accuracy of different models.

To encourage progress, the researchers challenge users of WeatherBench to accurately predict worldwide atmospheric pressure and temperature 3 and 5 days into the future, similar to tasks performed by traditional, equation-based forecasting models. WeatherBench data, code, and guides are publicly available online.

The researchers hope that WeatherBench will foster competition, collaboration, and advances in the field and that it will enable other scientists to create data-driven approaches that can supplement traditional approaches while also using computing power more efficiently. (Journal of Advances in Modeling Earth Systems (JAMES), https://doi.org/10.1029/2020MS002203, 2020)

Sarah Stanley, Science Writer

See the article here:
Boosting Weather Prediction with Machine Learning - Eos

Machine learning – it’s all about the data – KHL Group

When it comes to the construction industry, machine learning means many things. At its core, however, it all comes back to one thing: data.

The more data that is produced through telematics, the more advanced artificial intelligence (AI) becomes, because it has more data to learn from. The more complex the data, the better for AI, and as AI becomes more advanced, its decision-making improves. This means that construction is becoming more efficient thanks to a loop in which data and AI feed into each other.

Machine learning is an application of AI that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. As Jim Coleman, director of global IP at Trimble, says succinctly: "Data is the fuel for AI."

Artificial intelligence

Coleman expands on that statement and the notion that AI and data are in a loop, helping each other to develop.

"The more data we can get, the more problems we can solve, and the more processing we can throw on top of that, the broader the set of problems we'll be able to solve," he comments.

"There's a lot of work out there to be done in AI, and it all centres around this notion of collecting data, organising the data and then mining and evaluating that data."

Karthik Venkatasubramanian, vice president of data and analytics at Oracle Construction and Engineering, agrees that data is key, saying: "Data is the lifeblood for any AI and machine learning strategy to work. Many construction businesses already have data available to them without realising it."

"This data, arising from previous projects and activities, and collected over a number of years, can become the source of data that machine learning models require for training. Models can train on this existing data repository and then be compared against a validation set before they are used for real-world prediction scenarios."
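In practice, that train-then-validate workflow can be as simple as holding out a slice of the historical records before any real-world use. Below is a minimal sketch in Python with scikit-learn; the file name, feature columns, and the overrun label are hypothetical placeholders rather than anything from the vendors quoted here.

```python
# Minimal sketch: train a model on historical project records, then check it
# against a held-out validation set before using it for real predictions.
# "historical_projects.csv" and the "overrun" label are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

projects = pd.read_csv("historical_projects.csv")  # data from past projects
X = projects.drop(columns=["overrun"])             # project features
y = projects["overrun"]                            # did the project overrun?

# Hold out 20% of the historical records as a validation set.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Compare predictions against the validation set before deployment.
print("Validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```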

There are countless examples of machine learning at work in construction, with a large number of OEMs having their own programmes in place, not to mention what's being worked on by specialist technology companies.

One of these OEMs is USA-based John Deere. Andrew Kahler, a product marketing manager for the company, says that machine learning has expanded rapidly over the past few years and has multiple applications.

"Machine learning will allow key decision makers within the construction industry to manage all aspects of their jobs more easily, whether in a quarry, on a site development job, building a road, or in an underground application. Bigger picture, it will allow construction companies to function more efficiently and optimise resources," says Kahler.

He also makes the point that a key step in this process is the ability for smart construction machines to connect to a centralised, cloud-based system; John Deere has its JDLink Dashboard, and most of the major OEMs have their own equivalent system.

"The potential for machine learning to unlock new levels of intelligence and automation in the construction industry is somewhat limitless. However, it all depends on the quality and quantity of data we're able to capture, and how well we're able to put it to use through smart machines."

USA-based Built Robotics was founded in 2016 to address what its founders saw as a gap in the market: the lack of technology being used across construction sites, especially compared to other industries. The company upgrades construction equipment with AI guidance systems, enabling it to operate fully autonomously.

The company typically works with excavators, bulldozers, and skid steer loaders. The equipment can work autonomously only on certain repetitive tasks; for more complex tasks an operator is required.

Erol Ahmed, director of communications at Built Robotics, says that founder and CEO Noah Ready-Campbell wanted to apply robotics where it would be "really helpful and have a lot of change and impact", and thus settled on the construction industry.

Ahmed says that the company is the only commercial autonomous heavy equipment and construction company available. He adds that the business, which operates in the US and has recently launched operations in Australia, is focused on automating specific workflows.

"We want to automate specific tasks on the job site and get them working really well. It's not about developing some sort of all-encompassing robot that thinks and acts like a human and can do anything you tell it to. It is focusing on specific things, doing them well, helping them work in existing workflows. Construction sites are very complicated, so just automating one piece is very helpful and provides a lot of productivity savings."

Hydraulic system

Ahmed confirms that as long as the equipment has an electronically controlled hydraulic system, converting, for example, a Caterpillar, Komatsu or Volvo excavator isn't too different. There is obvious interest in the company: in September 2019 it announced it had received US$33 million in investment, bringing its total funding up to US$48 million.

Of course, a large excavator or a mining truck at work without an operator is always going to catch the eye, and capture our attention and imagination. Such machines are perhaps the most visible aspect of machine learning on a construction site, but there are a host of other examples working away in the background.

As Trimble's Coleman notes: "I think one of the interesting things about good AI is you might not know what's even there, right? You just appreciate the fact that, all of a sudden, there's an increase in productivity."

AI is used in construction for everything from specific tasks, such as informing an operator when a machine might fail or isn't being used productively, to broader, more macro applications. For instance, for contractors planning how best to construct a project, there is AI-enabled software that can map out the most efficient processes.

The AI can make predictions about schedule delays and cost overruns. As there is often existing data on schedule and budget performance, this can be used to make predictions, and these predictions will get better over time. As we said before: the more data that AI has, the smarter it becomes.
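To make the "more data, smarter predictions" point concrete, here is a toy sketch on purely simulated data, showing a cost-overrun regressor's error shrinking as its training set grows. Nothing in it comes from a real construction dataset.

```python
# Toy learning curve: the same model retrained on progressively more
# (simulated) project records makes steadily better predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))  # stand-in project features
y = X @ np.array([3.0, -2.0, 1.5, 0.5, 4.0]) + rng.normal(scale=2.0, size=n)

X_test, y_test = X[4000:], y[4000:]  # fixed test set of "future" projects
for size in (100, 500, 2000, 4000):
    model = GradientBoostingRegressor().fit(X[:size], y[:size])
    err = mean_absolute_error(y_test, model.predict(X_test))
    print(f"trained on {size:>4} projects -> mean absolute error {err:.2f}")
```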

Venkatasubramanian from Oracle adds that "smartification" is happening in construction, saying: "Schedules and budgets are becoming smart by incorporating machine learning-driven recommendations."

"Supply chain selection is becoming smart by using data across disparate systems and comparing performance. Risk planning is also getting smart by using machine learning to identify and quantify risks from the past that might have a bearing on the present."

There is no doubt that construction has been slower than other industries to adopt new technology, but this isn't just because of some deep-seated resistance to new ideas.

For example, agriculture has a greater application of machine learning, but it is easier for that sector to implement it: every year, the task of getting in the crops on a farm will be broadly similar.

New challenges

As John Downey, director of sales EMEA at Topcon Positioning Group, explains: "With construction there's a slower adoption process because no two projects, or indeed construction sites, are the same, so the technology is always confronted with new challenges."

Downey adds that as machine learning develops it will work best with repetitive tasks like excavation, paving or milling, but he thinks the potential goes beyond this.

"As we move forward and AI continues to advance, we'll begin to apply it across all aspects of construction projects."

"The potential applications are countless, and the enhanced efficiency, improved workflows and accelerated rate of industry it will bring are all within reach."

Automated construction equipment still needs operators to oversee it; as this sector develops, it could be one person for every three or five machines, or more, though it is currently unclear. With construction facing a skills shortage, this is an exciting avenue. There is also AI that helps contractors to better plan, execute and monitor projects; you don't need machine learning-level intelligence to see the potential transformational benefits of this when multi-billion dollar projects are being planned and implemented.

Read the rest here:
Machine learning - it's all about the data - KHL Group

Need a Hypothesis? This A.I. Has One – The New York Times

They found that the top 10 sets of attitudes linked to having strict ethical beliefs included views on religion, views about crime and confidence in political leadership. Two of those 10 stood out, the authors wrote: the belief that humanity has a bright future was associated with a strong ethical code, and the belief that humanity has a bleak future was associated with a looser one.

"We wanted something we could manipulate in a study, and that applied to the situation we're in right now: What does humanity's future look like?" Dr. Savani said.

In a subsequent study of some 300 U.S. residents, conducted online, half of the participants were asked to read a relatively dire but accurate accounting of how the pandemic was proceeding: China had contained it, but not without severe measures and some luck; the northeastern U.S. had also contained it, but a second wave was underway and might be worse, and so on.

This group, after its reading assignment, was more likely to justify violations of Covid-19 etiquette, like hoarding groceries or going maskless, than the other participants, who had read an upbeat and equally accurate pandemic tale: China and other nations had contained outbreaks entirely, vaccines are on the way, and lockdowns and other measures have worked well.

In the context of the Covid-19 pandemic, the authors concluded, "our findings suggest that if we want people to act in an ethical manner, we should give people reasons to be optimistic about the future of the epidemic through government and mass-media messaging, emphasizing the positives."

That's far easier said than done. No psychology paper is going to drive national policies, at least not without replication and more evidence, outside experts said. But a natural test of the idea may be unfolding: based on preliminary data, two vaccines now in development are around 95 percent effective, scientists reported this month. Will that optimistic news spur more responsible behavior?

"Our findings would suggest that people are likely to be more ethical in their day-to-day lives, like wearing masks, with the news of all the vaccines," Dr. Savani said in an email.

More here:
Need a Hypothesis? This A.I. Has One - The New York Times

Artificial Intelligence and Machine Learning Together To Reach the Culmination of Growth By 2023 – thepolicytimes.com

Artificial Intelligence (AI) is the technology that enables a machine to simulate human behavior. It is one of the trending technologies, and machine learning is its main subset. AI systems deal with both structured and unstructured data. Machine Learning (ML) is a subset of Artificial Intelligence that explores the development of algorithms that learn from given data. These algorithms can learn from the data they are given and teach themselves to adapt to new circumstances and perform certain tasks.

Big data: augmenting the intelligence in machines

In many areas of research and industry, ML and AI are becoming dominant problem-solving techniques. Both share a fundamental hypothesis: that computation is an effective way to model intelligent behavior in machines, whether through reinforcement learning methods, search, or probabilistic techniques. Big data is no fad. As the world grows at an exponential rate, so does the volume of data collected across the globe. Data is becoming contextually relevant, which is breaking new ground for ML and AI.

The need for AI and ML

Data is the lifeblood of all businesses. AI automates repetitive learning and analyzes more, and deeper, data using neural networks that have many hidden layers. In summary, the goal of AI is to create technology that allows machines to function in an intelligent manner. Data-driven decisions increasingly make the difference between keeping up with the competition and falling further behind. Machine learning can thus play a great role in unlocking the value of customer data and enacting decisions that keep a company ahead of the competition.


The balancing skills between AI and ML

As stated by Terry Simpson, technical evangelist at Nintex, the skill sets between AI and ML vary greatly. On one hand, there is the technical developer who can execute a given task after being given the desired outcome; on the other, there is the business analyst who needs to point out what the business actually needs and see the vision to automate it. More and more, organizations are starting to understand the ways that AI and ML can have a positive strategic impact.


More here:
Artificial Intelligence and Machine Learning Together To Reach the Culmination of Growth By 2023 - thepolicytimes.com

Machine Learning-Based Risk Assessment for Cancer Therapy-Related Cardiac Dysfunction in 4300 Longitudinal Oncology Patients – DocWire News


J Am Heart Assoc. 2020 Nov 26:e019628. doi: 10.1161/JAHA.120.019628. Online ahead of print.

ABSTRACT

Background The growing awareness of cardiovascular toxicity from cancer therapies has led to the emerging field of cardio-oncology, which centers on preventing, detecting, and treating patients with cardiac dysfunction before, during, or after cancer treatment. Early detection and prevention of cancer therapy-related cardiac dysfunction (CTRCD) play important roles in precision cardio-oncology. Methods and Results This retrospective study included 4309 cancer patients between 1997 and 2018 whose laboratory tests and cardiovascular echocardiographic variables were collected from the Cleveland Clinic institutional electronic medical record database (Epic Systems). Among these patients, 1560 (36%) were diagnosed with at least 1 type of CTRCD, and 838 (19%) developed CTRCD after cancer therapy (de novo). We posited that machine learning algorithms can be implemented to predict CTRCDs in cancer patients according to clinically relevant variables. Classification models were trained and evaluated for 6 types of cardiovascular outcomes, including coronary artery disease (area under the receiver operating characteristic curve [AUROC], 0.821; 95% CI, 0.815-0.826), atrial fibrillation (AUROC, 0.787; 95% CI, 0.782-0.792), heart failure (AUROC, 0.882; 95% CI, 0.878-0.887), stroke (AUROC, 0.660; 95% CI, 0.650-0.670), myocardial infarction (AUROC, 0.807; 95% CI, 0.799-0.816), and de novo CTRCD (AUROC, 0.802; 95% CI, 0.797-0.807). Model generalizability was further confirmed using time-split data. Model inspection revealed several clinically relevant variables significantly associated with CTRCDs, including age, hypertension, glucose levels, left ventricular ejection fraction, creatinine, and aspartate aminotransferase levels. Conclusions This study suggests that machine learning approaches offer powerful tools for cardiac risk stratification in oncology patients by utilizing large-scale, longitudinal patient data from healthcare systems.
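For readers curious about the shape of such an evaluation, here is a hedged sketch of scoring one outcome classifier by AUROC with a bootstrap 95% CI, in the style the abstract reports. It uses synthetic data and off-the-shelf scikit-learn pieces; it is not the authors' actual pipeline.

```python
# Sketch: AUROC with a bootstrap 95% CI for a single binary outcome.
# Synthetic, imbalanced data stands in for real patient records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.8], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # predicted risk per patient

rng = np.random.default_rng(1)
boot = []
for _ in range(1000):  # bootstrap resample the test set
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) < 2:
        continue  # skip degenerate resamples containing only one class
    boot.append(roc_auc_score(y_te[idx], scores[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUROC {roc_auc_score(y_te, scores):.3f} (95% CI, {lo:.3f}-{hi:.3f})")
```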

PMID:33241727 | DOI:10.1161/JAHA.120.019628

Excerpt from:
Machine Learning-Based Risk Assessment for Cancer Therapy-Related Cardiac Dysfunction in 4300 Longitudinal Oncology Patients - DocWire News

Metal Geochemistry Meets Machine Learning in the North Atlantic – Hydro International

Surveying the seabed is still an enormous task. So far, only 20% of the world's underwater regions have been mapped with echosounders. This refers only to the topography, not to the content; that is, the composition of the seafloor.

"The existing sampling efforts are virtually just tiny pinpricks in the vast amount of uncertainty that has so far covered the seafloor," says Dr Timm Schöning from the Deep-Sea Monitoring group of GEOMAR, who led an iAtlantic expedition aboard the German research vessel Maria S. Merian in autumn 2020. Over a period of four weeks, a team of geochemists and data scientists explored the seafloor of the North Atlantic using an innovative combination of mapping, direct sampling and novel data analysis methods.

The researchers had chosen two work areas: the Porcupine Abyssal Plain off Ireland, and the Iberian Abyssal Plain between the Portuguese mainland and the Azores. Different measuring methods were used. The seafloor was mapped regionally with the shipboard multibeam echosounder on the research vessel Merian. A towed camera system provided additional photos of the seafloor at selected positions, which will then be combined to create local, high-resolution maps. A TV-multicorer was used selectively to collect several samples of the uppermost seafloor sediment layers simultaneously.

The team aboard RV Maria S. Merian prepares to retrieve sediment samples from the multicorer. (Image courtesy T. Schöning)

"In this way, we not only obtained more data on the seafloor structure itself, but also on its composition at particularly interesting points," says Dr Schöning. "Using new data analysis methods, we eventually intend to extrapolate the results of the sample analyses to local photo maps. In turn, the findings from the photo-mosaic maps will be extrapolated to the regions covered by the echosounder mapping by means of machine learning."
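The extrapolation Schöning describes can be illustrated with a small sketch: learn a mapping from remotely sensed features to the composition measured at the sampled stations, then apply that mapping across the whole mapped grid. All features, values, and the model choice below are invented for illustration.

```python
# Sketch: extrapolate sparse ground-truth samples to a mapped region.
# Features might be photo-mosaic or echosounder statistics per location;
# everything here is simulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# A few stations where the multicorer provided ground-truth metal content.
station_features = rng.normal(size=(30, 4))  # e.g. brightness, texture, depth, slope
metal_content = (station_features @ np.array([0.5, 1.2, -0.8, 0.3])
                 + rng.normal(scale=0.1, size=30))

model = RandomForestRegressor(n_estimators=300, random_state=7)
model.fit(station_features, metal_content)

# The same features computed over the full mapped grid.
grid_features = rng.normal(size=(10_000, 4))
predicted_map = model.predict(grid_features)  # composition estimate per cell
print(predicted_map[:5])
```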

Overall, the trip was very successful for the team. In addition, they were able to assist international colleagues by salvaging an instrument belonging to the UK's National Oceanography Centre: during a storm offshore Ireland, a large measuring buoy from the Porcupine Abyssal Plain Observatory had broken loose from its mooring. It was recovered by the Merian and brought back to Emden, Germany, and will be returned to the UK by land, much to the relief and gratitude of its owners.

Now the team is busy publishing all acquired digital data according to FAIR standards, and all data will be made available to the international research community.

You can read the expedition blog at http://www.oceanblogs.org/msm96/.

Rescue mission: successful recovery of the UK's PAP mooring buoy onto the back deck of the Maria S. Merian.

Continue reading here:
Metal Geochemistry Meets Machine Learning in the North Atlantic - Hydro International

Postdoctoral Research Associate in Computer Vision and Machine Learning job with DURHAM UNIVERSITY | 235683 – Times Higher Education (THE)

Department of Computer Science

Grade 7: £33,797 - £35,845
Fixed Term, Full Time
Contract Duration: 24 months
Contracted Hours per Week: 35
Closing Date: 28-Dec-2020, 7:59:00 AM

Durham University

Durham University is one of the world's top universities with strengths across the Arts and Humanities, Sciences and Social Sciences. We are home to some of the most talented scholars and researchers from around the world who are tackling global issues and making a difference to people's lives.

The University sits in a beautiful historic city where it shares ownership of a UNESCO World Heritage Site with Durham Cathedral, the greatest Romanesque building in Western Europe. A collegiate University, Durham recruits outstanding students from across the world and offers an unmatched wider student experience.

Less than 3 hours north of London, and an hour and a half south of Edinburgh, County Durham is a region steeped in history and natural beauty. The Durham Dales, including the North Pennines Area of Outstanding Natural Beauty, are home to breathtaking scenery and attractions. Durham offers an excellent choice of city, suburban and rural residential locations. The University provides a range of benefits including pension and childcare benefits, and the University's Relocation Manager can assist with potential schooling requirements.

Durham University seeks to promote and maintain an inclusive and supportive environment for work and study that assists all members of our University community to reach their full potential. Diversity brings strength and we welcome applications from across the international, national and regional communities that we work with and serve.

The Department

The Department of Computer Science is rapidly expanding: it will more than double in size over the next 10 years, from 18 to approximately 40 staff. A new building for the department (joint with Mathematical Sciences) will be built to house the expanded Department and is expected to be completed in 2021. The current Department has research strengths in (1) algorithms and complexity, (2) computer vision, imaging, and visualisation and (3) high-performance computing, cloud computing, and simulation. We work closely with industry and government departments.

Research-led teaching is a key strength of the Department, which came 5th in the Complete University Guide. The department offers BSc and MEng undergraduate degrees and is currently redeveloping its interdisciplinary taught postgraduate degrees. The size of its student cohort has more than trebled in the past five years. The Department has an exceptionally strong External Advisory Board that provides strategic support for developing research and education, consisting of high-profile industrialists and academics.

Computer Science is one of the very best UK Computer Science Departments with an outstanding reputation for excellence in teaching, research and employability of our students.

The Role

We are seeking a full-time Postdoctoral Research Associate (PDRA) to join Prof. Toby Breckon's research team at Durham University. The post is funded, for an initial fixed-term period of 24 months, by an ongoing portfolio of research work primarily spanning aspects of automatic object detection and classification for wide-area visual surveillance (in collaboration with a large industrial partner), in addition to use in aviation security (in collaboration with UK and US government) and sensing for future autonomous vehicles (in collaboration with a number of industrial collaborators).

The researcher will have the opportunity to work on common themes of machine learning research with applications across several funded work streams within the group. They will consider the use of cutting-edge deep learning algorithms for image classification and generalized data understanding tasks (object detection, human pose and behaviour understanding, and materials discrimination), in addition to integrated aspects of visual tracking and stereo vision across a range of image modalities. Specifically, they will investigate novel aspects of automatic adaptability of contemporary machine learning approaches as an aspect of these tasks. They will develop software algorithms, manage their own academic research in addition to project delivery to a range of external industrial and government collaborators.

In addition to published research output, the candidate can expect their research to have significant impact across a range of industrial/governmental collaborators and form a major innovation contributor to future visual surveillance and vehicle autonomy applications.

The post offers an outstanding opportunity to gain a strong research track record in an exciting and fast-moving area of applied computer vision and machine learning whilst working in an environment with high levels of external collaboration and industrial research impact.

Further details on the research portfolio can be found on the following website:

Prof. Toby Breckon, publications and demos: https://www.durham.ac.uk/toby.breckon/

Responsibilities:

This Postdoctoral Research Associate (PDRA) post at Durham University requires an enthusiastic researcher with expertise in the development of computer vision, image processing and/or machine learning techniques. The project work with external collaborators requires someone who can develop robust, well-documented code efficiently and who can work with exotic sensing hardware as required. Researchers lacking evidence of code development in a delivery environment, or strong potential to work as part of a multidisciplinary team spanning multiple organisations, are unlikely to be successful. The post is fixed term for 24 months due to external funding.

While the post is based for the full period in Durham, it will be necessary for the researcher to travel for meetings and/or system trials as part of the project. There will also be the opportunity for the researcher to attend national and international conferences to present the work, and there will be opportunities to gain experience of teaching at undergraduate level. The researcher will join the Innovative Computing Research Group within the Department.

The post-holder is employed to work on research/a research project which will be led by another colleague. Whilst this means that the post-holder will not be carrying out independent research in his/her own right, the expectation is that they will contribute to the advancement of the project, through the development of their own research ideas/adaptation and development of research protocols.

Successful applicants will, ideally, be in post by January 2021.

How to Apply

For informal enquiries please contact Prof. Toby Breckon, toby.breckon@durham.ac.uk. All enquiries will be treated in the strictest confidence.

We prefer to receive applications online via the Durham University Vacancies Site: https://www.dur.ac.uk/jobs/. As part of the application process, you should provide details of 3 (preferably academic/research) referees and the details of your current line manager so that we may seek an employment reference.

Applications are particularly welcome from women and black and minority ethnic candidates, who are under-represented in academic posts in the University.

What to Submit

All applicants are asked to submit:

Next Steps

The assessment for the post will include a formal interview and a presentation of recent research results. Shortlisted candidates will be invited for interview and assessment (date TBC).

The Requirements

Essential:

Qualifications

Experience

Skills

Desirable:

Experience

Skills

DBS Requirement: Not Applicable.

Read the original post:
Postdoctoral Research Associate in Computer Vision and Machine Learning job with DURHAM UNIVERSITY | 235683 - Times Higher Education (THE)

The 12 Coolest Machine-Learning Startups Of 2020 – CRN

Learning Curve

Artificial intelligence has been a hot technology area in recent years and machine learning, a subset of AI, is one of the most important segments of the whole AI arena.

Machine learning is the development of intelligent algorithms and statistical models that improve software through experience without the need to explicitly code those improvements. A predictive analysis application, for example, can become more accurate over time through the use of machine learning.

But machine learning has its challenges. Developing machine-learning models and systems requires a confluence of data science, data engineering and development skills. Obtaining and managing the data needed to develop and train machine-learning models is a significant task. And implementing machine-learning technology within real-world production systems can be a major hurdle.

Here's a look at a dozen startup companies, some that have been around for a few years and some just getting off the ground, that are addressing the challenges associated with machine learning.

AI.Reverie

Top Executive: Daeil Kim, Co-Founder, CEO

Headquarters: New York

AI.Reverie develops AI and machine-learning technology for data generation, data labeling and data enhancement tasks for the advancement of computer vision. The company's simulation platform is used to help acquire, curate and annotate the large amounts of data needed to train computer vision algorithms and improve AI applications.

In October AI.Reverie was named a Gartner Cool Vendor in AI core technologies.

Anodot

Top Executive: David Drai, Co-Founder, CEO

Headquarters: Redwood City, Calif.

Anodot's Deep 360 autonomous business monitoring platform uses machine learning to continuously monitor business metrics, detect significant anomalies and help forecast business performance.

Anodot's algorithms have a contextual understanding of business metrics, providing real-time alerts that help users cut incident costs by as much as 80 percent.

Anodot has been granted patents for technology and algorithms in such areas as anomaly score, seasonality and correlation. Earlier this year the company raised $35 million in Series C funding, bringing its total funding to $62.5 million.

BigML

Top Executive: Francisco Martin, Co-Founder, CEO

Headquarters: Corvallis, Ore.

BigML offers a comprehensive, managed machine-learning platform for easily building and sharing datasets and data models, and making highly automated, data-driven decisions. The company's programmable, scalable machine-learning platform automates classification, regression, time series forecasting, cluster analysis, anomaly detection, association discovery and topic modeling tasks.

The BigML Preferred Partner Program supports referral partners and partners that sell BigML and oversee implementation projects. Partner A1 Digital, for example, has developed a retail application on the BigML platform that helps retailers predict sales cannibalization, when promotions or other marketing activity for one product can lead to reduced demand for other products.

StormForge

Top Executive: Matt Provo, Founder, CEO

Headquarters: Cambridge, Mass.

StormForge provides machine learning-based, cloud-native application testing and performance optimization software that helps organizations optimize application performance in Kubernetes.

StormForge was founded under the name Carbon Relay and developed its Red Sky Ops tools, which DevOps teams use to manage a large variety of application configurations in Kubernetes, automatically tuning them for optimized performance no matter what IT environment they're operating in.

This week the company acquired German company Stormforger and its performance testing-as-a-platform technology. The company has rebranded as StormForge and renamed its integrated product the StormForge Platform, a comprehensive system for DevOps and IT professionals that can proactively and automatically test, analyze, configure, optimize and release containerized applications.

In February the company said that it had raised $63 million in a funding round from Insight Partners.

Comet.ML

Top Executive: Gideon Mendels, Co-Founder, CEO

Headquarters: New York

Comet.ML provides a cloud-hosted machine-learning platform for building reliable machine-learning models that help data scientists and AI teams track datasets, code changes, experimentation history and production models.

Launched in 2017, Comet.ML has raised $6.8 million in venture financing, including $4.5 million in April 2020.

Dataiku

Top Executive: Florian Douetteau, Co-Founder, CEO

Headquarters: New York

Dataiku's goal with its Dataiku DSS (Data Science Studio) platform is to move AI and machine-learning use beyond lab experiments into widespread use within data-driven businesses. Dataiku DSS is used by data analysts and data scientists for a range of machine-learning, data science and data analysis tasks.

In August Dataiku raised an impressive $100 million in a Series D round of funding, bringing its total financing to $247 million.

Dataiku's partner ecosystem includes analytics consultants, service partners, technology partners and VARs.

DotData

Top Executive: Ryohei Fujimaki, Founder, CEO

Headquarters: San Mateo, Calif.

DotData says its DotData Enterprise machine-learning and data science platform can reduce AI and business intelligence development projects from months to days. The company's goal is to make data science processes simple enough that almost anyone, not just data scientists, can benefit from them.

The DotData platform is based on the company's AutoML 2.0 engine, which performs full-cycle automation of machine-learning and data science tasks. In July the company debuted DotData Stream, a containerized AI/ML model that enables real-time predictive capabilities.

Eightfold.AI

Top Executive: Ashutosh Garg, Co-Founder, CEO

Headquarters: Mountain View, Calif.

Eightfold.AI develops the Talent Intelligence Platform, a human resource management system that utilizes AI deep learning and machine-learning technology for talent acquisition, management, development, experience and diversity. The Eightfold system, for example, uses AI and ML to better match candidate skills with job requirements and improves employee diversity by reducing unconscious bias.

In late October Eightfold.AI announced a $125 million round of financing, putting the startup's value at more than $1 billion.

H2O.ai

Top Executive: Sri Ambati, Co-Founder, CEO

Headquarters: Mountain View, Calif.

H2O.ai wants to democratize the use of artificial intelligence for a wide range of users.

The company's H2O open-source AI and machine-learning platform, H2O Driverless AI automatic machine-learning software, H2O MLOps and other tools are used to deploy AI-based applications in financial services, insurance, health care, telecommunications, retail, pharmaceutical and digital marketing.

H2O.ai recently teamed up with data science platform developer KNIME to integrate Driverless AI for AutoML with KNIME Server, for workflow management across the entire data science life cycle, from data access to optimization and deployment.

Iguazio

Top Executive: Asaf Somekh, Co-Founder, CEO

Headquarters: New York

The Iguazio Data Science Platform for real-time machine-learning applications automates and accelerates machine-learning workflow pipelines, helping businesses develop, deploy and manage AI applications at scale that improve business outcomes, an approach the company calls MLOps.

In early 2020 Iguazio raised $24 million in new financing, bringing its total funding to $72 million.

OctoML

Top Executive: Luis Ceze, Co-Founder, CEO

Headquarters: Seattle

OctoML's Software-as-a-Service Octomizer makes it easier for businesses and organizations to put deep learning models into production more quickly on different CPU and GPU hardware, including at the edge and in the cloud.

OctoML was founded by the team that developed the Apache TVM machine-learning compiler stack project at the University of Washington's Paul G. Allen School of Computer Science & Engineering. OctoML's Octomizer is based on the TVM stack.

Tecton

Top Executive: Mike Del Balso, Co-Founder, CEO

Headquarters: San Francisco

Tecton just emerged from stealth in April 2020 with its data platform for machine learning, which enables data scientists to turn raw data into production-ready machine-learning features. The startup's technology is designed to help businesses and organizations harness and refine vast amounts of data into the predictive signals that feed machine-learning models.

The company's three founders, CEO Mike Del Balso, CTO Kevin Stumpf and Engineering Vice President Jeremy Hermann, previously worked together at Uber, where they developed the company's Michelangelo machine-learning platform, which the ride-sharing company used to scale its operations to thousands of production models serving millions of transactions per second, according to Tecton.

The company started with $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia.

More here:
The 12 Coolest Machine-Learning Startups Of 2020 - CRN

Utilizing machine learning to uncover the right content at KMWorld Connect 2020 – KMWorld Magazine

At KMWorld Connect 2020, David Seuss, CEO of Northern Light; Sid Probstein, CTO of Keeeb; and Tom Barfield, chief solution architect of Keeeb, discussed machine learning and KM.

KMWorld Connect, held November 16-19 along with its co-located events, covers future-focused strategies, technologies, and tools to help organizations transform for positive outcomes.

Machine learning can assist KM activities in many ways. Seuss discussed using a semantic analysis of keywords in social posts about a topic of interest to yield clear guidance as to which terms have actual business relevance and are therefore worth investing in.

"What are we hearing from our users?" Seuss asked. "The users hate the business research process."

Using AstraZeneca as an example, Seuss began with an analysis of the company's conference presentations. Judging by the topics, diabetes ranked ever lower among AstraZeneca's areas of focus.

When looking at the company's Twitter account, themes included oncology, COVID-19, and environmental issues. Not one reference was made to diabetes, according to Seuss.

"Social media is where the energy of the company is first expressed," Seuss said.

An instant news analysis using text analytics tells us the same story: no mention of diabetes products, clinical trials, marketing, etc.

AI-based automated insight extraction from 250 AstraZeneca oncology conference presentations gives insight into R&D focus.

"Let the machine read the content and tell you what it thinks is important," Seuss said.

You can do that with a semantic graph of all the ideas in the conference presentations. Semantic graphs look for relationships between ideas and measure the number and strength of the relationships. Google search results are a real-world example of this in action.
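As a rough illustration of the idea, the sketch below links topics that co-occur in the same presentation, weights edges by co-occurrence counts, and ranks topics by weighted degree. The topic sets are invented, and the real Northern Light system presumably works quite differently.

```python
# Sketch of a co-occurrence "semantic graph": edges connect ideas that
# appear together, edge weight counts how often, and weighted degree
# serves as a crude importance score.
from itertools import combinations

import networkx as nx

presentations = [
    {"oncology", "biomarkers", "clinical trials"},
    {"oncology", "immunotherapy", "clinical trials"},
    {"covid-19", "vaccines", "clinical trials"},
]

G = nx.Graph()
for topics in presentations:
    for a, b in combinations(sorted(topics), 2):
        prev = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=prev + 1)  # strength = co-occurrence count

# Rank ideas by how many and how strong their relationships are.
for topic, score in sorted(G.degree(weight="weight"), key=lambda t: -t[1]):
    print(f"{topic}: {score}")
```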

"We are approaching the era when users will no longer search for information; they will expect the machine to analyze and then summarize for them what they need to know," Seuss said. "Machine-based techniques will change everything."

Probstein and Barfield addressed new approaches to integrating knowledge sharing into work. They looked at collaborative information curation, in which end users help identify the best content, allowing KM teams to focus on the most strategic knowledge challenges, as well as the pragmatic application of AI through text analytics to improve both curation and findability and to improve performance.

"The super silo is on the rise," Probstein said. It stores files, logs, and customer/sales data, and can be highly variable. He looked at search results for how COVID-19 is having an impact on businesses.

"Not only are there many search engines, each one is different," Probstein said.

Probstein said Keeeb can help with this problem. The solution can search through a variety of data sources to find the right information.

"One search, a few seconds, one pane of glass," Probstein said. "Once you solve the search problem, now you can look through the documents."

Knowledge isn't always a whole document; it can be a few paragraphs or an image, which can then be captured and shared through Keeeb.

AI and machine learning can enable search to be integrated with existing tools or any system. Companies should give end users simple approaches to organizing content, augmented with AI, benefiting themselves and others, Barfield said.

More here:
Utilizing machine learning to uncover the right content at KMWorld Connect 2020 - KMWorld Magazine

The way we train AI is fundamentally flawed – MIT Technology Review

For example, they trained 50 versions of an image recognition model on ImageNet, a dataset of images of everyday objects. The only difference between training runs was the random values assigned to the neural network at the start. Yet despite all 50 models scoring more or less the same in the training test, suggesting that they were equally accurate, their performance varied wildly in the stress test.

The stress test used ImageNet-C, a dataset of images from ImageNet that have been pixelated or had their brightness and contrast altered, and ObjectNet, a dataset of images of everyday objects in unusual poses, such as chairs on their backs, upside-down teapots, and T-shirts hanging from hooks. Some of the 50 models did well with pixelated images, some did well with the unusual poses; some did much better overall than others. But as far as the standard training process was concerned, they were all the same.

The researchers carried out similar experiments with two different NLP systems, and three medical AIs for predicting eye disease from retinal scans, cancer from skin lesions, and kidney failure from patient records. Every system had the same problem: models that should have been equally accurate performed differently when tested with real-world data, such as different retinal scans or skin types.

"We might need to rethink how we evaluate neural networks," says Rohrer. "It pokes some significant holes in the fundamental assumptions we've been making."

D'Amour agrees. "The biggest, immediate takeaway is that we need to be doing a lot more testing," he says. That won't be easy, however. The stress tests were tailored specifically to each task, using data taken from the real world or data that mimicked the real world. This is not always available.

Some stress tests are also at odds with each other: models that were good at recognizing pixelated images were often bad at recognizing images with high contrast, for example. It might not always be possible to train a single model that passes all stress tests.

One option is to design an additional stage of the training and testing process, in which many models are produced at once instead of just one. These competing models can then be tested again on specific real-world tasks to select the best one for the job.
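A minimal sketch of that extra stage might look like the following: train several models that differ only in their random seed, then choose among them on a stress set meant to mimic real-world shift. The data here is synthetic, standing in for splits like ImageNet and ImageNet-C.

```python
# Sketch: candidate models identical except for random seed, selected by
# performance on a "stress" set rather than the standard test set.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X_train, y_train = make_classification(n_samples=2000, n_features=30,
                                       random_state=0)
X_stress, y_stress = make_classification(n_samples=500, n_features=30,
                                         flip_y=0.2,  # noisier, shifted stand-in
                                         random_state=0)

candidates = [
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                  random_state=seed).fit(X_train, y_train)
    for seed in range(5)  # same architecture, different initializations
]

# Near-identical training accuracy does not imply identical stress accuracy.
for seed, model in enumerate(candidates):
    print(f"seed {seed}: train acc {model.score(X_train, y_train):.3f}, "
          f"stress acc {model.score(X_stress, y_stress):.3f}")

best = max(candidates, key=lambda m: m.score(X_stress, y_stress))
```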

That's a lot of work. But for a company like Google, which builds and deploys big models, it could be worth it, says Yannic Kilcher, a machine-learning researcher at ETH Zurich. Google could offer 50 different versions of an NLP model, and application developers could pick the one that worked best for them, he says.

D'Amour and his colleagues don't yet have a fix but are exploring ways to improve the training process. "We need to get better at specifying exactly what our requirements are for our models," he says. "Because often what ends up happening is that we discover these requirements only after the model has failed out in the world."

Getting a fix is vital if AI is to have as much impact outside the lab as it is having inside. When AI underperforms in the real world, it makes people less willing to use it, says co-author Katherine Heller, who works at Google on AI for healthcare: "We've lost a lot of trust when it comes to the killer applications; that's important trust that we want to regain."

Read the original post:
The way we train AI is fundamentally flawed - MIT Technology Review