Category Archives: Machine Learning

Nudges and machine learning triples advanced care conversations – Penn Today

An electronic nudge to clinicians, triggered by an algorithm that used machine learning methods to flag patients with cancer who would most benefit from a conversation around end-of-life goals, tripled the rate of those discussions. This is according to a new prospective, randomized study of nearly 15,000 patients from Penn Medicine, published in JAMA Oncology.

Early and frequent conversations with patients suffering from serious illness, particularly cancer, have been shown to increase satisfaction, quality of life, and care that's consistent with their values and goals. However, many patients do not get the opportunity to have those discussions with a physician or loved ones because their disease has progressed too far and they're too ill.

"Within and outside of cancer, this is one of the first real-time applications of a machine learning algorithm paired with a prompt to actually help influence clinicians to initiate these discussions in a timely manner, before something unfortunate may happen," says co-lead author Ravi B. Parikh, an assistant professor of medical ethics and health policy and medicine in the Perelman School of Medicine and a staff physician at the Corporal Michael J. Crescenz VA Medical Center. "And it's not just high-risk patients. It nearly doubled the number of conversations for patients who weren't flagged, which tells us it's eliciting a positive cultural change across the clinics to have more of these talks."

Christopher Manz, of the Dana Farber Cancer Institute, who was a fellow in the Penn Center for Cancer Care Innovation at the time of the study, serves as co-lead author.

In a separate JAMA Oncology study, the research team validated the Penn Medicine-developed machine learning tool's effectiveness at predicting short-term mortality in patients in real time, using clinical data from the electronic health record. The algorithm considers more than 500 variables (age, hospitalizations, and co-morbidities, for example) from patient records, all the way up until their appointment. That's one of the advantages of using the EHR to identify patients who may benefit from a timely conversation.
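The articles don't describe the model's internals, but the general pattern, combining many EHR variables into a single risk score and flagging patients above a threshold, can be sketched in a few lines. Everything below (the feature names, weights, bias, and threshold) is invented for illustration and is not Penn Medicine's actual model:

```python
import math

def mortality_risk(features, weights, bias):
    """Logistic-regression-style risk score: a sigmoid of a weighted sum
    of EHR-derived variables (illustrative stand-in, not the real model)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for a handful of the 500+ variables mentioned above.
WEIGHTS = {"age_over_75": 1.2, "hospitalizations_6mo": 0.8, "comorbidity_count": 0.5}
BIAS = -4.0
FLAG_THRESHOLD = 0.25  # patients above this risk get nudged for a conversation

patient = {"age_over_75": 1, "hospitalizations_6mo": 2, "comorbidity_count": 3}
risk = mortality_risk(patient, WEIGHTS, BIAS)
flagged = risk >= FLAG_THRESHOLD
```

In a real deployment the weights would be learned from historical outcomes and the threshold tuned to balance missed patients against alert fatigue.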

Read more at Penn Medicine News.

Read the original:
Nudges and machine learning triples advanced care conversations - Penn Today

Revolutionizing IoT with Machine Learning at the Edge | Perceive’s Steve Teig – IoT For All

In episode 88 of the IoT For All Podcast, Perceive Founder and CEO Steve Teig joins us to talk about how Perceive is bringing the next wave of intelligence to IoT through machine learning at the edge. Steve shares how Perceive developed Ergo, their chip announced back in March, and how these new machine learning capabilities will transform consumer IoT.

Steve Teig is an award-winning technologist, entrepreneur, and inventor on 388 US patents. He's been the CTO of three EDA software companies, two biotech companies, and a semiconductor company; of these, two went public during his tenure, two were acquired, and one is a Fortune 500 company. As the CEO and founder of Perceive, Steve is leading a team building solutions and transformative machine learning technology for consumer edge devices.

To start the episode, Steve gave us some background on how Perceive got started. While serving as CTO of Xperi, Steve worked with a wide array of imaging and audio products and saw an opportunity in making the edge smart by leveraging machine learning at the edge. "What if you could make gadgets themselves intelligent?" Steve asked. "That's what motivated me to pursue it technically and then commercially with Perceive."

At its core, Perceive builds chips and machine learning software for edge inference, providing data-center-class accuracy at the low power that edge devices, like IoT, require. "The kinds of applications we go after," Steve said, "are from doorbell cameras to home security cameras, to toys, to phones; wherever you have a sensor, it would be cool to make that sensor understand its environment without sending data to the cloud."

Of the current solutions for device intelligence, Steve said you have two options and neither of them is ideal: first, you can send all of the data your sensor collects to someone else's cloud, giving up your privacy; or second, you can have a tiny chip that, while low power enough for your device, doesn't provide the computing power to produce answers you can actually trust.

"We fix that problem by providing the kind of sophistication you would expect from the big cloud providers, but at low enough power that you can run it at the edge," Steve said, adding that their chip is 20 to 100 times more power efficient than anything else currently on the market.

Steve also spoke to some of the use cases that Ergo enables. Currently, the main applications are doorbell cameras, home security cameras, and appliances. "As we look forward," Steve said, "being able to put really serious contextual awareness into gadgets opens up all kinds of applications." One of the examples he gave was a microwave that could identify both the user and the food to be heated, and adjust its settings to match that user's preferences. Another example was a robot vacuum cleaner that you could ask to find your shoes.

Changing gears, Steve shared Perceive's philosophy on machine learning, saying that because they were looking to make massive improvements, they had to start fresh: "We had to start with the math. We really started from first principles." That philosophy has led to a number of new and proprietary techniques, on both the software and hardware sides.

Moving more into the industry at large, Steve shared some observations on the smart home space during the pandemic. Those observations highlighted two somewhat conflicting viewpoints: while there has been broader interest in smart home technology, with people spending more time at home, people have also become more sensitive about their privacy. Steve also shared how Ergo handles data in order to meet these security and privacy concerns.

To close out the episode, Steve shared some of the challenges his team faced while developing Ergo and what those challenges meant as he built out the team itself. He also shared some of his thoughts on the future of the smart home and consumer IoT space, with the introduction of these new machine learning capabilities.

Interested in connecting with Steve? Reach out to him on LinkedIn!

About Perceive: Steve Teig, founder and CEO of Perceive, drove the creation of the company in 2018 while CTO of its parent company and investor, Xperi. Launching Perceive, Steve and his team had the ambitious goal of enabling state-of-the-art inference inside edge devices running at extremely low power. Adopting an entirely new perspective on machine learning and neural networks allowed Steve and his team to very quickly build and deploy the software, tools, and inference processor (Ergo) that make up the complete Perceive solution.

(00:50) Intro to Steve

(01:25) How did you come to found Perceive?

(02:30) What does Perceive do? What's your role in the IoT space?

(03:37) What makes your offering unique to the market?

(04:49) Could you share any use cases?

(09:41) How would you describe your philosophy when it comes to machine learning?

(11:37) What is Ergo and what does it do?

(12:39) What does a typical customer engagement look like?

(14:57) Have you seen any change in demand due to the pandemic?

(20:47) What challenges have you encountered building Perceive and Ergo?

(22:24) Where do you see the market going for smart home devices?

Read this article:
Revolutionizing IoT with Machine Learning at the Edge | Perceive's Steve Teig - IoT For All

Mastercard Says its AI and Machine Learning Solutions Aim to Stop Fraudulent Activities which have Increased Significantly due to COVID – Crowdfund…

Ajay Bhalla, President, Cyber and Intelligence Solutions, Mastercard, notes that artificial intelligence (AI) algorithms are part of the payment company's first line of defense in protecting the over 75 billion transactions that Mastercard processes on its network every year.

Bhalla recently revealed the different ways that Mastercard applies its AI expertise to solve some of the most pressing global challenges, from cybersecurity to healthcare, and discussed the impact the COVID-19 pandemic has had on the way we conduct our lives and interact with those around us.

Cybersecurity fraud rates have reached record highs, with nearly 50% of businesses now claiming that they may have been targeted by cybercriminals during the past two years. Fraudulent activities carried out via the Internet may have increased significantly due to the Coronavirus crisis, because many more people are conducting transactions online.

Mastercard aims to protect consumers from becoming victims of online fraud. The payments company has added AI-based algorithms to its network's multi-layered security strategy. This allows Mastercard's network to support a coordinated set of AI-enabled services that act within milliseconds on potential online security threats. Last year, Mastercard reportedly helped save around $20 billion in fraud via its AI-enhanced systems (which include SafetyNet, Decision Intelligence, and Threat Scan).

In statements shared with Arab News, Bhalla noted:

One of the impacts of this pandemic is the rapid migration to digital technologies. Recent data shows that we vaulted five years forward in digital adoption, both consumer and business, in a matter of eight weeks. Whether it's online shopping, contactless payments, or banks transitioning to remote sales and service teams, this trend is here to stay; it is not the new normal, it is the next normal.

Bhalla also mentioned that with many more consumers interacting and performing transactions via the Internet, we're now creating large amounts of data. He revealed that, by 2025, we'll be creating approximately 463 exabytes of data per day, and this number is going to keep increasing rapidly.

He further noted that more professionals are now working from the comfort of their home and that this may have also opened new doors for cybercriminals and hackers.

He remarked:

The current crisis is breeding fear, anxiety and stress, with people understandably worried about their health, safety, family and jobs. Unfortunately, that creates a fertile breeding ground for criminals preying on those insecurities, resulting in more cyberattacks and fraud.

He confirmed that Mastercard's NuData tech has seen cyberattacks increase in both volume and sophistication, with around one in every three online attacks now able to closely emulate human behavior.

Bhalla claims that Mastercard has made considerable investments in AI for over a decade and has added AI capabilities to all key parts of its business operations.

He also noted:

Our AI and machine learning solutions stop fraud, reduce credit risk, fight financial crime, prevent health care fraud and so much more. In health care, we're working with organizations on cyber assessments to help safeguard their cyber systems, staff and patients at this challenging time. In retail, criminals are increasingly targeting digital channels as we shift to shopping online.

He revealed that Card Not Present fraud currently accounts for about 90% of all fraudulent activities carried out via online platforms. "This type of fraud accounted for 75% of all Internet fraud before COVID," Bhalla said. He claims that Mastercard's AI was able to rapidly learn this new behavior and changed its scoring to reflect the new pattern, delivering stronger performance during the pandemic.

Read more:
Mastercard Says its AI and Machine Learning Solutions Aim to Stop Fraudulent Activities which have Increased Significantly due to COVID - Crowdfund...

Abstract Perspective: Long-term PM2.5 Exposure and the Clinical Application of Machine Learning for Predicting Incident Atrial Fibrillation – DocWire…

READ THE ORIGINAL ABSTRACT HERE.

Although atrial fibrillation (AF) often leads to complications such as stroke in patients without an awareness of such preexisting diseases, electrocardiogram screening is not sufficient to detect AF in the general population. Some scoring systems for predicting incident AF have been introduced, including the CHADS2, CHA2DS2-VASc, and HATCH scores; however, their prediction accuracies are not sufficient for wide application. Although epidemiological studies have suggested that an elevated level of ambient particulate matter <2.5 μm in aerodynamic diameter (PM2.5) is consistently associated with adverse cardiac events and arrhythmias, including AF, the role of PM2.5 in incident AF remains to be investigated.
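For context on what the ML model is being compared against, the CHA2DS2-VASc score mentioned above is a simple additive point system. The sketch below transcribes its published point assignments; it is illustrative only and not clinical software:

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """CHA2DS2-VASc: an additive clinical risk score (0-9).
    Point values follow the published scheme."""
    score = 0
    score += 1 if chf else 0               # C: congestive heart failure
    score += 1 if hypertension else 0      # H: hypertension
    if age >= 75:                          # A2: age >= 75 (2 points)
        score += 2
    elif age >= 65:                        # A: age 65-74 (1 point)
        score += 1
    score += 1 if diabetes else 0          # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0     # S2: prior stroke/TIA (2 points)
    score += 1 if vascular_disease else 0  # V: vascular disease
    score += 1 if female else 0            # Sc: sex category (female)
    return score

# Example: a 78-year-old woman with hypertension and diabetes scores 2+1+1+1 = 5.
example = cha2ds2_vasc(age=78, female=True, chf=False, hypertension=True,
                       diabetes=True, stroke_or_tia=False, vascular_disease=False)
```

The study's point is that such fixed-weight scores leave accuracy on the table, which is where a learned model that also ingests exposures like averaged PM2.5 can improve.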

We previously published on the association between PM2.5 and incident AF in the general population. In this study we then applied machine learning methods, which showed improved accuracy for predicting incident AF in the general population when averaged PM2.5 concentration was included, compared to the existing risk scoring systems.

And finally, we developed an online individual risk calculator for predicting incident AF that applies averaged PM2.5 concentration (https://ml4everyoneisk2.shinyapps.io/RiskCalcAFPM25_ISK). This study was performed in the South Korean general population, which is exposed to high levels of air pollution. Further external validation is warranted, especially in Western countries with lower levels of air pollution.

-Dr. Im-Soo Kim, co-author

The rest is here:
Abstract Perspective: Long-term PM2.5 Exposure and the Clinical Application of Machine Learning for Predicting Incident Atrial Fibrillation - DocWire...

Machine-Learning Inference Chip Travels to the Edge – Electronic Design

Flex Logix has unveiled its InferX X1 machine-learning (ML) inference system, which is packed into a 54-mm² chip. The X1 incorporates 64 1D Tensor processing units (TPUs) linked by the XFLX interconnect (Fig. 1). The dual 1-MB level 2 SRAM holds activations, while the level 1 SRAM holds the weights for the next layer of computation. An on-chip FPGA provides additional customization capabilities. There's also a 4-MB level 3 SRAM, an LPDDR4 interface, and a x4 PCI Express (PCIe) interface.

1. The InferX X1s XFLX interconnect links the 1D TPU engines with memory and each other.

The company chose to implement one-dimensional Tensor processors (Fig. 2), which can be combined to handle two- and three-dimensional tensors. The units support a high-precision Winograd acceleration option. This approach is more flexible and delivers a high level of system utilization.

2. The InferX X1 is built around one-dimensional Tensor processors that provide extreme flexibility and high utilization.

The simplified concept and underlying embedded FPGA (eFPGA) architectural approach allows the system to be reconfigured rapidly and enables layers to be fused together. This means that intermediate results can be given to the next layer without having to store them in memory, which slows down the overall system (Fig. 3). Moving data around ML hardware is often hidden, but it can have a significant impact on system performance.

3. Fast reconfiguration and support for soft logic within a process flow can eliminate the need to store intermediate results.
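The layer-fusion idea can be shown with a toy example: instead of materializing one layer's full output buffer and re-reading it in the next layer, the fused version computes both operations in a single pass. This is a conceptual sketch, not Flex Logix's implementation:

```python
def relu(x):
    """Layer 1: elementwise ReLU."""
    return [max(0.0, v) for v in x]

def scale(x, s):
    """Layer 2: elementwise scaling."""
    return [v * s for v in x]

def unfused(x):
    # The intermediate result is fully materialized ("stored in memory")
    # before the next layer reads it back.
    intermediate = relu(x)
    return scale(intermediate, 2.0)

def fused(x):
    # Both layers combined into one pass: no intermediate buffer exists.
    return [max(0.0, v) * 2.0 for v in x]

outputs = fused([-1.0, 0.5, 3.0])
```

On real hardware the two versions compute the same values, but the fused form avoids the off-chip or inter-buffer traffic that, as the article notes, can quietly dominate system performance.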

The inclusion of an eFPGA and a simplified, reconfigurable TPU architecture makes it possible for Flex Logix to provide a more adaptable ML solution. It can handle a standard conv2d model as well as a depth-wise conv2d model.

The chips are available separately or on a half-height, half-length PCIe board (Fig. 4). The PCIe board includes a x8 PCIe interface. The X1P1 board has a single chip, while the X1P4 board incorporates four chips. Both plug into a x8 PCIe slot. The reason for going that route rather than x16 for the X1P4 is that server motherboards typically have more x8 slots than x16, and the throughput difference for ML applications is minimal. As a result, more boards can be packed into a server. The X1P1 is only $499, while the X1P4 goes for $999.

4. The InferX X1 is available on a PCIe board. The X1P1 has a single chip, while the X1P4 includes four chips.

The X1M M.2 version is expected to arrive soon. The 22- × 80-mm module has a x4 PCIe interface and will be available in 2021. It targets embedded servers, PCs, and laptops.

Read more:
Machine-Learning Inference Chip Travels to the Edge - Electronic Design

Machine Learning Data Catalog Software Market share forecast to witness considerable growth from 2020 to 2025 | By Top Leading Vendors IBM, Alation,…

The latest research report, titled "Global Machine Learning Data Catalog Software Market Insights, Forecast to 2025," includes a special section on the impact of COVID-19. It also presents the Machine Learning Data Catalog Software market outlook (by major key players, types, applications, and leading regions), business assessment, competition scenario, and trends. The report gives a 360-degree overview of the competitive landscape of the industry. SWOT analysis has been used to understand the strengths, weaknesses, opportunities, and threats facing the businesses. Moreover, it offers estimations of the CAGR, market share, and market size of key regions and countries. Players can use this study to explore untapped Machine Learning Data Catalog Software markets to extend their reach and create sales opportunities.

Top key players profiled in the report include: IBM, Alation, Oracle, Cloudera, Unifi, Anzo Smart Data Lake (ASDL), Collibra, Informatica, Hortonworks, Reltio, Talend, and more

To get a PDF sample copy of the report (with COVID-19 impact analysis): https://www.globmarketreports.com/request-sample/9714

The Machine Learning Data Catalog Software market competitive landscape section offers data and details by company. It provides a complete analysis and precise statistics on revenue by the major participants for the period 2020-2025. The report also illustrates the micro- and macroeconomic factors governing the Machine Learning Data Catalog Software market that seem to have a dominant and long-term impact, directing the course of popular trends in the global market.

Market Type Segmentation: Cloud Based, Web Based

Industry Segmentation: Large Enterprises, SMEs

Regions Covered in the Global Machine Learning Data Catalog Software Market: The Middle East and Africa (GCC Countries and Egypt); North America (the United States, Mexico, and Canada); South America (Brazil, etc.); Europe (Turkey, Germany, Russia, UK, Italy, France, etc.); Asia-Pacific (Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia)

Years Considered to Estimate the Market Size: History Year: 2015-2019; Base Year: 2019; Estimated Year: 2020; Forecast Year: 2020-2025

Get Chance of up to 30% Extra Discount @ https://www.globmarketreports.com/request-discount/9714

Some Major TOC Points:

For More Information with including full TOC: https://www.globmarketreports.com/industry-reports/9714/Machine-Learning-Data-Catalog-Software-market

Customization of the Report: Glob Market Reports provides customization of reports as per your needs. This report can be personalized to meet your requirements. Get in touch with our sales team, who will ensure you get a report that suits your necessities.

Get Customization of the [emailprotected]:https://www.globmarketreports.com/report/request-customization/9714/Machine-Learning-Data-Catalog-Software-market

Get in Touch with Us: Mr. Marcus Kel | Call: +1 915 229 3004 (U.S.), +44 7452 242832 (U.K.) | Email: [emailprotected]

More here:
Machine Learning Data Catalog Software Market share forecast to witness considerable growth from 2020 to 2025 | By Top Leading Vendors IBM, Alation,...

AI and machine learning: a gift, and a curse, for cybersecurity – Healthcare IT News

The Universal Health Services attack this past month has brought renewed attention to the threat of ransomware faced by health systems and what hospitals can do to protect themselves against a similar incident.

Security experts say that the attack, beyond being one of the most significant ransomware incidents in healthcare history, may also be emblematic of the ways machine learning and artificial intelligence are being leveraged by bad actors.

With some kinds of "early worms," said Greg Foss, senior cybersecurity strategist at VMware Carbon Black, "we saw [cybercriminals] performing these automated actions, and taking information from their environment and using it to spread and pivot automatically; identifying information of value; and using that to exfiltrate."

The complexity of performing these actions in a new environment relies on "using AI and ML at its core," said Foss.

Once access is gained to a system, he continued, much malware doesn't require much user interaction. But although AI and ML can be used to compromise systems' security, Foss said, they can also be used to defend it.

"AI and ML are something that contributes to security in multiple different ways," he said. "It's not something that's been explored, even until just recently."

One effective strategy involves user and entity behavior analytics, said Foss: essentially when a system analyzes an individual's typical behavior and flags deviations from that behavior.

For example, a human resource representative abruptly running commands on their host is abnormal behavior and might indicate a breach, he said.
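That kind of behavior analytics can be reduced, in its simplest form, to flagging activity that deviates strongly from a user's historical baseline. The sketch below uses a z-score on daily command counts; the numbers and threshold are invented for illustration, and production UEBA systems model far richer behavior:

```python
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's count if it sits more than z_threshold standard
    deviations above the user's historical mean (toy UEBA rule)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today > mean
    return (today - mean) / stdev > z_threshold

# An HR rep who normally runs 0-2 shell commands a day suddenly runs 40.
commands_per_day = [0, 1, 0, 2, 1, 0, 1, 2, 0, 1]
alert = is_anomalous(commands_per_day, today=40)
```

A real system would score many signals at once (logins, hosts touched, data volumes) and weigh them together before raising an alert.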

AI and ML can also be used to detect subtle patterns of behavior among attackers, he said. Given that phishing emails often play on a would-be victim's emotions, playing up the urgency of a message to compel someone to click on a link, Foss noted that automated sentiment analysis can help flag whether a message seems abnormally angry.
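A very crude stand-in for such sentiment analysis (real systems use trained language models; the word list here is invented) could score a message's urgency by high-pressure keyword density:

```python
URGENT_TERMS = {"immediately", "urgent", "now", "suspended", "verify", "final"}

def urgency_score(message):
    """Fraction of words that are high-pressure terms (toy heuristic)."""
    words = [w.strip(".,!:").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in URGENT_TERMS)
    return hits / len(words)

phishy = "URGENT: verify your account immediately or it will be suspended!"
benign = "Attached are the meeting notes from Tuesday."
flagged = urgency_score(phishy) > urgency_score(benign)
```

The point is the same one Foss makes: emotional pressure leaves a measurable statistical fingerprint that a classifier can learn, even when the wording varies.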

He also noted that email structures themselves can be a so-called tell: bad actors may rely on a go-to structure or template to try to provoke responses, even if the content itself changes.

Or, if someone is trying to siphon off earnings or medication, particularly relevant in a healthcare setting, AI and ML can work in conjunction with a supply chain to point out aberrations.

Of course, Foss cautioned, AI isn't a foolproof bulwark against attacks. It's subject to the same biases as its creators, and "those little subtleties of how these algorithms work allow them to be poisoned as well," he said. In other words, it, like other technology, can be a double-edged sword.

Layered security controls, robust email filtering solutions, data control and network visibility also play a vital role in keeping health systems safe.

At the end of the day, human engineering is one of the most important tools: training employees to recognize suspicious behavior and implement strong security responses.

Using AI and ML "is only starting to scratch the surface," he said.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Read more:
AI and machine learning: a gift, and a curse, for cybersecurity - Healthcare IT News

Teaming Up with Arm, NXP Ups Its Place in the Machine Learning Industry – News – All About Circuits

One of the most popular topics in the technology industry, even for electrical engineers, is machine learning. The newest company to make headlines in the field is NXP Semiconductors, with two big announcements today.

Looking to further establish its place in the machine learning industry, NXP has made two strategic partnerships, one with Arm and one with Canada-based Au-Zone. All About Circuits sat down with executives at NXP to understand what the news really means.

On the hardware side of things, NXP announced today that it has been collaborating with Arm as the lead technology partner on the new Arm Ethos-U65 microNPU (neural processing unit). This technology partnership allows NXP to integrate the Ethos-U65 microNPU into its next generation of i.MX applications processors, with the hopes of delivering energy-efficient, cost-effective ML solutions.

NXP is particularly excited about this partnership because this new microNPU is able to maintain the MCU-class power efficiency of the Ethos-U55 but is capable of being used in systems with higher-performance Cortex-A-based SoCs.

Some standout features of the Ethos-U65 include model compression, on-the-fly weight decompression, and optimization strategies for DRAM and SRAM.

What's particularly unique about this SoC is that the NPU works alongside a Cortex-M-based processor. In our interview, Ben Eckermann, Senior Principal Engineer and Systems Architect at NXP Semiconductors, explained why this feature is advantageous.

Eckermann explains, "What's key here is that, similar to the U-55, [the Ethos-U65] doesn't attempt to do everything as one standalone black box. It relies on the Cortex-M processor sitting beside it."

He continues, "The Cortex-M processor is able to handle any network operators that either occur so infrequently that there's no point in dedicating hardware resources in the U-65 to it or some that just don't provide you enough bang for yourbuck, where some things can be done efficiently on the CPU like the very last layers of a NN.

On the software side of things, NXP today announced that it has established an exclusive partnership with Au-Zone to expand NXP's eIQ machine learning (ML) software development environment.

What NXP was really after was Au-Zone's DeepView ML Tool Suite, which is said to augment eIQ with an intuitive graphical user interface (GUI) and workflow. The hope is that this added functionality will make the development, training, and deployment of NN models and ML workloads straightforward and easy for designers of all experience levels.

The tool includes features to prune, quantize, validate, and deploy public or proprietary NN models on NXP devices.

Together, Au-Zone and NXP look to optimize NNs for NXP-based SoCs, providing developers with run-time insights on NN model architectures, system parameters, and run-time performance.

A key feature of this run-time inference engine is that it optimizes the system memory usage and data movement uniquely for each SoC architecture.

Gowri Chindalore, head of NXP's business and technology strategy for edge processing, claims that this feature offers customers a "double optimization," optimizing both the neural network and then further optimizing for the specific hardware.

With the introduction of the Arm Ethos-U65 microNPU, NXP will be able to provide new functionality and energy savings in future lines of i.MX application processors. This may make way for more powerful, low-energy designs for IoT and other edge applications.

Introducing Au-Zone's DeepView Tool Suite will also benefit design engineers because the training, optimization, and deployment of NNs will not only be made simpler but will also be optimized for the specific hardware they are running on.

This too should only benefit future developments in IoT and edge applications on NXP-based SoCs.

Read more:
Teaming Up with Arm, NXP Ups Its Place in the Machine Learning Industry - News - All About Circuits

Soleadify secures seed funding for database that uses machine learning to track 40M businesses – TechCrunch

Usually, databases about companies have to be painstakingly updated by humans. Soleadify is a startup that uses machine learning to create profiles for businesses in any industry. The first of the company's products is a business search engine that keeps over 40 million business profiles updated, currently used by hundreds of companies in the USA, Europe, and Asia for sales and marketing activities.

It has now secured $1.5 million in seed-round funding from European venture firms GapMinder Venture Partners and DayOne Capital, as well as several prominent business angels, through Seedblink, an equity crowdfunding platform based out of Bucharest, Romania.

The company plans to use the funds to further improve their technology, build partnerships and expand their marketing capabilities.

On top of Soleadify's data, they build solutions for prospecting, market research, customer segmentation, and industry monitoring.

The way it's done is by frequently scanning billions of webpages, identifying and classifying relevant data points, and creating connections between them. The result is a database of business data that is normally only available through laborious, manual research.

View original post here:
Soleadify secures seed funding for database that uses machine learning to track 40M businesses - TechCrunch

Machine Learning Capabilities Come to the Majority of Open Source Databases with MindsDB AI-Tables – PR Web


BERKELEY, Calif. (PRWEB) October 20, 2020

MindsDB, the open source AI layer for existing databases, today announced official integrations with open source relational databases PostgreSQL and MySQL. These join a growing list of integrations with community-driven databases including MariaDB and Clickhouse to bring the machine learning capabilities of MindsDB to over 55% of open source databases.

MindsDB brings machine learning to those who work with data, allowing users to create and deploy ML models using standard SQL queries and increase the efficiency of AI projects. Through the use of AI-Tables, database users can apply machine learning models straight from their database and automatically generate predictions as simply as querying a table.

PostgreSQL is a powerful, open source object-relational database system with over 30 years of active community development. The database has a strong reputation for reliability, feature robustness, and performance. MySQL, owned by Oracle, is one of the most popular open source databases, trusted by organizations such as Facebook, Google, and Adobe. Together, the two represent 45% of the active open source database market.

The announcement was made as part of MindsDB's presentation during Percona Live Online 2020, the largest annual open source database conference.

"Bringing machine learning resources to the open source database community is a huge part of our mission to democratize machine learning," said MindsDB co-founder Adam Carrigan. "Staying connected to this community has helped us identify the main challenges of users that know their data best and give them machine learning tools to help them solve those problems."

Bringing Machine Learning to the Database

"I am excited to see MindsDB providing the power of machine learning without leaving the convenience of SQL. There has been significant demand in the community for machine learning tools that work with on-premises data, can be run by the average database user, and are delivered cost-effectively," said Peter Zaitsev, CEO of Percona. "As the open source database community gathers every year to share their knowledge at Percona Live, it is extremely exciting to see companies launch new solutions, like MindsDB with AI-Tables, that can expand what open source databases deliver to users."

With the newly announced MindsDB integrations, MySQL and PostgreSQL users can use AI-Tables to learn and make predictions from their data with no machine learning experience. Users can now run a simple SQL query to deploy automated machine learning models directly inside their database.

The key to the MindsDB tool is the use of virtual AI-Tables which allow any user to easily train and test machine learning models with basic SQL as if they were standard database tables.
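Conceptually, a virtual AI-Table behaves like a table whose rows are computed by a model at query time rather than stored, so a prediction looks like an ordinary row lookup. The toy Python analogy below illustrates that idea only; it is not MindsDB's implementation, and the rental-price model in it is made up:

```python
class AITable:
    """A 'virtual table' whose selects are answered by a model, so
    querying a prediction feels like querying stored rows."""
    def __init__(self, predict_fn, target_column):
        self.predict_fn = predict_fn
        self.target = target_column

    def select(self, **where):
        # Instead of looking a row up, run inference on the WHERE values
        # and return a row that includes the predicted target column.
        row = dict(where)
        row[self.target] = self.predict_fn(where)
        return row

# Hypothetical model: rental price as a linear function of square footage.
rentals = AITable(lambda w: 500 + 2 * w["sqft"], target_column="price")

# The SQL idea "SELECT price FROM rentals WHERE sqft = 900" becomes:
prediction = rentals.select(sqft=900)
```

In MindsDB itself this pattern is expressed entirely in SQL: the user trains a predictor from existing tables and then SELECTs from it as if it were a standard table.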

Community-focused Innovation

"As more open source database users get hold of machine learning tools, it will be very exciting to see what the community will produce and how we'll benefit from AI and ML going forward," said OpenOcean founder and general partner Patrik Backman. "Now is a good time for every database developer to get their hands dirty and try out with their own data how the MindsDB integration practically works."

"While MindsDB is now available in over half of open source databases, we will continue to work towards our goal of democratizing machine learning for every database user," said Carrigan.

About MindsDB

MindsDB helps organizations to turn data into business predictions by adding machine learning capabilities to their databases. MindsDB provides an AI layer for existing databases that allows organizations to effortlessly and cost-effectively develop, train, and deploy state-of-the-art ML models using standard SQL queries to get accurate business predictions. Follow the company on LinkedIn, Twitter and Facebook and visit the blog for additional resources.


Originally posted here:
Machine Learning Capabilities Come to the Majority of Open Source Databases with MindsDB AI-Tables - PR Web