Category Archives: Machine Learning

4 different applications of machine learning that are revolutionizing society as we know it – KnowTechie

Machine Learning (ML) is an up-and-coming concept in the field of artificial intelligence. It involves a combination of algorithms and models that computers use to carry out specific tasks.

Under ML, computers don't need explicit instructions to perform the tasks that humans want them to. Instead, they use sample data to make predictions or decisions without being explicitly programmed to carry out an assignment.

This revolutionary concept is used in all types of industries. It's changing the way we use the Internet. It has also altered how we conduct business, as companies like BairesDev create custom software so businesses can take advantage of ML. Check out the following 4 different applications of machine learning that are revolutionizing our society.

Most people aren't strangers to the luxury of virtual personal assistants. While some of these technologies haven't been released within the last year, they are continually being updated to meet human needs.

Machine learning is an important element of virtual personal assistants. ML allows these assistants to better collect information that a user provides them. Later on, a user can view results from a virtual assistant that is more customized to them.

Virtual assistants are built into platforms, including the Google Home and Amazon Echo smart speakers. Consumers also have access to these assistants on their smartphones. Virtual assistants like Siri and Bixby all allow you to navigate your phone via voice commands.

Machine learning in manufacturing increases both production speed and workforce productivity. By incorporating ML into manufacturing machines, companies have lowered downtime and overall labor costs.

One company that's using ML in its manufacturing process is General Electric. General Electric manufactures a variety of products ranging from home appliances to large industrial equipment. The company uses a Brilliant Manufacturing Suite to link every part of the manufacturing process into one global system.

GE has over 500 factories located in countries all around the world. The company still has a long way to go in converting them all into smart factories, but it's taking large strides to get there. Some other manufacturing companies getting on board with ML include Siemens, KUKA, and Fanuc.

We all have used social media at one point or another. Its addictive nature tends to rein us in for hours at a time. Social media captures users' attention through machine learning. ML lets Facebook and other platforms customize your news feed and display effective ads.

Another example of ML on social media is the use of facial recognition. When you upload a picture and Facebook recognizes your friends' faces, machine learning is in effect. From there, Facebook will use facial recognition to connect you with others on the platform. This leads to a better overall user experience.

The final application of ML we will discuss is its presence in online customer support chats. Not all companies want to hire a live person to answer customer inquiries. It requires time and resources to train someone to become an expert on all aspects of a company.

A popular alternative has become the implementation of live chatbots. These bots extract website content via machine learning. From there, they use the information to answer customers' live questions. With time, chatbots improve the quality of their answers. Their versatile algorithms help them better understand customers' questions as they answer more of them.

There are many more practical applications of machine learning to be discovered. However, we hope that this list has opened your eyes to this field of AI. As you have witnessed, ML can improve the quality of our work and social lives! It's a fascinating concept that has a lot of room for growth.

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.


Demystifying the world of deep networks – MIT News

Introductory statistics courses teach us that, when fitting a model to some data, we should have more data than free parameters to avoid the danger of overfitting: fitting noisy data too closely, and thereby failing to fit new data. It is surprising, then, that in modern deep learning the practice is to have orders of magnitude more parameters than data. Despite this, deep networks show good predictive performance, and in fact do better the more parameters they have. Why would that be?

It has been known for some time that good performance in machine learning comes from controlling the complexity of networks, which is not just a simple function of the number of free parameters. The complexity of a classifier, such as a neural network, depends on measuring the size of the space of functions that this network represents, with multiple technical measures previously suggested: Vapnik-Chervonenkis dimension, covering numbers, or Rademacher complexity, to name a few. Complexity, as measured by these notions, can be controlled during the learning process by imposing a constraint on the norm of the parameters; in short, on how big they can get. The surprising fact is that no such explicit constraint seems to be needed in training deep networks. Does deep learning lie outside of classical learning theory? Do we need to rethink the foundations?
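The norm constraint mentioned above can be made concrete with a small sketch. The code below is not from the paper; it is a minimal, hypothetical illustration of explicit complexity control, in which a linear classifier's weights are projected back onto a ball of fixed radius after every gradient step.

```python
import numpy as np

def project(w, max_norm):
    """Rescale w so that ||w|| <= max_norm (projection onto a norm ball)."""
    norm = np.linalg.norm(w)
    return w if norm <= max_norm else w * (max_norm / norm)

def train(X, y, lr=0.1, max_norm=1.0, steps=200):
    """Logistic-regression training with an explicit norm constraint."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / len(y)          # logistic-loss gradient
        w = project(w - lr * grad, max_norm)   # step, then constrain the norm
    return w

# tiny linearly separable toy problem
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = train(X, y)
```

With the projection in place, the weights can never grow beyond the chosen radius; the paper's point is that plain gradient descent on deep classifiers appears to achieve a similar control implicitly, without any such explicit step.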

In a new Nature Communications paper, "Complexity Control by Gradient Descent in Deep Networks," a team from the Center for Brains, Minds, and Machines led by Director Tomaso Poggio, the Eugene McDermott Professor in the MIT Department of Brain and Cognitive Sciences, has shed some light on this puzzle by addressing the most practical and successful applications of modern deep learning: classification problems.

"For classification problems, we observe that in fact the parameters of the model do not seem to converge, but rather grow in size indefinitely during gradient descent. However, in classification problems only the normalized parameters matter, i.e., the direction they define, not their size," says co-author and MIT PhD candidate Qianli Liao. "The not-so-obvious thing we showed is that the commonly used gradient descent on the unnormalized parameters induces the desired complexity control on the normalized ones."
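Liao's observation that only the direction of the parameters matters in classification is easy to illustrate. The snippet below is our own toy example, not code from the study: rescaling a linear classifier's weight vector by any positive constant leaves every predicted label unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))     # 100 random inputs
w = rng.normal(size=5)            # weight vector of a linear classifier

labels = np.sign(X @ w)                     # labels from w
labels_scaled = np.sign(X @ (1000.0 * w))   # labels from a 1000x larger w
# same direction => identical predictions, regardless of the norm
```

This is why the unnormalized parameters can diverge during training while the classifier itself, which depends only on the normalized direction, still converges to something well behaved.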

"We have known for some time in the case of regression for shallow linear networks, such as kernel machines, that iterations of gradient descent provide an implicit, vanishing regularization effect," Poggio says. "In fact, in this simple case we provably know that we get the best-behaving maximum-margin, minimum-norm solution. The question we asked ourselves, then, was: Can something similar happen for deep networks?"

The researchers found that it does. As co-author and MIT postdoc Andrzej Banburski explains, "Understanding convergence in deep networks shows that there are clear directions for improving our algorithms. In fact, we have already seen hints that controlling the rate at which these unnormalized parameters diverge allows us to find better-performing solutions and find them faster."

What does this mean for machine learning? There is no magic behind deep networks. The same theory behind all linear models is at play here as well. This work suggests ways to improve deep networks, making them more accurate and faster to train.


America Must Shape the World’s AI Norms or Dictators Will – Defense One

Four former U.S. defense secretaries issue a warning about China and a wake-up call to Americans on artificial intelligence.

As Secretaries of Defense, we anticipated and addressed threats to our nation, sought strategic opportunities, exercised authority, direction, and control over the U.S. military, and executed many other tasks in order to protect the American people and our way of life. During our combined service leading the Department of Defense, we navigated historical inflection points: the end of the Cold War and its aftermath, the War on Terror, and the reemergence of great power competition.

Now, based on our collective experience, we believe the development and application of artificial intelligence and machine learning will dramatically affect every part of the Department of Defense, and will play as prominent a role in our country's future as the many strategic shifts we witnessed while in office.

The digital revolution is changing our society at an unprecedented rate. Nearly 60 years passed between the construction of the first railroads in the United States and the completion of the First Transcontinental Railroad. Smartphones were introduced just 20 years ago and have already changed how we manage our finances, connect with family members, and conduct our daily lives.

AI will have just as significant an impact on our national defense, and likely in even less time. Its effects, however, will extend beyond the military to the rest of American society. AI has already changed health care, business practices, and law enforcement, and its impact will only increase.


As this AI-driven transformation occurs, we must keep our democratic values firmly in mind and at the center of any dialogue about AI. Developers embed their values into their products whether they mean to or not. Social media platforms make tradeoffs between free speech and protection from harassment. Smartphone companies choose whether or not to develop operating systems that block the activation of cameras and microphones without users' permission. Just as surely, AI developed by authoritarian governments and the companies they finance will reflect authoritarian values.

However, if designed with American values, AI can empower individuals by freeing them from mundane tasks, by increasing access to information, and by helping optimize decisions, all while respecting individual liberties and fundamental human rights. We've seen examples of how other AI designs will empower governments, particularly authoritarian governments. Their AI designs value those in power more than their citizens, by amplifying sensors that monitor populations, valuing state access to data more than privacy, and creating automation that empowers central decision makers. Just as a highway system that centralizes major cities naturally increases those cities' importance, societies using a network architecture and series of AI models with embedded authoritarian values will begin to reflect those same values.

Americans must ensure we deliberately and carefully embed our values into the technology that is already shaping our world. To do so, we need to lead the world in AI research and development, provide commercial and public systems to the world that reflect democratic values, and lead the global conversation on AI standards in concert with our allies, especially regarding AI's use in war.

The American people need to play an active role by contacting their representatives, participating in public forums, and shaping private sector decisions. The DoD must invest in research and development in fields with few incentives for private sector investment. It must also lead the world in establishing standards for the ethical and safe use of AI by ensuring AI does not increase the risk of escalation and behaves as its users intend during military operations.

The United States has allowed China to begin shaping the conversation about norms. The Defense Department this week issued ethics guidelines for artificial intelligence, but it's only a start. If we do not correct this deficiency, we cannot guarantee that the technology that shapes the world our children and grandchildren will live in will reinforce rather than threaten the freedoms we have enjoyed.

The government is already working to ensure the United States and our allies lead the world in the development and use of AI. Departments and agencies have launched critical AI initiatives. Congress is playing a prominent role by developing a broad array of legislative initiatives, including the creation of the National Security Commission on Artificial Intelligence. We urge our fellow Americans in academia, industry, and from across the country to educate themselves about what's at stake, and to work together to understand the opportunities and challenges associated with this emerging technology.

Our country entered the First World War to help decide the great question of that time: Will humanity make a world safe for democracy? We didn't seek a world filled with democracies, or even led by democracies, just a world safe for democracies to exist. Today we've come far, but we must not lose sight of the threats our country and its leaders understood more than a century ago: that the world is not a safe place for nations committed to individual liberties and other fundamental human rights by default, that global stability is not the norm, and that regimes that value their own power over the freedom and rights of their people would persist. AI will play an important role in every American's future. If we do not lead its development and ensure it reflects our values, authoritarian regimes will ensure it reflects theirs. This is not just an issue of technology; it is an issue of national security.


Google recognizes machine learning and computer systems experts with Faculty Research Award – U of T Engineering News

U of T Engineering professors Scott Sanner (MIE) and Vaughn Betz (ECE) are among this years recipients of the Google Faculty Research Award. The program supports world-class research in computer science, engineering and related fields, and facilitates collaboration between researchers at Google and universities.

Only 15 per cent of applicants receive funding. This year, Google received more than 900 proposals from 50 countries and more than 330 universities worldwide.

"Given the high selectivity of this program, it is a tremendous accomplishment for professors Sanner and Betz to receive Google Faculty Research Awards," says Professor Ramin Farnood, Vice-Dean of Research, U of T Engineering. "It is a testament to the calibre of their work that they are being recognized amongst the very best institutions in the world."

Sanner joins a list of researchers from Stanford University, the Massachusetts Institute of Technology (MIT) and Carnegie Mellon University to be awarded in the Machine Learning category. His team will use the funding to develop more personalized and interactive conversational assistants by leveraging recent advances in deep learning.

Although Siri, Alexa and Google Assistant have become useful tools for consumers, Sanner points out that they currently do not provide highly personalized recommendations for questions such as, "What movie should I see tonight?"

"These systems usually can't handle rich, natural language interactions like, 'Can you give me something a little lighter?' in response to a recommendation to see Goodfellas," says Sanner.

Though it might seem that voice-based assistants are on the brink of achieving those capabilities, Sanner says it's more complex than most imagine.

Personalized recommendations pose a style of interaction that is very different from the rule-based template and curated web-search technology that largely powers the existing conversational assistants of today.

Getting Siri or Alexa to understand how natural language in human interactions should influence future personalized recommendations means relying on machine learning and deep learning, as opposed to rules and web search.

"To date, few researchers have investigated how these various technologies can dovetail to power interactive, conversation-based recommendations," adds Sanner.

For Betz, who was awarded in the Systems category alongside researchers from Harvard University, the University of Glasgow, and Cornell University, the funding will go towards making computer-aided design (CAD) tools that significantly speed up the programming and manufacturing of field-programmable gate arrays (FPGAs).

FPGAs are computer chips that can be reprogrammed to implement a large variety of circuits, and they are used in thousands of today's electronic systems, from MRI machines to cellphone towers to automotive electronics.

"As we continue to implement extremely complicated systems and larger designs in FPGAs, current CAD tools can take hours or even days to complete, causing major productivity bottlenecks for the engineers doing these designs," says Betz.

Betz's team is looking not only to make the CAD tools faster at producing FPGA designs, but also to ensure the tools are general enough to efficiently target a wide variety of chips. Their project, Verilog-to-Routing (VTR), will be open source to enable other researchers to build upon their infrastructure.

"Faster tools lead to more productive engineers and hence, better electronic systems," says Betz.

"It's great to receive this funding," he adds. "I know Google funds a very wide variety of research, so it is very competitive. This award is a great validation of the project and helps us expand the scope of our work."


Brain wiring could be behind learning difficulties, say experts – The Guardian

Learning difficulties are not linked to differences in particular brain regions, but in how the brain is wired, research suggests.

According to figures from the Department for Education, 14.9% of all pupils in England about 1.3 million children had special educational needs in January 2019, with 271,200 having difficulties that required support beyond typical special needs provision. Dyslexia, attention deficit hyperactivity disorder (ADHD), autism and dyspraxia are among conditions linked to learning difficulties.

Now experts say different learning difficulties are not specific to particular diagnoses, nor are they linked to particular regions of the brain as has previously been thought. Instead the team, from the University of Cambridge, say learning difficulties appear to be associated with differences in the way connections in the brain are organised.

Dr Roma Siugzdaite, a co-author of the study, said it was time to rethink how children with learning difficulties were labelled.

"We know that children with the same diagnoses can have very different profiles of problems, and our data suggest that this is because the labels we use do not map on to the reasons why children are struggling; in other words, diagnoses do not map on to underlying neural differences," she said. "Labelling difficulties is useful for practical reasons, and can be helpful for parents, but the current system is too simple."

Writing in the journal Current Biology, the team report how they made their discovery using a type of artificial intelligence called machine learning, which picks up on patterns within data.

The team drew on data from 479 children, 337 of whom had learning difficulties regarding performance in areas such as vocabulary, listening skills and problem-solving.

These data were presented to a machine learning system, which produced six chief categories reflecting the children's cognitive abilities. The team found only 31% of children in the category reflecting the best performance were those with learning difficulties, while 97% of children in the category reflecting the poorest performance had learning difficulties.
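The kind of data-driven grouping described above can be sketched with a simple clustering algorithm. The snippet below is purely illustrative and uses synthetic scores, not the study's data or its actual method; it shows how a plain k-means pass can recover categories from cognitive-test-like features without any diagnostic labels.

```python
import numpy as np

def kmeans(X, centers, iters=50):
    """Plain k-means: assign each point to its nearest center, then recompute."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

rng = np.random.default_rng(1)
# synthetic scores (vocabulary, listening, problem-solving) for two groups
low = rng.normal(40, 5, size=(50, 3))
high = rng.normal(70, 5, size=(50, 3))
X = np.vstack([low, high])

# deterministic init: one seed point from each end of the data
labels, centers = kmeans(X, centers=np.array([X[0], X[-1]]))
```

On these well-separated synthetic profiles the two planted groups are recovered cleanly; the study's contribution is precisely that such data-driven categories need not align with clinical labels like dyslexia or ADHD.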

Further work showed the system accurately assigned children into a wide range of categories relating to their cognitive abilities. However, the team found no link between these categories and particular diagnoses such as dyslexia, autism or ADHD.

"Having particular diagnoses doesn't tell you about the kind of cognitive profile the children have," said Dr Duncan Astle, another author of the study.

"Whilst diagnoses might be important, interventions should look beyond the label," he added, noting children with different diagnoses may benefit from similar interventions while those with the same diagnosis may need different forms of support.

The researchers then extracted information from brain scans of the children and fed it into a machine learning system. This generated 15 chief categories based on the structure of brain regions.

However, the team found that predictions of the cognitive abilities of a child were only about 4% better when based on their brain scans than by relying on guesswork alone.

"There is a whole literature of people saying: 'This brain structure is related to this cognitive difficulty in kids who struggle, and this brain structure is related to that cognitive difficulty,'" said Astle. However, he added, the new study suggested that was not the case.

The team then turned to another feature of the brain: its wiring. Using data from 205 children, the team found all showed similar efficiency of communication across the brain, with certain areas, known as hubs, showing many connections.

However, the children with learning difficulties showed different levels of connections in these hubs than those without. To explore whether this was important, the team turned to computer modelling, revealing the better the childrens cognitive abilities, the greater the drop in brain efficiency if the hubs were lost.
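The modelling idea, that losing hub nodes costs a well-connected brain more of its communication efficiency, can be sketched on a toy graph. This is our own illustration, not the study's model: global efficiency (the average inverse shortest-path length over node pairs) drops sharply when a hub is removed.

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted graph (dict of neighbor sets)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs."""
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for u in nodes:
        dist = shortest_paths(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

# toy network: node 0 is a hub connected to everyone
hub_net = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}}
# the same network with the hub deleted: communication efficiency collapses
no_hub = {1: {2}, 2: {1}, 3: {4}, 4: {3}}
```

Here the hub network has a global efficiency of 0.8, while removing the hub fragments the graph and drops it to 1/3, mirroring the intuition that "hubbier" brains have more to lose from hub disruption.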

"The hubbiness of a child's brain was a strong predictor of their cognitive profiles," said Siugzdaite. "Children whose brains used hubs had higher cognitive abilities. We observed that in the case of the children who are struggling at school, they don't rely too much on these hubs."

Siugzdaite said the study raised further questions, including what biological or environmental factors could affect the development of such hubs, and whether some hubs were more important for particular cognitive skills.

However, the study has limitations, including that the team did not look at other issues, such as social behaviour, which may be linked to different diagnoses and brain structure.

Dr Tomoki Arichi from the Centre for the Developing Brain at King's College London, who was not involved in the research, said the study added to a growing body of evidence that learning difficulties are better understood by looking at the skills people struggle with, rather than by focusing on particular diagnoses.

Arichi said the research offered good evidence that how connections in the brain are organised is important in learning difficulties, but added: "Understanding how this actually develops and then causes difficulties is still extremely complex, however. It is still possible that what they are seeing is a consequence rather than a cause, or is just a snapshot of an effect that is changing through childhood."


Interest in machine learning and AI up, though slowing, one platform reports – HR Dive

Dive Brief:

As technologies such as AI and machine learning revolutionize the workplace, learning and development is coming to the forefront of talent management. Preparing workers for AI and automation will lead learning trends in 2020, according to a November 2019 Udemy report. While many workplaces will train employees to sharpen their tech skills, the report said, learning professionals will also need to focus on soft skills and skills related to project management, risk management and change management.

About 120 million workers around the world will need access to retraining opportunities, a need at least partly driven by AI and automation, according to a report from IBM. This need vastly outpaces the number of organizations equipped with resources that suffice for such an effort, however.

Platforms such as O'Reilly may aid in filling this gap. Third-party training programs are growing in popularity with seemingly positive results. Managers may prefer coders with training from a boot camp, for example, a recent report from HackerRank found. But there has been at least one report that external L&D programs boast false results; New York Magazine reported Lambda School, "a 'boot camp' for people who want to quickly learn how to code," has inflated the number of job placements secured by its graduates.


DrChrono teams up with Cognition IP on machine learning patents, insurer deal brings Pager to South America and more digital health deals -…

DrChrono, a digital health company that works with EHRs, is teaming up with Cognition IP to help it draft five patent applications. The new patents will primarily be focused on machine learning technology. The pair are also going to be working on pushing through two patents that have been stalled for years.

"It was crucial to work with a partner that had a deep understanding of healthcare and machine learning," Daniel Kivatinos, cofounder and COO of DrChrono, said in a statement. "We wanted to ensure that our intellectual property was patented and protected, and Cognition IP helped us do just that by successfully expanding our portfolio. Investing in machine learning is critical to the future of our healthcare platform, which is used by thousands of medical practices."

Virtual care company Pager announced that it is launching in South America. It inked a deal with insurer Seguros SURA in Colombia. As part of the deal, Pager will provide its members with a virtual care team. Patients will be able to chat, call, and video chat with their doctor or care team. The services include triage, telemedicine, appointment setup, transportation, and follow-up care. The service is already available in all 50 US states, but this deal marks the first international launch for the company.

"Convenient and cost-effective access to healthcare is a global issue," Walter Jin, chairman and CEO of Pager, said in a statement. "The partnership between Seguros SURA Colombia's efficient health clinics and Pager's technology platform will showcase the next-generation model for the future of healthcare. The Pager and Seguros SURA Colombia collaboration shows that our consumer-first approach to healthcare transcends geographical borders."

Yale School of Medicine is teaming up with Foretell Reality in an effort to measure the effects of VR therapy on cancer patients. Specifically, the group is looking at the levels of anxiety and depression in cancer patients.

"A major factor in this study is the convenience for patients to enter a VR-based chat room, something that is especially useful for people with rare diseases or those who live in rural areas," Dr. Asher Marks, assistant professor of pediatrics (hematology/oncology) and director of pediatric neuro-oncology, said in a statement. "The VR technology offered by Foretell Reality allows users to jointly partake as avatars in a shared experience which cannot be replicated over a conference call or video chat. Additionally, patients in the study are presented the option to remain anonymous during the VR group session, giving them a unique opportunity to communicate with others in a way they may otherwise not be comfortable with."

UPMC Health Plan will be employing VirtualHealths HELIOS care management and coordination platform for one of its programs focused on long-term services and supports for patients eligible for both Medicare and Medicaid, the companies announced. The tool automates caseload assignments and task management for care managers, while also assisting with assessment, planning and authorization.

"Forward-looking health plans understand that HELIOS will position them for success through a true whole-person, member-centered approach to care management," Adam Sabloff, CEO and founder of VirtualHealth, said in a statement. "We are honored that UPMC Health Plan plans to utilize HELIOS to enhance its members' care and experiences in its Community HealthChoices program."


How AI and Machine Learning is Transforming the Organic Farming Industry | Quantzig’s Recent Article Offers Detailed Insights – Business Wire

LONDON--(BUSINESS WIRE)--Quantzig, a leading analytics advisory firm that delivers customized analytics solutions, has announced the completion of its new article that illustrates the role of big data in smart farming. This article also sheds light on the importance of analytics and machine learning in driving improvements within the organic farming industry.

The growing use of commercial farming methodologies has caused a major tradeoff in the quality of the food being produced. Large-scale farming practices combined with unstructured supply chain networks have not only diminished nutrition but have led to a rise in waste and in food safety and contamination risks. Today, big data in smart farming has gained immense popularity, as it plays a crucial role in driving improvements across segments. However, with access to several data sets from disparate sources, data management issues have become a major concern for players in the organic farming industry. We at Quantzig understand the challenges facing this sector, which is why we've developed solutions that focus on leveraging big data in smart farming. Big data in smart farming also plays a key role in helping businesses build a data-driven business culture by offering data-driven insights to various segments within the organization.


Benefits of Leveraging Big Data in Smart Farming

Adhere to food safety regulations

Big data in smart farming empowers businesses to prevent food spoilage and maintain food quality by enabling them to instantly access details on food contamination. The collection and analysis of data that offers insights into humidity, temperature, and chemicals will also help them gain a better picture of the quality of food being produced.


Improve operations and equipment management

Applying big data-based insights to improve machine performance can help the organic farming industry to effectively track and monitor issues around machine failure and downtime. This not only aids in the smooth delivery of the produce but helps prevent issues that may hinder their ability to drive performance.


Predict yield by analyzing several key factors

Big data within the organic farming industry acts as a powerful tool that helps transform every facet of an organization. It also enables businesses to predict their yield by deploying mathematical models and machine learning to analyze data around leaf biomass index, chemicals, and weather conditions.
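As a hedged sketch of the yield-prediction idea above, the snippet below fits a linear model to synthetic data with article-style features (leaf biomass index, rainfall, temperature). Every feature name and number here is invented for illustration; a production system would use real agronomic data and richer models.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# synthetic features: leaf biomass index, rainfall (mm), mean temperature (C)
X = np.column_stack([
    rng.uniform(0.2, 0.9, n),
    rng.uniform(300, 900, n),
    rng.uniform(10, 30, n),
])
true_w = np.array([5.0, 0.01, 0.2])       # invented "true" effect sizes
y = X @ true_w + rng.normal(0, 0.5, n)    # synthetic yield with noise

# least-squares fit with an intercept column
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ w
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

With enough observations, the fitted coefficients recover the planted relationship, which is the basic mechanism behind the yield-forecasting systems the article describes.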

Embedding big data analytics into your business processes can be quite challenging, but it helps you maximize the value of data by analyzing data from various sources to create customized reports. The goal is to help decision-makers within the organic farming sector to understand and analyze data to make crucial decisions and warn them about changes that go beyond a defined threshold.

Would you like to learn more about the future of the organic farming industry? Read the complete article here: https://bit.ly/386Kqdb

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, we serve 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal

Original post:
How AI and Machine Learning is Transforming the Organic Farming Industry | Quantzig's Recent Article Offers Detailed Insights - Business Wire

Machine Learning on the Edge, Hold the Code – Datanami

(Dmitriy Rybin/Shutterstock)

Many companies are scrambling to find machine learning engineers who can build smart applications that run on edge devices, like mobile phones. One company that's attacking the problem in a broad way is Qeexo, which sells an AutoML platform for building and deploying ML applications to microcontrollers without writing a line of code.

Qeexo emerged from Carnegie Mellon University in 2012, just at the dawn of the big data age. According to Sang Won Lee, the company's co-founder and CEO, the original plan called for Qeexo to be a machine learning application company.

The company landed a big fish, the Chinese mobile phone manufacturer Huawei, right out of the gate. Huawei liked the ML-based finger-gesture application that Qeexo (pronounced Key-tzo) developed, and it wanted Qeexo to ensure the application could run across all of its phone lines. That was a good news-bad news situation, Lee says.

"Our first commercial implementation with Huawei kept the whole company in China for two months, to finish one model with one hardware variant," Lee tells Datanami. "We came back, and it was difficult to keep morale high for our ML engineers, because nobody wanted to constantly go abroad to do this type of repetitive implementation."

Qeexo's AutoML solution handles many aspects of ML model development and deployment for customers

It quickly dawned on Lee that, with more ML models and more hardware types, the amount of manual work would quickly get out of hand. That led him to the idea of an automated machine learning, or AutoML, platform that could automatically generate ML models from the data presented to it and flash them to any of a group of pre-selected microcontrollers.
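The core AutoML loop can be sketched in a few lines: given labeled data, try several candidate models, score each on a held-out split, and keep the winner. The toy models below (a constant-mean baseline and a 1-nearest-neighbor predictor) are illustrative only; they mimic the workflow, not Qeexo's actual implementation.

```python
# Miniature "AutoML" loop: train candidate models, score each on a
# validation split, and return the best-performing one.

def mean_model(train_y):
    """Baseline: always predict the mean of the training targets."""
    avg = sum(train_y) / len(train_y)
    return lambda x: avg

def nearest_neighbor_model(train_x, train_y):
    """Predict the target of the closest training point."""
    def predict(x):
        best = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
        return train_y[best]
    return predict

def select_model(xs, ys):
    """Split data, fit each candidate on the first half, score on the second."""
    split = len(xs) // 2
    tr_x, tr_y = xs[:split], ys[:split]
    va_x, va_y = xs[split:], ys[split:]
    candidates = {
        "mean": mean_model(tr_y),
        "1-nn": nearest_neighbor_model(tr_x, tr_y),
    }
    def mse(model):
        return sum((model(x) - t) ** 2 for x, t in zip(va_x, va_y)) / len(va_x)
    return min(candidates.items(), key=lambda kv: mse(kv[1]))

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # y tracks x, so 1-NN should win
name, model = select_model(xs, ys)
```

A production platform would add the steps this sketch omits: feature extraction from sensor streams, many more model families, and compilation of the winner down to microcontroller firmware.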

Lee and his team of developers, which is led by CTO and co-founder Chris Harrison (who is an assistant professor at Carnegie Mellon University), developed the offering nearly five years ago, and the company has been using it ever since for its own ML services engagements.

Huawei continues to use Qeexo's AutoML solution to generate ML applications for its handsets. "In 2018, we completed 57 projects for Huawei, and most of the projects were completed by our field engineers just using the AutoML platform, without the help of ML engineers in the US," Lee says.

Sang Won Lee is the co-founder and CEO of Qeexo

In October 2019, Qeexo released its AutoML offering as a stand-alone software product. It automates many steps in the ML process, from building models from collected data and comparing their performance to deploying the finished model to a microcontroller, all without requiring the user to write any code.

The offering has built-in support for the most popular ML algorithms, including random forest, gradient boosted machine, and linear regression, among others. Users can also select deep learning models, like convolutional neural networks, but many microcontrollers lack the memory to handle those libraries, Lee says.

Qeexo's AutoML solution automatically handles many of the engineering tasks that would otherwise require the skills of a highly trained ML engineer, including feature selection and hyperparameter optimization. These features are built into the Qeexo offering, which also sports a built-in C compiler and generates binary code that can be deployed to microcontrollers, such as those from Renesas Electronics.
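Hyperparameter optimization, at its simplest, means evaluating candidate settings and keeping the one that scores best on held-out data. The sketch below grid-searches the ridge penalty of a one-dimensional linear model; real AutoML tools search much larger spaces with smarter strategies, and this example is not Qeexo's method.

```python
# Grid search over one hyperparameter: the ridge penalty `lam` of a
# 1-D linear model with closed-form solution w = sum(x*y) / (sum(x^2) + lam).

def ridge_fit(xs, ys, lam):
    """Closed-form 1-D ridge regression (no intercept)."""
    return sum(x * t for x, t in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def grid_search(train, valid, grid):
    """Return the penalty with the lowest validation mean squared error."""
    tr_x, tr_y = train
    va_x, va_y = valid
    def val_mse(lam):
        w = ridge_fit(tr_x, tr_y, lam)
        return sum((w * x - t) ** 2 for x, t in zip(va_x, va_y)) / len(va_x)
    return min(grid, key=val_mse)

train = ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # noiseless y = 2x
valid = ([4.0, 5.0], [8.0, 10.0])
best_lam = grid_search(train, valid, [0.0, 0.1, 1.0, 10.0])
```

With noiseless data the unpenalized fit is exact, so the search settles on the smallest penalty; on noisy data, a larger penalty would typically win.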

Lee says ML engineers might be able to get a little more efficiency by developing their own ML libraries, but that it won't be worth the effort for many users. "There are always more improvements that you can get with ML experts digging into it and doing the research," he says. "But this is giving you the convenience of being able to build a commercially viable solution without having to write a single line of code."

Today Qeexo announced its new AWS offering: instead of training a model on a laptop, customers can now use AWS resources to train their models. It also announced support for more ML algorithms, including deep learning algorithms as well as traditional ones. The visualizations that Qeexo provides have been enhanced to help users better spot outliers and trends in data, support for microphone data has been added, and the platform now supports the Renesas RA Family of Cortex-M MCUs, which are geared toward low-power IoT edge devices.

Having Huawei as a client certainly puts some scalability experience under Qeexo's belt. But the Mountain View, California-based company is bullish on the potential for a new class of application developers to get started using its software to imbue everyday devices with the intelligence of data.

"What we really want to tell the market is that even for those microcontrollers that are already out and that have very limited memory resources and processing power, you can still have a commercially viable ML solution running on them, if you use the right tool," Lee says. "You don't want to neglect all the sensor data that's connected to the microcontroller. We can provide a tool that you can use to build intelligence that can be embedded into those tools."

Related Items:

On the Radar: Cortex Labs, RadicalBit, Qeexo

AutoML Tools Emerge as Data Science Difference Makers

Cloud Tools Rev Up AI Dev Platforms

Read more from the original source:
Machine Learning on the Edge, Hold the Code - Datanami

Using machine learning to solve real-world data problems for scientists and engineers – WSU News

Graduate students Aryan Deshwal and Syrine Belakaria presenting at the NeurIPS conference.

Many researchers in artificial intelligence and machine learning aim to develop computer programs that can sift through huge amounts of data, learn from it, and guide future decisions.

But what if the available data options are hugely expensive and difficult to acquire, and one has to decide which data is worth spending money on? What direction should scientists take with their next experiment?

A WSU research team is taking a different angle in machine learning research, one they say can be of great practical use, especially to engineers and scientists.

Computer science graduate students Syrine Belakaria and Aryan Deshwal recently presented their research at major international artificial intelligence and machine learning conferences, including the 2019 Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, Canada, and the 2020 Association for the Advancement of Artificial Intelligence Conference in New York. The NeurIPS conference is the premier machine learning conference in the world, with more than 14,000 attendees. Belakaria and Deshwal are advised by Jana Doppa, the George and Joan Berry Chair Assistant Professor in WSU's School of Electrical Engineering and Computer Science.

The group will also present their collaborative work with computer engineering researchers at the Design, Automation and Test in Europe Conference (DATE-2020) in Grenoble, France.

The group's research is based on Doppa's 2019 NSF Early Career Award and focuses on developing general-purpose learning and reasoning algorithms to support engineers and scientists in optimizing the way they conduct complex experiments. The team is working to combine domain knowledge from engineers and scientists with data from past experiments to select future experiments, so that researchers can minimize the number of experiments needed to find near-optimal designs.

Doppa's team has analyzed and experimentally evaluated the algorithms on diverse applications in electronic design automation, such as analog circuit design, manycore chip design, and compiler-settings tuning, and in materials science, such as designing shape memory alloys and piezoelectric materials. The team also proposed two algorithms for optimizing multiple objectives with minimal experiments and developed the first theoretical analysis for the multi-objective setting, along with a novel learning-to-search framework for optimizing combinatorial structures, which is very challenging compared to continuous design spaces.
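The experiment-selection idea can be sketched as a loop: from a pool of candidate designs, repeatedly run the one whose predicted outcome plus an uncertainty bonus looks most promising, then fold the result back in. The sketch below is a heavily simplified stand-in for the Bayesian-optimization-style methods described here, not the team's actual algorithms, and the objective function is hypothetical.

```python
# Toy sequential experiment selection: prefer designs that either look
# good (near a high observed outcome) or are far from anything tried
# (high uncertainty), spending only a small experiment budget.

def expensive_experiment(x):
    """Stand-in for a costly physical experiment we want to maximize."""
    return -(x - 0.6) ** 2

def select_next(candidates, observed):
    """Score untried designs by nearest observed outcome plus a distance bonus."""
    def score(x):
        nearest = min(observed, key=lambda t: abs(t - x))
        return observed[nearest] + abs(nearest - x)
    untried = [c for c in candidates if c not in observed]
    return max(untried, key=score)

candidates = [i / 10 for i in range(11)]       # designs 0.0 .. 1.0
observed = {0.0: expensive_experiment(0.0)}    # one seed experiment
for _ in range(5):                             # budget: 5 more experiments
    x = select_next(candidates, observed)
    observed[x] = expensive_experiment(x)
best = max(observed, key=observed.get)
```

Even this crude prediction-plus-bonus rule finds a much better design than the seed in a handful of runs; the team's methods replace the nearest-neighbor guess with principled probabilistic models and handle multiple objectives and combinatorial spaces.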

"The common theme behind this work is better uncertainty management to select the sequence of experiments," Doppa said.

Besides contributing important research innovations to the field, Belakaria and Deshwal have gained valuable learning opportunities during their studies. At NeurIPS, the students had the chance to network with leaders in the field of machine learning as well as attend a special session for women and those who are underrepresented in the field. The session had more than one thousand attendees.

The conference gave the students a chance to see real-world applications of AI, Deshwal said, and to meet professionals who are using machine learning to solve challenges in the medical and science fields. A large number of prominent companies, such as Uber, Google, Facebook, and Amazon, sent representatives to the conference and hosted events.

Belakaria, who is originally from Tunisia, said it was amazing to attend a roundtable and sit down with female leaders in the explosive and competitive field. She appreciated getting advice on how many women in the field are finding success while balancing their work and personal lives.

Both she and Deshwal expressed appreciation for a supportive lab that has provided mentoring and has encouraged their growth and success.

"Research is as emotional as it is academic, and our camaraderie helps us a lot," said Deshwal, who is originally from India. "When you have a field that is moving so quickly, such as machine learning, having people who are supportive is so important."

"Being in a community where you feel safe, respected, and valued for your scientific contribution is very crucial for women in science. We feel welcome in the machine learning community," added Belakaria.

With WSU since 2014, Doppa is part of a major expansion on the part of the university to meet the growing demands in the fields of electrical engineering and computer science. Since 2015, research expenditures in WSU's School of Electrical Engineering and Computer Science have nearly doubled, as has the number of graduates from its computer science program.

See the original post:
Using machine learning to solve real-world data problems for scientists and engineers - WSU News