Category Archives: Machine Learning
Why AI might be the most effective weapon we have to fight COVID-19 – The Next Web
While perhaps not the most deadly, the novel coronavirus (COVID-19) is one of the most contagious diseases to have hit our planet in recent decades. In a little over three months since the virus was first spotted in mainland China, it has spread to more than 90 countries, infected more than 185,000 people, and taken more than 3,500 lives.
As governments and health organizations scramble to contain the spread of coronavirus, they need all the help they can get, including from artificial intelligence. Though current AI technologies are far from replicating human intelligence, they are proving to be very helpful in tracking the outbreak, diagnosing patients, disinfecting areas, and speeding up the process of finding a cure for COVID-19.
Data science and machine learning might be two of the most effective weapons we have in the fight against the coronavirus outbreak.
Just before the turn of the year, BlueDot, an artificial intelligence platform that tracks infectious diseases around the world, flagged a cluster of unusual pneumonia cases happening around a market in Wuhan, China. Nine days later, the World Health Organization (WHO) released a statement declaring the discovery of a novel coronavirus in a hospitalized person with pneumonia in Wuhan.
BlueDot uses natural language processing and machine learning algorithms to peruse information from hundreds of sources for early signs of infectious epidemics. The AI looks at statements from health organizations, commercial flights, livestock health reports, climate data from satellites, and news reports. With so much data being generated on coronavirus every day, the AI algorithms can help home in on the bits that can provide pertinent information on the spread of the virus. It can also find important correlations between data points, such as the movement patterns of the people who are living in the areas most affected by the virus.
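BlueDot's models and sources are proprietary, but the basic flagging idea, scanning incoming text for outbreak-related language and tallying signals by location, can be sketched in a few lines. Everything below (the phrase list, the report format, the scoring) is invented purely for illustration:

```python
from collections import Counter

# Hypothetical phrase watchlist and report format -- purely illustrative,
# not BlueDot's actual features or data sources.
OUTBREAK_TERMS = ["unexplained pneumonia", "pneumonia cluster",
                  "severe respiratory illness", "unknown pathogen"]

def flag_reports(reports):
    """Tally outbreak-related phrase hits per location across text reports."""
    signals = Counter()
    for report in reports:
        text = report["text"].lower()
        signals[report["location"]] += sum(
            text.count(term) for term in OUTBREAK_TERMS)
    return signals

reports = [
    {"location": "Wuhan", "text": "Hospitals report a pneumonia cluster of unknown origin."},
    {"location": "Lyon", "text": "Seasonal flu admissions within the normal range."},
]
print(flag_reports(reports).most_common(1))  # [('Wuhan', 1)]
```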
The company also employs dozens of experts who specialize in a range of disciplines including geographic information systems, spatial analytics, data visualization, computer sciences, as well as medical experts in clinical infectious diseases, travel and tropical medicine, and public health. The experts review the information that has been flagged by the AI and send out reports on their findings.
Combined with the assistance of human experts, BlueDot's AI can not only predict the start of an epidemic, but also forecast how it will spread. In the case of COVID-19, the AI successfully identified the cities where the virus would be transferred to after it surfaced in Wuhan. Machine learning algorithms studying travel patterns were able to predict where the people who had contracted coronavirus were likely to travel.
Coronavirus (COVID-19) (Image source: NIAID)
You have probably seen the COVID-19 screenings at border crossings and airports. Health officers use thermometer guns and visually check travelers for signs of fever, coughing, and breathing difficulties.
Now, computer vision algorithms can perform the same checks at large scale. An AI system developed by Chinese tech giant Baidu uses cameras equipped with computer vision and infrared sensors to predict people's temperatures in public areas. The system can screen up to 200 people per minute and detect their temperature to within 0.5 degrees Celsius. The AI flags anyone who has a temperature above 37.3 degrees. The technology is now in use in Beijing's Qinghe Railway Station.
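The article gives only the headline numbers (a 37.3-degree flag threshold and 0.5-degree accuracy), but the flagging step itself is simple to sketch. A minimal illustration, assuming some upstream person detector supplies per-person temperature estimates:

```python
FEVER_THRESHOLD_C = 37.3   # flag threshold reported for the Baidu system
SENSOR_TOLERANCE_C = 0.5   # reported accuracy of the infrared estimate

def screen(readings):
    """Flag people whose estimated temperature may indicate fever.

    `readings` maps a track ID (assumed to come from the camera's person
    detector) to an infrared temperature estimate in degrees Celsius.
    """
    return {
        person_id: temp_c
        for person_id, temp_c in readings.items()
        # Conservative: flag anyone whose true temperature could exceed
        # the threshold once sensor tolerance is accounted for.
        if temp_c + SENSOR_TOLERANCE_C >= FEVER_THRESHOLD_C
    }

print(screen({"track_17": 36.4, "track_18": 37.5}))  # {'track_18': 37.5}
```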
Alibaba, another Chinese tech giant, has developed an AI system that can detect coronavirus in chest CT scans. According to the researchers who developed it, the system is 96 percent accurate. The AI was trained on data from 5,000 coronavirus cases and can perform the test in 20 seconds, as opposed to the 15 minutes it takes a human expert to diagnose patients. It can also tell the difference between coronavirus and ordinary viral pneumonia. The algorithm can give a boost to medical centers that are already under a lot of pressure to screen patients for COVID-19 infection. The system is reportedly being adopted in 100 hospitals in China.
A separate AI developed by researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and the China University of Geosciences purportedly shows 95-percent accuracy on detecting COVID-19 in chest CT scans. The system is a deep learning algorithm trained on 45,000 anonymized CT scans. According to a preprint paper published on medRxiv, the AI's performance is comparable to expert radiologists.
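Neither article details the underlying architectures; a convolutional classifier over CT slices is one plausible shape for such a system. The PyTorch sketch below is purely illustrative: every layer size is invented, not taken from the Alibaba or Renmin Hospital systems, and the three output classes simply mirror the COVID-19 / other viral pneumonia / normal distinction mentioned above.

```python
import torch
import torch.nn as nn

class CTClassifier(nn.Module):
    """Minimal CNN over single-channel CT slices; all sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 3),  # e.g. COVID-19 / other viral pneumonia / normal
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CTClassifier()
scan_batch = torch.randn(4, 1, 224, 224)   # four fake 224x224 CT slices
logits = model(scan_batch)                 # shape: (4, 3)
pred = logits.argmax(dim=1)                # predicted class index per slice
```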
One of the main ways to prevent the spread of the novel coronavirus is to reduce contact between infected patients and people who have not contracted the virus. To this end, several companies and organizations have engaged in efforts to automate some of the procedures that previously required health workers and medical staff to interact with patients.
Chinese firms are using drones and robots to perform contactless delivery and to spray disinfectants in public areas to minimize the risk of cross-infection. Other robots are checking people for fever and other COVID-19 symptoms and dispensing free hand sanitizer foam and gel.
Inside hospitals, robots are delivering food and medicine to patients and disinfecting their rooms to obviate the need for the presence of nurses. Other robots are busy cooking rice without human supervision, reducing the number of staff required to run the facility.
In Seattle, doctors used a robot to communicate with and treat patients remotely to minimize exposure of medical staff to infected people.
At the end of the day, the war on the novel coronavirus is not over until we develop a vaccine that can immunize everyone against the virus. But developing new drugs and medicine is a very lengthy and costly process. It can cost more than a billion dollars and take up to 12 years. That's the kind of timeframe we don't have as the virus continues to spread at an accelerating pace.
Fortunately, AI can help speed up the process. DeepMind, the AI research lab acquired by Google in 2014, recently announced that it has used deep learning to find new information about the structure of proteins associated with COVID-19, a process that might otherwise have taken many more months.
Understanding protein structures can provide important clues to a coronavirus vaccine formula. DeepMind is one of several organizations engaged in the race to develop a coronavirus vaccine. It has leveraged the results of decades of machine learning progress as well as research on protein folding.
"It's important to note that our structure prediction system is still in development and we can't be certain of the accuracy of the structures we are providing, although we are confident that the system is more accurate than our earlier CASP13 system," DeepMind's researchers wrote on the AI lab's website. "We confirmed that our system provided an accurate prediction for the experimentally determined SARS-CoV-2 spike protein structure shared in the Protein Data Bank, and this gave us confidence that our model predictions on other proteins may be useful."
Although it's too early to tell whether we're headed in the right direction, the efforts are commendable. Every day saved in finding the coronavirus vaccine can save hundreds or thousands of lives.
This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.
Published March 21, 2020 17:00 UTC
Emerging Trend of Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024 – Bandera County Courier
The latest report, titled Global Machine Learning in Retail Market 2019 by Company, Regions, Type and Application, Forecast to 2024, unveils the rate at which the Machine Learning in Retail industry is anticipated to grow during the forecast period, 2019 to 2024. The report provides CAGR analysis, competitive strategies, growth factors, and a regional outlook for 2024, and offers an exhaustive study of the driving elements, limiting factors, and different market changes. It describes the market structure and then forecasts several segments and sub-segments of the global market. The market study is segmented by type, application, manufacturer, and geography. Elements such as opportunities, drivers, restraints and challenges, market situation, market share, growth rate, future trends, risks, entry barriers, sales channels, and distributors are analyzed and examined within this report.
Exploring The Growth Rate Over A Period:
Business owners who want to expand their business can refer to this report, as it includes data regarding the rise in sales within a given consumer base for the forecast period, 2019 to 2024. The research analysts have provided a comparison between the Machine Learning in Retail market growth rate and product sales to allow business owners to discover the success or failure of a specific product or service. They have also examined driving factors such as demographics and revenue generated from other products to offer a better analysis of products and services.
DOWNLOAD FREE SAMPLE REPORT: https://www.magnifierresearch.com/report-detail/7570/request-sample
Top industry players assessment: IBM, Microsoft, Amazon Web Services, Oracle, SAP, Intel, NVIDIA, Google, Sentient Technologies, Salesforce, ViSenze
Product type assessment based on the following types: Cloud Based, On-Premises
Application assessment based on application mentioned below: Online, Offline
Leading market regions covered in the report are: North America (United States, Canada and Mexico), Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), South America (Brazil, Argentina, Colombia), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa)
Main Features Covered In Global Machine Learning in Retail Market 2019 Report:
ACCESS FULL REPORT: https://www.magnifierresearch.com/report/global-machine-learning-in-retail-market-2019-by-7570.html
Moreover, the report covers supply chain analysis, regional marketing type analysis, international trade type analysis, and consumer analysis of the Machine Learning in Retail market. It further examines manufacturing plants and technical data, capacity and commercial production dates, R&D status, manufacturing area distribution, technology sources, and raw materials sources. It also presents sales channels, merchants, brokers, wholesalers, research findings and conclusions, and information sources.
Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team (sales@magnifierresearch.com), who will ensure that you get a report that suits your needs. You can also get in touch with our executives on +1-201-465-4211 to share your research requirements.
Keeping Machine Learning Algorithms Humble and Honest in the Ethics-First Era – Datamation
By Davide Zilli, Client Services Director at Mind Foundry
Today, in so many industries, from manufacturing and life sciences to financial services and retail, we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and beneficial for augmenting human expertise within an organization. But they are now under the spotlight for many reasons, and regulation is on the horizon, with Gartner projecting that four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It remains vital that we understand their reasoning and decision-making process at every step.
Algorithms need to be fully transparent in their decisions, easily validated and monitored by a human expert. Machine learning tools must introduce this full accountability to evolve beyond unexplainable black-box solutions and eliminate the easy excuse of "the algorithm made me do it!"
Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.
Gender, for example, might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters, such as gender, in the data sets they have access to, so it is vital for users to understand the steps taken for a model to reach a specific conclusion.
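One concrete way to surface this kind of bias is to compare a trained model's decisions across groups defined by the sensitive attribute. A minimal demographic-parity check, with an invented scored dataset and column names:

```python
import pandas as pd

# Hypothetical scored dataset: model decisions alongside a sensitive attribute.
df = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F", "M"],
    "predicted": [1, 1, 0, 1, 0, 1],   # 1 = positive decision from the model
})

# Demographic-parity check: compare positive-decision rates per group.
rates = df.groupby("gender")["predicted"].mean()
print(rates)                              # F: 0.33..., M: 1.0
# A large gap suggests the model is using gender, directly or through
# proxy features, and warrants inspection before deployment.
print("max gap:", rates.max() - rates.min())
```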
Removing the complexity of the data science procedure will help users discover and address bias faster and better understand the expected accuracy and outcomes of deploying a particular model.
Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately justify the outcome. First steps towards this explainability would be features in the ML tool that enable visual inspection of the data, with the platform alerting users to potential bias during preparation, together with metrics on model accuracy and health, including the ability to visualize what the model is doing.
Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations such as the European Union's GDPR "right to explanation" clause and helps effectively demonstrate transparency to consumers.
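The article does not specify how such an audit trail is implemented; one simple approximation is to log each preparation step with a timestamp, its parameters, and a fingerprint of the resulting data. A sketch, with the helper and record format invented for illustration:

```python
import datetime
import hashlib
import pandas as pd

audit_log = []

def log_step(name, df, **params):
    """Record one preparation step with a fingerprint of the resulting data."""
    digest = hashlib.sha256(
        pd.util.hash_pandas_object(df, index=True).values.tobytes()
    ).hexdigest()
    audit_log.append({
        "step": name,
        "at": datetime.datetime.utcnow().isoformat(),
        "params": params,
        "data_sha256": digest,   # lets an auditor verify the data state
    })

# Usage during preparation, e.g.:
# df = df.dropna(subset=["age"])
# log_step("dropna", df, subset=["age"])
```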
There is a further advantage here of allowing users to quickly replicate the same preparation and deployment steps, guaranteeing the same results from the same data, which is particularly vital for achieving time efficiencies on repetitive tasks. In the life sciences sector, for example, we find users are particularly keen on replicability and visibility for ML, where it becomes an important facility in areas such as clinical trials and drug discovery.
There are so many different model types that it can be a challenge to select and deploy the best model for a task. Deep neural network models, for example, are inherently less transparent than probabilistic methods, which typically operate in a more honest and transparent manner.
Here's where many machine learning tools fall short. They're fully automated, with no opportunity to review and select the most appropriate model. This may help users rapidly prepare data and deploy a machine learning model, but it provides little to no prospect of visual inspection to identify data and model issues.
An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage; provide support through to creation, where it will visualize what the chosen model is doing and provide accuracy metrics; and then on to deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.
Building greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can test a new data set and receive performance scores for the model. This helps identify bias and make changes to the model accordingly.
During model deployment, the most effective platforms will also extract extra features from data that are otherwise difficult to identify and help the user understand what is going on with the data at a granular level, beyond the most obvious insights.
The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualize and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.
The introduction of explainability and enhanced governance into ML platforms is an important step towards ethical machine learning deployments, but we can and should go further.
Researchers and solution vendors hold a responsibility as ML educators to inform users of the uses and abuses of bias in machine learning. We need to encourage businesses in this field to set up dedicated education programs on machine learning, including specific modules that cover ethics and bias, explaining how users can identify and in turn tackle or outright avoid the dangers.
Raising awareness in this manner will be a key step towards establishing trust for AI and ML in sensitive deployments such as medical diagnoses, financial decision-making and criminal sentencing.
AI and machine learning offer truly limitless potential to transform the way we work, learn and tackle problems across a range of industries, but ensuring these operations are conducted in an open and unbiased manner is paramount to winning and retaining both consumer and corporate trust in these applications.
The end goal is truly humble, honest algorithms that work for us and enable us to make unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.
Recent research shows that 84% of CEOs agree that AI-based decisions must be explainable in order to be trusted. The time is ripe to embrace AI and ML solutions with baked-in transparency.
About the author:
Davide Zilli, Client Services Director at Mind Foundry
FYI: You can trick image-recog AI into, say, mixing up cats and dogs by abusing scaling code to poison training data – The Register
Boffins in Germany have devised a technique to subvert neural network frameworks so they misidentify images without any telltale signs of tampering.
Erwin Quiring, David Klein, Daniel Arp, Martin Johns, and Konrad Rieck, computer scientists at TU Braunschweig, describe their attack in a pair of papers slated for presentation at technical conferences in May and August this year, events that may or may not take place given the COVID-19 global health crisis.
The papers, titled "Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning" [PDF] and "Backdooring and Poisoning Neural Networks with Image-Scaling Attacks" [PDF], explore how the preprocessing phase involved in machine learning presents an opportunity to fiddle with neural network training in a way that isn't easily detected. The idea being: secretly poison the training data so that the software later makes bad decisions and predictions.
This example image, provided by the academics, of a cat has been modified so that when downscaled by an AI framework for training, it turns into a dog, thus muddying the training dataset.
There have been numerous research projects that have demonstrated that neural networks can be manipulated to return incorrect results, but the researchers say such interventions can be spotted at training or test time through auditing.
"Our findings show that an adversary can significantly conceal image manipulations of current backdoor attacks and clean-label attacks without an impact on their overall attack success rate," explained Quiring and Rieck in the Backdooring paper. "Moreover, we demonstrate that defenses designed to detect image scaling attacks fail in the poisoning scenario."
Their key insight is that the algorithms used by AI frameworks for image scaling, a common preprocessing step to resize images in a dataset so they all have the same dimensions, do not treat every pixel equally. Instead, these algorithms, specifically those in the imaging libraries used by Caffe (OpenCV), TensorFlow (tf.image), and PyTorch (Pillow), consider only a third of the pixels to compute scaling.
"This imbalanced influence of the source pixels provides a perfect ground for image-scaling attacks," the academics explained. "The adversary only needs to modify those pixels with high weights to control the scaling and can leave the rest of the image untouched."
On their explanatory website, the eggheads show how they were able to modify a source image of a cat, without any visible sign of alteration, to make TensorFlow's nearest scaling algorithm output a dog.
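The mechanics are easy to demonstrate: with nearest-neighbor downscaling, only one source pixel per output pixel is ever read, so overwriting just those pixels embeds a hidden image while the full-resolution picture looks untouched. A self-contained numpy sketch of the principle, using a naive sampling grid (the real libraries' grids differ slightly, which the published attack accounts for) and random arrays standing in for the cat and dog images:

```python
import numpy as np

def nearest_attack(source, target):
    """Embed `target` into `source` so that nearest-neighbor downscaling of
    the result yields `target`. Simplified sketch of the image-scaling attack:
    only the pixels the scaler actually samples are overwritten."""
    sh, sw = source.shape[:2]
    th, tw = target.shape[:2]
    out = source.copy()
    # Indices a naive nearest-neighbor resize (sh, sw) -> (th, tw) samples.
    rows = np.arange(th) * sh // th
    cols = np.arange(tw) * sw // tw
    out[np.ix_(rows, cols)] = target   # touches only ~(th*tw)/(sh*sw) pixels
    return out

rng = np.random.default_rng(0)
cat = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)   # stand-in "cat"
dog = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)     # stand-in "dog"
poisoned = nearest_attack(cat, dog)

downscaled = poisoned[::512 // 64, ::512 // 64]   # naive nearest downscale
assert np.array_equal(downscaled, dog)            # the hidden image emerges
```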
This sort of poisoning attack during the training of machine learning systems can result in unexpected output and incorrect classifier labels. Adversarial examples can have a similar effect, the researchers say, but these work against only one machine learning model.
Image scaling attacks "are model-independent and do not depend on knowledge of the learning model, features or training data," the researchers explained. "The attacks are effective even if neural networks were robust against adversarial examples, as the downscaling can create a perfect image of the target class."
The attack has implications for facial recognition systems in that it could allow a person to be identified as someone else. It could also be used to meddle with machine learning classifiers such that a neural network in a self-driving car could be made to see an arbitrary object as something else, like a stop sign.
To mitigate the risk of such attacks, the boffins say the area scaling capability implemented in many scaling libraries can help, as can Pillow's scaling algorithms (so long as it's not Pillow's nearest scaling scheme). They also discuss a defense technique that involves image reconstruction.
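The reason area scaling helps is that it averages every source pixel in each block rather than sampling a few, so the handful of tampered pixels is diluted. Continuing the sketch above (factor 8 for the 512-to-64 resize):

```python
# Area scaling averages all pixels in each block, so the sparse tampered
# pixels no longer dominate the downscaled output.
def area_downscale(img, factor):
    h, w, c = img.shape
    blocks = img.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

clean_view = area_downscale(poisoned, 8)
# clean_view now reflects the visible source content, not the hidden image.
```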
The researchers plan to publish their code and data set on May 1, 2020. They say their work shows the need for more robust defenses against image-scaling attacks and they observe that other types of data that get scaled like audio and video may be vulnerable to similar manipulation in the context of machine learning.
3 global manufacturing brands at the forefront of AI and ML – JAXenter
If you are a major manufacturer in 2020 and you have employed the likes of Deloitte, McKinsey or PwC, it is safe to assume that they have advised you to invest big in artificial intelligence and machine learning.
According to reports by Deloitte and McKinsey, machine learning improves product quality and has the potential to double cash flow. Let's take a look at three global manufacturers who are already on board.
Siemens is the largest industrial manufacturer in Europe, and whether they are putting together planes, trains or automobiles, their goal is to solve production challenges efficiently and sustainably. One of the ways they are able to do this is by using machine learning (ML) to enhance additive manufacturing, otherwise known as AM.
The process involves putting together parts that make objects from 3D model data. The idea is to streamline the manufacturing process into one printing stage. Machine learning plays a crucial part in achieving this goal.
Let's take a look at the recent creation of the AM Path Optimizer, part of its NX software offering. It's designed to eliminate overheating during production, an issue that stands in the way of the industrialization of AM. According to Siemens, the path optimizer combines simulation technology and ML to analyze a full job file minutes before execution on the machine. With this they hope to achieve reduced scrap and increased production yields. In short, they want to minimize trial and error and get it right the first time around.
Although it is still in the beta stage, the AM Path Optimizer has had some early adopters. TRUMPF, a German industrial machine manufacturing company based in Stuttgart, has been singing its praises, pointing to improved geometrical accuracy, more homogeneous surface quality, and a significant reduction in the expected scrap rate.
Machine learning and artificial intelligence do not just influence how companies manufacture but also help them decide what to manufacture. One such company is the American packaged-food maker ConAgra, which is using AI to identify consumer preferences.
The vegan market, for example, is growing rapidly: by 2026 it is projected to be worth just over $24 billion (the vegan cheese market alone will be worth $4 billion). And ConAgra, despite being over a century old, is aware of consumer preferences moving towards healthier options and away from things like processed meat. This awareness comes in part from their AI platform, which analyses data from social media and consumer food purchasing behavior.
This has led the company to produce alternative meat products like veggie burgers and even cauliflower rice. It's also helped speed up the manufacturing process, so rather than planning for next year, they can design, make, and release a new product in as little as a few weeks.
The major appliance manufacturer Bosch is a great believer in AI and has committed substantial resources to making it a central part of its business. In 2016, it launched a $30,000 competition on Kaggle, an online community of data scientists and machine learning practitioners. Competitors were asked to predict internal failures, with the aim of improving Bosch production line performance.
They described the assembly process as "much like a souffle": delicious, delicate, and a challenge to prepare; if it comes out of the oven sunken, you are going to retrace your steps to see where things went wrong. In order to identify and predict where its souffles go wrong, Bosch records data at every step of the manufacturing process and assembly line.
This is where the Kagglers come in. With access to advanced data analytics and using thousands of tests and measurements for each component on the assembly line, the winners, Ash and Beluga, were able to solve internal failures using their own fault detection method.
In 2017, the Bosch Center for AI was founded with the tagline "Solutions created for life." This is part of a broader effort to put AI and machine learning at the heart of the business. What they are working on now is reducing reliance on human expert knowledge and deploying AI algorithms in safety-critical applications.
More recently, Bosch has been working on preventing increasingly advanced hackers from compromising their cars. According to CTO Michael Bolle: "In the area of machine learning and AI, products and machines learn from data, and so the data itself can be part of the attack surface."
What Bosch, ConAgra, and Siemens realize is that their business is increasingly reliant on data, and the best way to harness that data is to invest heavily in AI and ML. According to McKinsey, not investing in AI or ML is not really an option, especially if you are a manufacturer with heavy assets: "Manufacturers with heavy assets that are unable to read, interpret, and use their own machine-generated data to improve performance by addressing the changing needs of customers and suppliers will quickly lose out to their competitors or be acquired."
Startup Spotlight: Forestry Machine Learning wants to help clients use artificial intelligence to improve business – Richmond.com
With businesses everywhere being disrupted by the coronavirus outbreak, it seems like a tough time to be an entrepreneur starting a new venture.
Yet the co-founders of the Richmond-based startup company Forestry Machine Learning say they are keeping a positive long-term outlook.
The startup specializes in helping clients implement a cutting-edge type of artificial intelligence called machine learning to improve their business strategies and operations, and the co-founders say they foresee demand only increasing for that service.
"It is an interesting time to be launching a company," said David Der, the startup's CEO. Co-founder Brian Forrester is chief revenue officer.
"Overall, I am optimistic," Der said. "Sure, there might be some setbacks, nobody is really taking in-person meetings right now, but a lot of the value we can deliver can be done virtually anyway."
"Our sales strategy remains the same," he said. "We are still prospecting and in business development stages, full speed ahead."
Machine learning is a subset of artificial intelligence that involves using computer algorithms to quickly analyze large amounts of data and learn from it. The tools can be used to make better predictions about how people and systems behave.
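As a minimal illustration of that definition (the churn scenario and all numbers below are invented), a model fits a pattern from labeled examples and then predicts for unseen cases:

```python
from sklearn.linear_model import LogisticRegression

# Invented churn example: [visits last month, avg session minutes] -> churned?
X = [[1, 2], [2, 1], [8, 30], [9, 25], [1, 3], [10, 40]]
y = [1, 1, 0, 0, 1, 0]   # 1 = customer churned

model = LogisticRegression().fit(X, y)   # "learn" from past behavior
print(model.predict([[2, 2], [9, 35]]))  # predict new cases; likely [1 0]
```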
The "Forestry" part of the company's name is a nod to lingo within the artificial intelligence industry.
"Machine learning, artificial intelligence, and the larger ecosystem around that, is really just coming of age," said Forrester, who is also co-founder of Workshop Digital, a Richmond-based digital marketing firm where he continues to work.
"For the last three or four years, we have had access to more data than we have ever had before," Forrester said. "Computing power has caught up to be able to process that. A lot of the companies I work with, over 100 companies across the U.S. and Canada, are still trying to figure out how to leverage that data to inform business strategy, reduce risk and increase profitability."
Machine learning can be used to improve financial forecasting, cybersecurity and fraud prevention, among other things, said Der, who brings to the startup a background in computer science.
Der was among a group of co-founders of Notch, a technology consulting company founded in Richmond in 2014 that specialized in data engineering and machine learning. In late 2017, Notch was acquired by financial services giant Capital One Financial Corp.
Der said he left Capital One in December after a two-year commitment and started working on creating the new business.
"Entrepreneurship is really a passion of mine," Der said. "In a way, we are picking up the torch where Notch left off two years ago. I also want to bring to the table my experience now from the financial services industry."
While machine learning can be utilized by many organizations, Der said the startup is targeting three primary industries: financial services, health care and digital marketing.
"The goal of machine learning in digital marketing is to deliver the right message to the right person through the right medium at the right time," Der said.
Forrester brings deep experience in digital marketing through his company, Workshop Digital.
"I have spent 11 years building a company, and we have been fairly successful," Forrester said. "My role in this company [Forestry] is to build our sales and marketing strategy as we grow and follow David's lead."
Will Loving and Scott Walker, both with Richmond-based Consult360, also are investing partners in the startup.
Forrester said he has experience navigating a startup during a time of economic disruption.
"I don't think the problems that machine learning is trying to solve are going to go away just because of this," he said, referring to the coronavirus disruptions. "In fact, they are more pervasive now than ever. Leveraging more computing power to tackle bigger problems is not going to go away."
Are machine-learning-based automation tools good enough for storage management and other areas of IT? Let us know – The Register
Reader survey We hear a lot these days about IT automation. Yet whether it's labelled intelligent infrastructure, AIOps, self-driving IT, or even private cloud, the aim is the same.
And that aim is to use the likes of machine learning, workflow automation, and infrastructure-as-code to automatically make changes in real time, eliminating as much as possible of the manual drudgery associated with routine IT administration.
Are the latest AI/ML-powered intelligent automation solutions trustworthy and ready for mainstream deployment, particularly in areas such as storage management?
Should we go ahead and implement the technology now on offer?
This controversial topic is the subject of our latest reader survey, and we are eager to hear your views.
Please complete our short survey, here.
As always, your responses will be anonymous and your privacy assured.
Proof in the power of data – PES Media
Engineers at the AMRC have researched the use of the cloud to capture data from machine tools with Tier 2 member Amido
Cloud data solutions being trialled at the University of Sheffield Advanced Manufacturing Research Centre (AMRC) could provide a secure and cost-effective way for SME manufacturers to explore how machine learning and Industry 4.0 technologies can boost their productivity.
Jon Stammers, AMRC technical fellow in the process monitoring and control team, says: "Data is available on every shopfloor but a lot of the time it isn't being captured due to lack of connectivity, and therefore cannot be analysed. If the cloud can capture and analyse that data then the possibilities are massive."
Engineers in the AMRCs Machining Group have researched the use of the cloud to capture data from machine tools with new Tier Two member Amido, an independent technical consultancy specialising in assembling, integrating and building cloud-native solutions.
Mr Stammers adds: "Typically we would have a laptop sat next to a machine tool capturing its data; a researcher might do some analysis on that laptop and share the data on our internal file system or on a USB stick. There is a lot of data generated on the shopfloor and it is our job to capture it, but there are plenty of unanswered questions about the analysis process and the cloud has a lot to bring to that."
In the trial, data from two CNC machines in the AMRC's Factory of the Future, a Starrag STC 1250 and a DMG Mori DMU 40 eVo, was transferred to the Microsoft Azure Data Lake cloud service and converted into the parquet format, which allowed Amido to run a series of complex queries over a long period of time.
Steve Jones, engagement director at Amido, explains that handling those high volumes of data is exactly what the cloud was designed for: "Moving the data from the manufacturing process into the cloud means it can be stored securely and then structured for analysis. The data can't be intercepted in transit and it is immediately encrypted by Microsoft Azure."
"Security is one of the huge benefits of cloud technology," Mr Stammers comments. "When we ask companies to share their data for a project, it is usually rejected because they don't want their data going offsite. Part of the work we're doing with Amido is to demonstrate that we can anonymise data and move it off site securely."
In addition to the security of the cloud, Mr Jones says transferring data into a data lake means large amounts can be stored for faster querying and machine learning.
"One of the problems of a traditional database is when you add more data, you impact the ability for the query to return the answers to the questions you put in; by restructuring into a parquet format you limit that reduction in performance. Some of the queries that were taking one of the engineers up to 12 minutes to run on the local database took us just 12 seconds using Microsoft Azure."
"It was always our intention to run machine learning against this data to detect anomalies. A reading in the event data that stands out may help predict maintenance of a machine tool or prevent the failure of a part."
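The AMRC trial ran on Microsoft Azure Data Lake, but the same row-store-to-columnar restructuring can be sketched locally with pandas. The file and column names below are hypothetical, and to_parquet requires pyarrow or fastparquet to be installed:

```python
import pandas as pd

# Hypothetical machine-tool telemetry exported from the shopfloor as CSV.
df = pd.read_csv("stc1250_telemetry.csv", parse_dates=["timestamp"])

# One-off conversion to the columnar parquet format.
df.to_parquet("stc1250_telemetry.parquet", index=False)

# Columnar reads touch only the columns a query needs, which is the kind
# of restructuring behind the 12-minutes-to-12-seconds speedup quoted above.
telemetry = pd.read_parquet(
    "stc1250_telemetry.parquet",
    columns=["timestamp", "spindle_load"],   # hypothetical column names
)
overloads = telemetry[telemetry["spindle_load"] > 0.9]
```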
Storing data in the cloud is extremely inexpensive, and that is why, according to Seun Ojo, a software engineer in the process monitoring and control team, cloud technology is a viable option for SMEs working with the AMRC, part of the High Value Manufacturing (HVM) Catapult.
He says: "SMEs are typically aware of Industry 4.0 but concerned about the return on investment. Fortunately, cloud infrastructure is hosted externally and provided on a pay-per-use basis. Therefore, businesses may now access data capture, storage and analytics tools at a reduced cost."
Mr Jones adds: "Businesses can easily hire a graphics processing unit (GPU) for an hour or a quantum computer for a day to do some really complicated processing, and you can do all this on a pay-as-you-go basis."
"The bar to entry to doing machine learning has never been lower. Ten years ago, only data scientists had the skills to do this kind of analysis, but the tools available from cloud platforms like Microsoft Azure and Google Cloud now put a lot of power into the hands of inexpert users."
Mr Jones says the trials being done with Amido could feed into research being done by the AMRC into non-geometric validation.
He concludes: "Rather than measuring the length and breadth of a finished part to validate that it has been machined correctly, I want to see engineers use data to determine the quality of a job."
"That could be really powerful and, if successful, would make the process of manufacturing much quicker. That shows the value of data in manufacturing today."
AMRC: www.amrc.co.uk
Amido: www.amido.com
Innovative AI and Machine-Learning Technology That Detects Emotion Wins Top Award – Express Computer
CampaignTester was awarded Best Application of Artificial Intelligence to Optimize Creative at the 2020 Campaigns & Elections Reed Awards.
CampaignTester is a cutting-edge mobile-based platform that utilizes emotion analytics and machine learning to detect a user's emotion and engagement level while watching video content. Their proprietary platform aims to deliver key audience insights for organizations to validate, revise and perfect their video content messaging.
Campaigns & Elections Reed Award winners represent the best-of-the-best in the political campaign and advocacy industries. The 2020 Reed Awards honored winners across 16 distinct category groups, representing the different specialisms of the political campaign industry, with distinct category groups for International (non-US) work, and Grassroots Advocacy work.
"It was particularly meaningful being recognized among some of the finest marketers and technologists in the world," affirmed Bill Lickson, CampaignTester's Chief Operating Officer. "I was thrilled and honored to accept this prestigious award on behalf of our entire talented team."
Aaron Itzkowitz, Chief Executive Officer and Founder of CampaignTester, added: "This award is a great start to what looks to be a wonderful year for our client-partners and our company. While our technology was recognized for excellence in political marketing, our technology is for any industry that uses video in marketing."
Answering the Question Why: Explainable AI – AiThority
The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI?
Although the ability to explain the results of Machine Learning models, and produce consistent results from them, has never been easy, a number of emergent techniques have recently appeared to open the proverbial black box rendering these models so difficult to explain.
One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether they're related and how frequently they take place together.
When the knowledge graph environment becomes endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the critical aspect of causation necessary for Explainable AI.
Investments in AI may well hinge upon such visual methods for demonstrating causation between events analyzed by Machine Learning.
As Judea Pearl's renowned The Book of Why affirms, one of the cardinal statistical concepts upon which Machine Learning is based is that correlation isn't tantamount to causation. Part of the pressing need for Explainable AI today is that in the zeal to operationalize these technologies, many users are mistaking correlation for causation, which is perhaps understandable because aspects of correlation can prove useful for determining causation. In ascending order of importance, an abridged hierarchy of statistical concepts contributing to Explainable AI involves:
Causation is the foundation of Explainable AI. It enables organizations to understand that, given X, they can predict the likelihood of Y. In aircraft repairs, for example, causation between events might empower organizations to know that when a specific part in an engine fails, there's a greater probability of having to replace cooling system infrastructure.
There's an undeniable temporal element of causation readily illustrated in knowledge graphs, so when depicting real-world events, organizations can ascertain which took place first and how it might have affected others. This added temporal dimension is critical in establishing causation between events, such as patients having both HIV and bipolar disorder. In this domain, deep neural networks and other black-box Machine Learning approaches can pinpoint any number of interesting patterns, such as the fact that there's a high co-occurrence of these conditions in patients.
When modeling these events in graph settings alongside other relevant events, like what erratic decisions individual bipolar patients made relating to their sexual or substance abuse activities, they might differentiate various aspects of correlation. However, the ability to dynamically visualize the sequence of those events, to see which took place before what and how that contributed to other events, is indispensable to finding causation.
The flexibility of the knowledge graph schema enables organizations to specify the start and end time of events. When leveraging speech recognition technologies in contact centers for sales opportunities, organizations can model when agents mentioned certain sales products, how long they talked about them, and the same information for customers. Visual graph mechanisms can depict these events sequentially, so organizations can see which led to what. Without this temporal method, organizations can use Machine Learning to establish only co-occurrence and correlation between products.
Nevertheless, the ability to traverse these events at various points in time allows them to see which products, services, or customer prototypes generate interest in other offerings. This causation is decisive for increasing the accuracy of machine learning predictions about how to boost sales. As valuable as this capacity is, the more meritorious quality of such causation is that the explanation for these predictions is not only perfectly clear but can also be visualized.
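As a minimal sketch of the idea, assuming a networkx graph with invented contact-center events: nodes carry start and end times, edges are candidate causal links, and a link is only temporally plausible if the cause ends before the effect starts.

```python
import networkx as nx

g = nx.DiGraph()
# Events with start/end times in seconds (hypothetical contact-center call).
g.add_node("agent_mentions_upgrade", start=12.0, end=30.5)
g.add_node("customer_asks_price",   start=31.0, end=44.0)
g.add_edge("agent_mentions_upgrade", "customer_asks_price")  # candidate cause

def temporally_consistent(graph):
    """A causal edge is plausible only if the cause ends before the effect starts."""
    return all(
        graph.nodes[u]["end"] <= graph.nodes[v]["start"]
        for u, v in graph.edges
    )

print(temporally_consistent(g))  # True: the mention precedes the price question
```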
Causation is the basis for understanding the predictions of Machine Learning models. Knowledge graphs have visualizations enabling organizations to go back and forth in time to see which events are causative to others. This capability is vital to solving the issue of Explainable AI.