Category Archives: Machine Learning

Stockholm-based medtech AlgoDx raises €600K to save lives with its machine learning diagnostics – EU-Startups

Swedish startup AlgoDx, which focuses on supporting disease detection and prediction with machine learning algorithms, has closed a €600K seed round. The round was led by Nascent Invest, with participation from angel investors Fredrik Sjödin and Tomas Mora-Morrison, co-founder of Cambio Healthcare Systems.

Founded in 2018, AlgoDx uses artificial intelligence and machine learning to increase efficiency and save time for healthcare professionals. Today, many healthcare processes require manual input and analysis, which is time-consuming, expensive and leaves room for human error. In sepsis treatment, for example, the time factor is critical, as the cornerstones of intervention are early and appropriate antibiotics together with source control and fluid administration. Current detection methods for sepsis are incapable of early prediction. That's where AlgoDx comes in.

AlgoDx's first product, ExPRESS, has been developed to autonomously predict sepsis in hospitalized patients using data from electronic healthcare records. Reliable early prediction can mean the difference between life and death for patients who develop sepsis.
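The article doesn't describe ExPRESS's internals, but an early-warning system of this general kind can be sketched as a logistic risk score over vital signs pulled from the EHR. The features, weights and thresholds below are entirely illustrative placeholders, not AlgoDx's model.

```python
import math

# Illustrative, made-up weights for a toy logistic sepsis risk score.
# A production model is trained on large EHR data sets; these numbers
# exist only to show the shape of the computation.
WEIGHTS = {
    "heart_rate": 0.03,   # beats per minute
    "resp_rate": 0.10,    # breaths per minute
    "temperature": 0.40,  # degrees Celsius
    "wbc_count": 0.05,    # white blood cells, 10^9/L
}
BIAS = -22.0

def sepsis_risk(vitals: dict) -> float:
    """Return a probability-like risk score in [0, 1]."""
    z = BIAS + sum(WEIGHTS[k] * vitals[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A deteriorating patient scores higher than a stable one.
stable = {"heart_rate": 70, "resp_rate": 14, "temperature": 36.8, "wbc_count": 7}
deteriorating = {"heart_rate": 120, "resp_rate": 28, "temperature": 39.5, "wbc_count": 18}
assert sepsis_risk(deteriorating) > sepsis_risk(stable)
```

In practice such a score would be recomputed continuously as new EHR observations arrive, with an alert fired when it crosses a clinically validated threshold.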

"We invested in AlgoDx because we believe that the team has a strong competitive edge within machine learning and a profound understanding of the clinical validation required to bring products to market in areas with unmet medical need," says Erik Gozzi, CEO at Nascent Invest.

The fresh funds will be used to further develop its first product, ExPRESS, specifically by scaling clinical validation and demonstrating the benefits of autonomous sepsis risk monitoring in patients being treated in intensive care units.

"This seed round will allow us to continue the clinical validation of our sepsis prediction algorithm as planned. We are very proud to be supported by investors with a commercial outlook and a long-term investment horizon," says David Becedas, CEO at AlgoDx.

"We are at the commencement of a new age where machine learning approaches will enable earlier and more accurate detection and prediction of disease. The founding team at AlgoDx understands that clinical rigor is essential in order to bring machine learning solutions to market with integrations into electronic healthcare record systems," says Tomas Mora-Morrison, who will also chair the company's new board.

See the original post:
Stockholm-based medtech AlgoDx raises €600K to save lives with its machine learning diagnostics - EU-Startups

Artificial intelligence and machine learning for data centres and edge computing to feature at Datacloud Congress 2020 in Monaco – Data Economy

Vertiv EMEA president Giordano Albertazzi looks back on data center expansion in the Nordics and the region's role as an efficient best execution venue for the future.

At the start of the new year it's natural to look to the future. But it's also worth taking some time to think back to the past.

Last year was not only another period of strong data center growth globally, and in the Nordic region specifically, but also the end of a decade of sustained digital transformation.

There have been dramatic shifts over the last ten years, but the growth in hyperscale facilities is one of the most defining, and one with which the Nordic region is very well acquainted.

According to figures from industry analyst Synergy Research, the total number of hyperscale sites has tripled since 2013 and there are now more than 500 such facilities worldwide.

And it seems that growth shows no signs of abating. According to Synergy, in addition to the 504 current hyperscale data centers, a further 151 are at various stages of planning or building.

A good number of those sites will be sited in the Nordics if recent history is anything to go by. The region has already seen significant investment from cloud and hyperscale operators such as Facebook, AWS and Apple. Google was also one of the early entrants and invested $800 million in its Hamina, Finland facility in 2010. It recently announced plans to invest a further $600 million in an expansion of that site.

I was lucky enough to speak at the recent DataCloud Nordics event at the end of last year. My presentation preceded Google's country manager, Google Cloud, Denmark and Finland, Peter Harden, who described the company's growth plans for the region. Hamina, Finland is one of Google's most sustainable facilities, thanks in no small part to its Nordic location, which enables 100% renewable energy and innovative seawater cooling.

Continuing that theme of sustainability, if the last decade has been about keeping pace with data demand, then the next ten years will be about continued expansion but, importantly, efficient growth in the right locations, using the right technology and infrastructure. The scale of growth being predicted (billions of new edge devices, for example) will necessitate a sustainable approach.

That future, we at Vertiv and others believe, will be based around putting workloads where they make most sense from a cost, risk, latency, security and efficiency perspective. Or, as industry analyst 451 Research puts it: the Best Execution Venue (BEV), a slightly unwieldy term but an accurate one. BEV refers to the specific IT infrastructure an app or workload should run on (cloud, on-premise or at the edge, for example) but could equally apply to the geographic location of data centers.

In that BEV future, the Nordics will become increasingly important for hosting a variety of workloads, but the sweet spot could be those that are less latency-sensitive (high-performance compute, or HPC, for example) and can therefore benefit from the stable, renewable and cheap power as well as the abundance of free cooling. Several new sub-sea cables coming online in the near future will also address some of the connectivity issues the region has faced.


A recent study by the Nordic Council of Ministers estimates that approximately EUR 2.2 bn has been invested in the Nordics in initiated data centre construction works over the last 12 to 18 months (2018), mainly within hyperscale and cloud infrastructure. This number could exceed EUR 4 bn annually within the next five to seven years because of increasing market demand and a pipeline of planned future projects.

Vertiv recently conducted some forward-looking research that appears to reinforce the Nordics' future potential. Vertiv first conducted its Data Center 2025 research back in 2014 to understand where the industry thought it was headed. In 2019, we updated that study to find out how attitudes had shifted in the intervening five years, a halfway point, if you will, between 2014 and 2025.

The survey of more than 800 data center experts covers a range of technology areas, but let's focus on a few that are important and relevant to the Nordics.

We mentioned the edge a little earlier when talking about BEV. Vertiv has identified four key edge archetypes that cover the edge use cases our experts believe will drive edge deployments in the future. According to the 2025 research, of those participants who have edge sites today, or expect to have edge sites in 2025, 53% expect the number of edge sites they support to grow by at least 100%, with 20% expecting an increase of 400% or more.

So along with providing a great venue for future colo and cloud growth, the Nordics, like other regions, are also likely to see strong edge growth. That edge demand will require not only new data center form factors, such as prefabricated modular (PFM) data center designs, but also monitoring and management software and specialist services.

Another challenge around edge compute, and the core for that matter, is energy availability and, increasingly, access to clean, renewable energy.

The results of the 2025 research revealed that respondents are perhaps more realistic and pragmatic about the importance of and access to clean power than back in 2014. Participants in the original survey projected 22% of data center power would come from solar and an additional 12% from wind by 2025. That's a little more than one-third of data center power from these two renewable sources, which seemed like an unrealistic projection at the time.

This year's numbers for solar and wind (13% and 8% respectively) seem more realistic. However, importantly for Nordic countries with an abundance of hydropower, participants in this year's survey expect hydro to be the largest energy source for data centers in 2025.

The Data Center 2025 research also looked at one of the other big drivers for building capacity in the Nordics: access to efficient cooling.

According to the 2025 survey, around 42% of respondents expect future cooling requirements to be met by mechanical cooling systems. Liquid cooling and outside air also saw growth, from 20% in 2014 to 22% in 2019, likely driven by the more extreme rack densities being observed today. This growth in the use of outside air obviously benefits temperate locations like the Nordics.

In summary, if the last ten years have been about simply keeping up with data center demand, the next ten years will be about adding purposeful capacity in the most efficient, sustainable and cost-effective way: the right data center type, thermal and power equipment, and location for the right workloads.

If the past is anything to go by, the Nordics will have an important role to play in that future.


The rest is here:
Artificial intelligence and machine learning for data centres and edge computing to feature at Datacloud Congress 2020 in Monaco - Data Economy

Sony Music and The Orchard held a machine-learning hackathon – Music Ally

Sony Music, The Orchard and Amazon Web Services held a machine-learning focused hackathon last week. Music Ally couldn't make the event, so we asked if Sony could give us some information about the winning hacks. The label group came back with a full report on the event, quotes 'n' all. So, in its own words, here's what went down at Music ML 2020.

Music ML 2020 is the second annual hackathon held by Sony Music, The Orchard and Amazon Web Services (AWS), and the first to be expanded beyond the US. The Orchard office hosted the New York competition and Sony Music simultaneously hosted in London over the course of three days. The hackathon's primary purpose is to allow cross-functional teams of industry professionals to create business solutions at the intersection of music and machine learning.

Each team was tasked with creating a proof-of-concept for a new machine-learning solution that would positively impact the business. At The Orchard, teams worked daily to connect fans to songs, albums, and videos from their favourite artists across the globe.

The hackathon focused on improving the quality of content releases and making them more discoverable, using machine learning as a toolset to support creative decisions. The opening keynote by Jacob Fowler, Chief Technology Officer at The Orchard, challenged competitors to inspire, create, and push the boundaries of innovation.

In each location the teams comprised technical staff from Sony Music, The Orchard UK, and solutions architects from AWS. BRIT-nominated producer DJ Fresh, who also works as a machine-learning engineer, joined the British hackathon to lend his expertise. On the competition's final day, the UK teams presented their projects, followed by the New York teams.

Five executives made up the UK judging panel. From Sony Music UK: Cassandra Gracey, President, 4th Floor Creative Group; Michael Hanson, Head of Digital, Columbia Records; and Olivier Parfait, Director, Global Business Development & Digital Strategy. From The Orchard UK: Chris Manning, General Manager, UK & EU, and Joe Andrews, Senior Director, International Sales and Marketing.

The US judges were Jacob Fowler, CTO, The Orchard; Rachel Stoewer, VP, Artist and Label Services, The Orchard; Devki Patel, Strategy and Finance; and Chris Frankenberg, VP of Emerging Technology, Global Digital Business, Sony Music.

In London the winning team designed and trained a machine learning model to understand and monitor user behaviour and interactions with brands and artists. This concept puts a powerful and user-friendly tool in the hands of marketers, empowering them to rapidly create retargeting lists comprised of high-value users.

This is particularly valuable for breaking new artists, as the model can identify high-value users with only a small amount of input data. The tool can also be used by sync and brand partnership teams, enabling them to actively pitch brands on potential deals.

In New York, the winning team produced a model that uses machine learning to auto-generate artwork for artist and label merchandising. This innovative design created interesting and creative opportunities for artists and labels to utilise existing content on various platforms.

Lewis Donovan, lead web developer & Music ML project lead for Sony Music UK, said:

After three intense days in London the teams produced a tool that analyses data to inform A&R decisions, a spike analysis tool and an artist/brand/user affinity tool. Achieving this in just three days, by teams with varying degrees of coding experience, illustrates the energy, creativity and ambition for innovation here. Every participant in the UK event had the opportunity to collaborate with new people, learn new skills, and help create a new business application using machine learning.

DJ Fresh, Ministry of Sound artist and machine learning engineer, said:

The way we consume media now is three-dimensional; as an artist I want to learn how to use that palette. Over the last few years, being an all-or-nothing kind of guy, I devoted myself to AI and trained as a machine-learning engineer, so when Sony Music asked me to get involved I thought it sounded like an interesting challenge.

I've always been involved with tech: in the early 2000s I started a site that was the biggest music forum on the net at the time, a place for the drum and bass scene to network. Now I'm working on a new app called Golddust (Golddust.io), and with my new single "Drive" out now, it's a very creative time for me.

Cassandra Gracey, President, 4th Floor Creative, and Music ML judge, said:

I loved seeing what could be achieved in such a short space of time, looking at ways to boost our business using various machine learning applications and our owned data. I look forward to the winning team's programme being finished so I can start using it!

Sony also noted that the winning UK team included Sony Music UK's web developer Lewis Donovan, Francesca Lamaina and Josh Rubner, AWS' Andrew Morrow, and (via a Hamburg dial-in) The Orchard's Dino Celotti. The winning New York team was The Orchard's Anthony Khoudary, Jinny Yang, and Peter Iannone, and AWS team members Julia Soscia, Rahul Popat, and Alex Jestin Taylor.

Stuart Dredge


Read this article:
Sony Music and The Orchard held a machine-learning hackathon - Music Ally

Keeping machine learning algorithms humble and honest in the ‘ethics-first’ era – TechNative

AI and machine learning (ML) applications have been at the centre of several high-profile controversies; witness the recent Apple Card credit limit differences and Amazon's recruitment tool bias.

Mind Foundry has been a pioneer in the development and use of humble and honest algorithms from the very beginning of its applications development. As Davide Zilli, Client Services Director at Mind Foundry, explains, baked-in transparency and explainability will be vital in winning the fight against biased algorithms and inspiring greater trust in AI and ML solutions.

Today in so many industries, from manufacturing and life sciences to financial services and retail, we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and beneficial for augmenting human expertise within an organisation. But they are now under the spotlight for many reasons, and regulation is on the horizon, with Gartner projecting that four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It remains vital that we understand their reasoning and decision-making process at every step.

Algorithms need to be fully transparent in their decisions, easily validated and monitored by a human expert. Machine learning tools must introduce this full accountability to evolve beyond unexplainable black-box solutions and eliminate the easy excuse of "the algorithm made me do it!"

Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.

Gender, for example, might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters, such as gender, in the data sets they have access to, so it is vital for users to understand the steps taken for a model to reach a specific conclusion.
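As a minimal sketch of that preparation step, a pipeline can strip sensitive parameters before modelling unless a use case (such as disease-risk prediction) explicitly justifies keeping them. The column names here are hypothetical, and a real pipeline would also have to check for proxy variables that correlate with the excluded attribute.

```python
# Columns treated as sensitive; purely illustrative.
SENSITIVE = {"gender"}

def prepare(record: dict, allow_sensitive: bool = False) -> dict:
    """Drop sensitive parameters unless the use case justifies them
    (e.g. gender in a disease-risk model)."""
    if allow_sensitive:
        return dict(record)
    return {k: v for k, v in record.items() if k not in SENSITIVE}

row = {"age": 41, "income": 52000, "gender": "F"}
assert "gender" not in prepare(row)                          # default: excluded
assert "gender" in prepare(row, allow_sensitive=True)        # opt-in: retained
```

Making the exclusion an explicit, logged decision, rather than an implicit default, is what lets a reviewer later verify which parameters a model could possibly have exploited.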

Removing the complexity of the data science procedure will help users discover and address bias faster and better understand the expected accuracy and outcomes of deploying a particular model.

Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately justify the outcome. First steps towards this explainability would be features in the ML tool to enable the visual inspection of data, with the platform alerting users to potential bias during preparation, and metrics on model accuracy and health, including the ability to visualise what the model is doing.

Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations, such as the European Union's GDPR "right to explanation" clause, and helps effectively demonstrate transparency to consumers.
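An audit trail of that kind can be sketched in a few lines. This toy recorder (the class, field names and actions are assumptions for illustration, not any vendor's API) timestamps each data operation so the full preparation history can later be reviewed or replayed:

```python
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of data-science operations with UTC timestamps."""

    def __init__(self):
        self.events = []

    def record(self, action: str, dataset: str) -> None:
        self.events.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "dataset": dataset,
        })

trail = AuditTrail()
trail.record("import", "sales_q3.csv")       # when the data set arrived
trail.record("drop_column", "sales_q3.csv")  # how it was manipulated
assert [e["action"] for e in trail.events] == ["import", "drop_column"]
```

Because the log is append-only and ordered, replaying it against the raw data reproduces the same prepared data set, which is exactly the replicability benefit described below for repetitive tasks.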

There is a further advantage here of allowing users to quickly replicate the same preparation and deployment steps, guaranteeing the same results from the same data, which is particularly vital for achieving time efficiencies on repetitive tasks. We find, for example, that in the life sciences sector users are particularly keen on replicability and visibility for ML, where it becomes an important facility in areas such as clinical trials and drug discovery.

There are so many different model types that it can be a challenge to select and deploy the best model for a task. Deep neural network models, for example, are inherently less transparent than probabilistic methods, which typically operate in a more honest and transparent manner.

Here's where many machine learning tools fall short. They're fully automated, with no opportunity to review and select the most appropriate model. This may help users rapidly prepare data and deploy a machine learning model, but it provides little to no prospect of visual inspection to identify data and model issues.

An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage; provide support through to creation, where it will visualise what the chosen model is doing and provide accuracy metrics; and then on to deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.

To build greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can test a new data set and receive scores of the model's performance. This helps identify bias and make changes to the model accordingly.

During model deployment, the most effective platforms will also extract extra features from data that are otherwise difficult to identify and help the user understand what is going on with the data at a granular level, beyond the most obvious insights.

The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualise and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.

The introduction of explainability and enhanced governance into ML platforms is an important step towards ethical machine learning deployments, but we can and should go further.

Researchers and solution vendors hold a responsibility as ML educators to inform users of the use and abuses of bias in machine learning. We need to encourage businesses in this field to set up dedicated education programmes on machine learning including specific modules that cover ethics and bias, explaining how users can identify and in turn tackle or outright avoid the dangers.

Raising awareness in this manner will be a key step towards establishing trust for AI and ML in sensitive deployments such as medical diagnoses, financial decision-making and criminal sentencing.

AI and machine learning offer truly limitless potential to transform the way we work, learn and tackle problems across a range of industries, but ensuring these operations are conducted in an open and unbiased manner is paramount to winning and retaining both consumer and corporate trust in these applications.

The end goal is truly humble, honest algorithms that work for us and enable us to make unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.

Recent research shows that 84% of CEOs agree that AI-based decisions must be explainable in order to be trusted. The time is ripe to embrace AI and ML solutions with baked-in transparency.

Featured image: MaZi

See the rest here:
Keeping machine learning algorithms humble and honest in the 'ethics-first' era - TechNative

A 5-Year Vision for Artificial Intelligence in Higher Ed – EdTech Magazine: Focus on Higher Education

The Historical Hype Cycle of AI

Before talking about the current and projected impact of AI in education and other industries, Ramsey explained the concept of the AI winter.

He showed a graph on the historical hype cycle of AI that featured peaks and drops over a 70-year period.

There was a big peak in the mid-1960s, when there was an emergence of symbolic AI research and new insights into the possibility of training two-layer neural networks. A resurgence came in the 1980s with the invention of certain algorithms for training three-plus layer neural networks.

The graph showed a drop in the mid-1990s, as the computational horsepower and data did not exist to develop real-world applications for AI, a situation he calls an "AI winter". We are in the middle of another resurgence today, he said.

"There has been a huge increase in the amount of data and computer power that we have available, sparking research," Ramsey said. "People have been able to start inventing algorithms and training not just three-layer neural networks but a 100-layer one."

The question now is where we will go next, he said. His answer? We will sustain progress, leading to true or strong AI: the point at which a machine's intellectual capability is functionally equal to a human's.

"The number of researchers working on this, the amount of money that's being spent on this and the amount of research publications, it's all growing," he said. "And where Google is right now is on a plateau of productivity, because we're using AI in everything that we do, at scale."


During his presentation, Ramsey showed an infographic that featured what machine learning could look like across a student's journey through higher education, starting from their college search and ending with employment.

For example, he said, colleges and universities can apply machine learning when targeting quality prospective students to attend their schools. They can even automate call center operations to make contacting prospective students more efficient and deploy AI-driven assistants to engage with applicants in a personalized way, he said.

Once students are enrolled, they can also use AI chatbots to improve student support services, assisting new students in their adjustment to college. They can leverage adaptive learning technology to predict performance as they choose a path through school, and they can tailor material to their knowledge levels and learning styles.

For example, a machine learning algorithm helped educators at Ivy Tech Community College in Indianapolis identify at-risk students and provide early intervention, Ramsey said.

Ivy Tech shifted to Google Cloud Platform, which allowed the school to manage 12 million data points from student interactions and develop a flexible AI engine to analyze student engagement and success. For instance, a student who stops logging in to their learning management system or showing up to class would be flagged as needing assistance.

The predictions were 83 percent accurate, Ramsey said. "It worked quite well, and they were actually able to save students from dropping out, which makes a big difference because their funding is based on how many students they have," he said.
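The article doesn't publish Ivy Tech's model, but the login-based trigger it describes can be sketched as a simple inactivity rule feeding a larger engagement engine. The 14-day threshold below is an assumption for illustration only:

```python
from datetime import date, timedelta

# Hypothetical threshold: flag a student whose last LMS login
# is more than two weeks old.
INACTIVITY_THRESHOLD = timedelta(days=14)

def needs_assistance(last_login: date, today: date) -> bool:
    """Return True when the student's LMS inactivity exceeds the threshold."""
    return (today - last_login) > INACTIVITY_THRESHOLD

today = date(2020, 2, 1)
assert needs_assistance(date(2020, 1, 10), today)      # 22 days inactive: flag
assert not needs_assistance(date(2020, 1, 25), today)  # 7 days inactive: fine
```

In a real deployment a rule like this would be only one of many engagement signals (attendance, grades, assignment submissions) combined by a trained model rather than a fixed cutoff.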

As students near graduation and start their job searches, schools can also use AI to understand career trends and match them to a student's competencies and skills. Machine learning can be used to better understand job listings and a jobseeker's intent, matching candidates to their ideal jobs more quickly.

"At the end of the day, what we're doing with these technologies is trying to understand who we are and how our minds work," Ramsey said. "Once we fully understand that, we can build machines that function in the same way, and the possibilities are endless."

Excerpt from:
A 5-Year Vision for Artificial Intelligence in Higher Ed - EdTech Magazine: Focus on Higher Education

Workday, Machine Learning, and the Future of Enterprise Applications – Cloud Wars

That technological sophistication starts at the top. A few months ago, in an exclusive interview, Workday CEO Aneel Bhusri described himself as the company's "Pied Piper of ML" for his passionate advocacy of a technology that he believes will be even more disruptive than the cloud.

In his own understated but high-impact way, Workday cofounder and CEO Aneel Bhusri has become one of the world's most bullish evangelists for the extraordinary power and potential of machine learning.

"We've always talked about predictive analytics, but they're now a reality, and it's really a reality," Bhusri said in a recent exclusive interview.

"It's what we've dreamed about for a long time. But we never actually got there because the technologies weren't there. But now they're here."

And Bhusri is making sure that Workday, which is on the verge of posting its first billion-dollar quarter, is at the forefront in giving corporate customers the full benefits of ML's transformative capabilities.

"Machine learning is just so profound, right? It's impacting all of our lives in so many ways," Bhusri said when I brought up his comment that ML will be even more disruptive than the cloud.

"Internally, I described my role to the company as the pied piper of machine learning," he said with a chuckle. "And I asked every employee in the company to buy the book Prediction Machines and charge it back to Workday, because we all have to get comfortable with this new world, be able to succeed in it, and be able to talk to our customers about it."

It looks like one of the ways Bhusri is helping Workday's entire workforce get comfortable with this new world is by letting them know that he's driving the conversation for that conversion.

"For me there's actually something very gratifying when I can say, okay, not going to try to get the engineers to work on five different things," says Bhusri, who refers to himself self-effacingly as "a products guy."

"So every time I see one of our engineers or developers, I ask, what are you doing on machine learning? Or what do you think about machine learning? And what should we be doing with machine learning?"

"Pretty soon they're all saying, 'Okay, before I meet with Aneel, I know he's going to ask about machine learning, so I should have my act together,'" Bhusri said. "It gets everybody on the same page. People are excited."

At least so far, Workdays customers have been eager to share that excitement and allow Workday to help them build their digital futures.

More here:
Workday, Machine Learning, and the Future of Enterprise Applications - Cloud Wars

ML Based Threat Analytics Tools: Ensuring a Secure Network and Improved Cybersecurity Posture – AiThority

IDC White Paper Sponsored by LinkShadow an Innovative Cybersecurity Organization.

LinkShadow Next-Generation Cybersecurity Analytics supports IDC with its recent white paper, which discusses how the adoption of machine learning-based threat analytics tools will be critical for organizations in the coming years.

Today's advanced security analytics platforms gather data from different sources, be it internal network traffic or the security solutions already implemented, to cover all the gaps and help visualize threats at different stages, providing organizations with a complete overview that eases response to threats.


This IDC white paper goes into detail about using machine learning to advance threat hunting capabilities, which will help complement the security tools already in place. This will enhance the overall security infrastructure and give security teams an edge against advanced threats.

LinkShadow is a next-generation cybersecurity platform with behavior analytics and extensive machine learning capabilities to detect both cyber and internal threats. LinkShadow has a wide range of solutions that focus on every level of your security team: C-level management with the CXO managerial dashboards, visualization and VR; security analysts with the advanced machine learning algorithm use cases; and the SOC team with threat prioritization.


"Being ahead of adversaries and preempting their next step is a top priority for any security team, and to be successful in this task, organizations have to be equipped with the right investigating and threat hunting tools. This IDC report is a valuable source of information on the adoption of enhanced threat intelligence and advanced analytics capabilities. LinkShadow's core value proposition is threat hunting with the use of machine learning that can defeat the next generation of cybercriminals, gives you a complete view of your network, and can prioritize response to incidents or threats based on the severity of a risk," said Fadi Sharaf, sales director, LinkShadow.


Read more:
ML Based Threat Analytics Tools: Ensuring a Secure Network and Improved Cybersecurity Posture - AiThority

Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

As the fuel that powers their ongoing digital transformation efforts, businesses everywhere are looking for ways to derive as much insight as possible from their data. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the citizen data scientist has recently arisen to help close the skills gap. Filling a complementary role rather than acting as a direct replacement, citizen data scientists lack advanced data science expertise, yet they are capable of generating models using state-of-the-art diagnostic and predictive analytics. This capability is partly due to the advent of accessible new technologies, such as automated machine learning (AutoML), that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.
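Stripped to its essence, the algorithm-selection and hyper-parameter-tuning steps form a search loop: train each candidate configuration, score it on held-out data, and keep the best. The toy pure-Python sketch below illustrates the principle; real AutoML systems search vastly larger spaces with far smarter strategies:

```python
import random

# Synthetic 1-D binary classification data: the label is 1 when x > 0.5.
random.seed(0)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]
train, valid = data[:150], data[150:]

def make_threshold_clf(threshold):
    """Candidate model family: predict 1 when x exceeds a threshold."""
    return lambda x: int(x > threshold)

def make_majority_clf(rows):
    """Baseline model: always predict the most common training label."""
    label = int(sum(y for _, y in rows) * 2 >= len(rows))
    return lambda x: label

def accuracy(clf, rows):
    return sum(clf(x) == y for x, y in rows) / len(rows)

# The "search": enumerate algorithms and hyper-parameters, score each
# on the validation split, and keep the best performer.
candidates = [("majority", make_majority_clf(train))]
candidates += [(f"threshold={t}", make_threshold_clf(t))
               for t in (0.1, 0.3, 0.5, 0.7, 0.9)]

best_name, best_clf = max(candidates, key=lambda c: accuracy(c[1], valid))
print(best_name, accuracy(best_clf, valid))
```

The human contribution shrinks to defining the search space and the scoring metric, which is exactly the barrier-lowering effect described above.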

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.


Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML at scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly began to use graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the Big Tech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in MLs evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to "make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of."

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, developed by researchers at Texas A&M University and built on the NAS approach.

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML starts to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify the projects whose insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa


Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Do you know how Google Maps predicts traffic? Are you amazed at how Amazon Prime or Netflix suggests just the movie you would want to watch? We all know it must be some form of Artificial Intelligence. Machine Learning uses algorithms and statistical models to perform such tasks. The same approach is used to find faces on Facebook and even to detect cancer. A Machine Learning course can teach the development and application of such models.

Artificial Intelligence mimics human intelligence. Machine Learning is one of the significant branches of it. There is an ongoing and increasing need for its development.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives. That is why data scientists are in such demand at present. An aspiring data scientist can learn to develop and apply algorithms by pursuing a Machine Learning certification.

Machine learning, as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires prior mathematical knowledge, but a Machine Learning online course would suggest otherwise: contrary to the popular bottom-up approach to studying, a top-down approach is used. An aspiring data scientist, a business person or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.

Microsoft's research lab uses Machine Learning to study cancer. This helps with individualized oncological treatment and the generation of detailed progress reports. Data engineers apply pattern recognition, Natural Language Processing and computer vision algorithms to work through large datasets, which aids oncologists in conducting precise and breakthrough tests.

Likewise, machine learning is applied in biomedical engineering. This has led to automation of diagnostic tools. Such tools are used in detecting neurological and psychiatric disorders of many sorts.

We have all had a conversation with Siri or Alexa. They use speech recognition to take in our requests, and Machine Learning is applied to auto-generate responses based on previous data. Hello Barbie is a Siri-like version for kids to play with; it uses advanced analytics, machine learning and Natural Language Processing to respond. This was the first AI-enabled toy, and it could lead to more such inventions.

Google uses Machine Learning statistical models to acquire inputs. The models collect details such as the distance from start point to end point, duration and bus schedules. Such historical data is stored and reused. Machine Learning algorithms developed with the objective of data prediction recognise the patterns among such inputs and predict approximate time delays.
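At its simplest, predicting a travel time from such inputs is a regression problem: fit a line through historical (distance, duration) pairs and read off an estimate for a new trip. Here is a toy least-squares sketch with made-up figures; Google's production models are, of course, far more sophisticated:

```python
# Historical trips: distance in km, observed duration in minutes (made up).
distances = [2.0, 5.0, 8.0, 12.0, 20.0]
durations = [7.0, 14.0, 23.0, 33.0, 55.0]

n = len(distances)
mean_x = sum(distances) / n
mean_y = sum(durations) / n

# Ordinary least squares for a single feature.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(distances, durations))
         / sum((x - mean_x) ** 2 for x in distances))
intercept = mean_y - slope * mean_x

def predict_minutes(km):
    """Estimated travel time for a trip of the given distance."""
    return slope * km + intercept

print(round(predict_minutes(10.0), 1))  # roughly 28 minutes on this data
```

Real traffic prediction adds many more features (time of day, current congestion, bus schedules) and non-linear models, but the fit-then-predict pattern is the same.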

Another well-known Google application, Google Translate, involves Machine Learning. Deep learning aids in learning language rules through recorded conversations. Neural networks such as Long Short-Term Memory (LSTM) networks aid in retaining and updating information over the long term, while recurrent neural networks identify sequences in the input. Even bilingual processing is feasible nowadays.

Facebook uses image recognition and computer vision to analyse images fed as inputs. The statistical models developed using Machine Learning map any information associated with these images. Facebook generates automated captions for images, meant to provide descriptions for visually impaired people. This innovation has nudged data engineers to come up with other such valuable real-time applications.

Netflix's aim here is to increase the likelihood of a customer watching a recommended movie. This is achieved by studying thumbnails: every available movie has separate thumbnails, each assigned an individual numerical value, and an algorithm studies which thumbnails a viewer has responded to. A recommendation is then generated by pattern recognition among the numerical data.

Tesla uses computer vision, data prediction and path planning for autonomous driving. The machine learning practices applied make the innovation stand out. Deep neural networks work with training data and generate driving instructions; advancements such as changing lanes are instructed based on imitation learning.

Gmail, Yahoo Mail and Outlook employ machine learning techniques such as neural networks. These networks detect patterns in historical data, training on received examples of spam and phishing messages. It is noted that these spam filters provide 99.9 percent accuracy.
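The underlying idea of learning patterns from labeled messages can be illustrated with a much simpler technique than the neural networks these providers use, such as a tiny naive Bayes word model (the corpus below is invented):

```python
import math
from collections import Counter

# A tiny hand-labeled corpus; real filters train on millions of messages.
spam = ["win money now", "free prize claim now", "claim free money"]
ham = ["meeting at noon", "project status update", "lunch at noon today"]

def word_counts(messages):
    return Counter(w for m in messages for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab_size = len(set(spam_counts) | set(ham_counts))

def log_likelihood(message, counts):
    total = sum(counts.values())
    # Laplace smoothing so unseen words do not zero out the score.
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in message.split())

def classify(message):
    s = log_likelihood(message, spam_counts)
    h = log_likelihood(message, ham_counts)
    return "spam" if s > h else "ham"

print(classify("claim your free money"))   # -> spam on this toy corpus
print(classify("status update for the meeting"))
```

Production filters layer neural networks, sender reputation and many other signals on top, but the train-on-labeled-history principle is the same.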

As people grow more health-conscious, the development of fitness-monitoring applications is on the rise. A market leader, Fitbit ensures its productivity by employing machine learning methods: trained models predict user activities through data pre-processing, data processing and data partitioning. There remains room to extend the application to additional purposes.

The above-mentioned applications are just the tip of the iceberg. Machine learning, being a subset of Artificial Intelligence, finds its place in many other streams of daily activity.



This AI Researcher Thinks We Have It All Wrong – Forbes

Dr. Luis Perez-Breva

Luis Perez-Breva is an MIT professor and the faculty director of innovation teams at the MIT School of Engineering. He is also an entrepreneur and part of The Martin Trust Center for MIT Entrepreneurship. Luis works on how we can use technology to make our lives better, and on how to get new technology out into the world. On a recent AI Today podcast, Professor Perez-Breva prompted us to think deeply about our understanding of both artificial intelligence and machine learning.

Are we too focused on data?

Anyone who has been following artificial intelligence and machine learning knows the vital centrality of data. Without data, we can't train machine learning models. And without machine learning models, we don't have a way for systems to learn from experience. Surely, data needs to be the center of our attention to make AI systems a reality.

However, Dr. Perez-Breva thinks that we are overly focused on data, and perhaps that extensive focus is causing the goals of machine learning and AI to go astray. According to Luis, so much focus is put into obtaining data that we judge how good a machine learning system is by how much data was collected, how large the neural network is, and how much training data was used. When you collect a lot of data, you are using that data to build systems that are primarily driven by statistics. Luis says that we latch onto statistics when we feed AI so much data, and that we ascribe intelligence to systems when, in reality, all we have done is create large probabilistic systems that, by virtue of large data sets, exhibit things we ascribe to intelligence. He says that when our systems aren't learning as we want, the primary gut reaction is to give these AI systems more data so that we don't have to think as much about the hard parts of generalization and intelligence.

Many would argue that there are some areas where you do need data to help teach AI. Computers are better able to learn image recognition and similar tasks by having more data. The more data, the better the networks, and the more accurate the results. On the podcast, Luis asked whether it is deep learning that is good enough to make this work, or simply that we now have big enough data sets that image recognition works. Basically: is it the algorithm or just the sheer quantity of data that is making this work?

Rather, what Luis argues is that if we can find a better way to structure the system as a whole, then the AI system should be able to reason through problems, even with very limited data. Luis compares using machine learning in every application to the retail world. He talks about how physical stores see the success of online stores and try to copy that success. One of the ways they are doing this is by using apps to navigate stores. Luis mentioned that he visited a Target where he had to use his phone to navigate the store, which was harder than being able to look at signs. Having a human to ask questions of is both faster and part of the experience of being in a brick-and-mortar retail location. Luis says he would much rather interact with a human at one of these locations than with a computer.

Is the problem deep learning?

He compares this to machine learning by saying that machine learning has a very narrow application. If you try to apply machine learning to every aspect of AI, you will end up with issues like the one he had at Target; it is basically looking at neural networks as a hammer and every AI problem as a nail. No one technology or solution works for every application. Perhaps deep learning only works because of vast quantities of data? Maybe there's a better algorithm that can generalize better, apply knowledge learned in one domain to another, and use smaller amounts of data to get much higher-quality insights.

People have recently tried to automate many of the jobs that people do. Throughout history, Luis says, technology has killed businesses when it tries to replace humans. Technology and businesses are successful when they expand on what humans can do. Attempting to replace humans is a difficult task and one that is going to lead companies down the road to failure. As humans, he points out, we crave human interaction. Even the generation that is constantly on its technology greatly desires human interaction.

Luis also makes the point that many people mistakenly confuse automation and AI, a confusion that comes up on many occasions. Automation is using a computer to carry out specific tasks; it is not the creation of intelligence. Indeed, it's the fear of automation and of fictional superintelligence that has many people worried about AI. Dr. Perez-Breva makes the point that many ascribe human characteristics to machines, but this should not be the case with AI systems.

Rather, he sees AI systems as more akin to a new species with a different mode of intelligence than humans. In his opinion, researchers are very far from creating an AI similar to what you will find in books and movies. He blames movies for giving people the impression of robots (AI) killing people and being dangerous technologies; while there are good robots in movies, there are few of them, and they get pushed to the side by bad robots. He points out that we need to move away from pushing these images of bad robots. Our focus needs to be on how artificial intelligence can help humans grow, and it would be beneficial if the movie-making industry could help with this. As such, AI should be thought of as a new intelligent species we're trying to create, not something that is meant to replace us.

A positive AI future

Despite negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments in AI that it would be difficult for them to just stop using it or halt its development.

As a final question in the interview, Luis was asked where he sees the artificial intelligence industry going. Prefacing his answer with the observation that, per the earlier discussion, people are investing in machine learning and not true artificial intelligence, Luis said that he is happy with the investment businesses are making in what they call AI. He believes these investments will help the technology's development stay around for a minimum of four years.

Once we can stop comparing humans to artificial intelligence, Luis believes we will see great advancements in what AI can do. He believes AI has the power to work alongside humans to unlock knowledge and tasks that we weren't previously able to do. And he doesn't believe the point when this happens is far away; we are getting closer to it every day.

Many of Luis's ideas run contrary to the popular beliefs of many people interested in the world of artificial intelligence. At the same time, he presents these ideas in a very logical manner, and they are thought-provoking. Only time will tell whether he is right and where his ideas lead.
