
Interest in machine learning and AI up, though slowing, one platform reports – HR Dive

Dive Brief:

As technologies such as AI and machine learning revolutionize the workplace, learning and development is coming to the forefront of talent management. Preparing workers for AI and automation will lead learning trends in 2020, according to a November 2019 Udemy report. While many workplaces will train employees to sharpen their tech skills, the report said, learning professionals will also need to focus on soft skills and skills related to project management, risk management and change management.

About 120 million workers around the world will need access to retraining opportunities, a need at least partly driven by AI and automation, according to a report from IBM. However, this need vastly outpaces the capacity of most organizations to provide such retraining.

Platforms such as O'Reilly may aid in filling this gap. Third-party training programs are growing in popularity with seemingly positive results. Managers may prefer coders with training from a boot camp, for example, a recent report from HackerRank found. But there has been at least one report that an external L&D program has overstated its results; New York Magazine reported that Lambda School, "a 'boot camp' for people who want to quickly learn how to code," has inflated the number of job placements secured by its graduates.

Originally posted here:
Interest in machine learning and AI up, though slowing, one platform reports - HR Dive

Read More..

How AI and Machine Learning is Transforming the Organic Farming Industry | Quantzig’s Recent Article Offers Detailed Insights – Business Wire

LONDON--(BUSINESS WIRE)--Quantzig, a leading analytics advisory firm that delivers customized analytics solutions, has announced the completion of its new article that illustrates the role of big data in smart farming. This article also sheds light on the importance of analytics and machine learning in driving improvements within the organic farming industry.

The growing use of commercial farming methodologies has caused a major tradeoff in the quality of the food being produced. Large-scale farming practices combined with unstructured supply chain networks have not only diminished nutrition but have also led to a rise in waste, food safety risks, and contamination risks. Today, big data in smart farming has gained immense popularity as it plays a crucial role in driving improvements across segments. However, with access to several data sets from disparate sources, data management has become a major concern for players in the organic farming industry. We at Quantzig understand the challenges facing this sector, which is why we've developed solutions that focus on leveraging big data in smart farming. Big data in smart farming also plays a key role in helping businesses build a data-driven business culture by offering data-driven insights to various segments within the organization.

At Quantzig, we believe that organizations must leverage big data analytics to tackle the complexities in today's data sets. Request a FREE proposal for in-depth insights on our big data analytics solutions portfolio.

Benefits of Leveraging Big Data in Smart Farming

Adhere to food safety regulations

Big data in smart farming empowers businesses to prevent food spoilage and maintain food quality by enabling them to instantly access details on food contamination. The collection and analysis of data that offers insights into humidity, temperature, and chemicals will also help them gain a better picture of the quality of the food being produced.

Big data in smart farming plays a crucial role in helping businesses identify and capitalize on new opportunities. Request a free demo to know more about our big data analytics solutions.

Improve operations and equipment management

Applying big data-based insights to improve machine performance can help the organic farming industry to effectively track and monitor issues around machine failure and downtime. This not only aids in the smooth delivery of the produce but helps prevent issues that may hinder their ability to drive performance.

We can help you drive profitable growth by guiding you throughout your analytics journey. Contact our analytics experts to learn more about big data in smart farming.

Predict yield by analyzing several key factors

Big data within the organic farming industry acts as a powerful tool that helps transform every facet of an organization. It also enables businesses to predict their yield by deploying mathematical models and machine learning to analyze data around leaf biomass index, chemicals, and weather conditions.
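As a rough illustration of the kind of yield model described above, the sketch below trains a regression on synthetic data with invented features (leaf biomass index, rainfall, temperature, soil nitrogen); it is not Quantzig's methodology, just a minimal example of predicting yield from such inputs.

```python
# Minimal sketch of a yield-prediction model; all features, units and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(seed=42)
n_fields = 500

# Synthetic per-field measurements (assumed units in comments).
leaf_biomass_index = rng.uniform(0.2, 0.9, n_fields)   # unitless index
rainfall_mm = rng.uniform(200, 800, n_fields)           # season total, mm
mean_temp_c = rng.uniform(12, 28, n_fields)             # degrees Celsius
soil_nitrogen_ppm = rng.uniform(5, 40, n_fields)        # parts per million

# Invented ground truth: yield in tonnes per hectare with some noise.
yield_t_ha = (
    4.0 * leaf_biomass_index
    + 0.003 * rainfall_mm
    + 0.05 * mean_temp_c
    + 0.02 * soil_nitrogen_ppm
    + rng.normal(0, 0.3, n_fields)
)

X = np.column_stack([leaf_biomass_index, rainfall_mm, mean_temp_c, soil_nitrogen_ppm])
X_train, X_test, y_train, y_test = train_test_split(X, yield_t_ha, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, preds):.2f} t/ha")
```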

Embedding big data analytics into your business processes can be quite challenging, but it helps you maximize the value of data by analyzing data from various sources to create customized reports. The goal is to help decision-makers within the organic farming sector understand and analyze data, make crucial decisions, and receive warnings when changes exceed a defined threshold.

Would you like to learn more about the future of the organic farming industry? Read the complete article here: https://bit.ly/386Kqdb

About Quantzig

Quantzig is a global analytics and advisory firm with offices in the US, UK, Canada, China, and India. For more than 15 years, we have assisted our clients across the globe with end-to-end data modeling capabilities to leverage analytics for prudent decision making. Today, we serve 120+ clients, including 45 Fortune 500 companies. For more information on our engagement policies and pricing plans, visit: https://www.quantzig.com/request-for-proposal

Original post:
How AI and Machine Learning is Transforming the Organic Farming Industry | Quantzig's Recent Article Offers Detailed Insights - Business Wire

Read More..

DrChrono teams up with Cognition IP on machine learning patents, insurer deal brings Pager to South America and more digital health deals -…

DrChrono, a digital health company that works with EHRs, is teaming up with Cognition IP to help it draft five patent applications. The new patents will primarily be focused on machine learning technology. The pair are also going to be working on pushing through two patents that have been stalled for years.

"It was crucial to work with a partner that had a deep understanding of healthcare and machine learning," Daniel Kivatinos, cofounder and COO of DrChrono, said in a statement. "We wanted to ensure that our intellectual property was patented and protected, and Cognition IP helped us do just that by successfully expanding our portfolio. Investing in machine learning is critical to the future of our healthcare platform, used by thousands of medical practices."

Virtual care company Pager announced that it is launching in South America. It inked a deal with insurer Seguros SURA in Colombia. As part of the deal, Pager will provide SURA's members with a virtual care team. Patients will be able to chat, call and video chat with their doctor or care team. The services include triage, telemedicine, appointment setup, transportation and follow-up care. The service is already available in all 50 US states, but this deal marks the company's first international launch.

"Convenient and cost-effective access to healthcare is a global issue," Walter Jin, chairman and CEO of Pager, said in a statement. "The partnership between Seguros SURA Colombia's efficient health clinics and Pager's technology platform will showcase the next generation model for the future of healthcare. The Pager and Seguros Sura Colombia collaboration shows that our consumer-first approach to healthcare transcends geographical borders."

Yale School of Medicine is teaming up with Foretell Reality in an effort to measure the effects of VR therapy on cancer patients. Specifically, the group is looking at the levels of anxiety and depression in cancer patients.

"A major factor in this study is the convenience for patients to enter a VR-based chat room, something that is especially useful for people with rare diseases or those who live in rural areas," Dr. Asher Marks, assistant professor of pediatrics (Hematology/Oncology) and director of pediatric Neuro-Oncology, said in a statement. "The VR technology offered by Foretell Reality allows users to jointly partake as avatars in a shared experience which cannot be replicated over a conference call or video chat. Additionally, patients in the study are presented the option to remain anonymous during the VR group session, giving them a unique opportunity to communicate with others in a way they may otherwise not be comfortable with."

UPMC Health Plan will be employing VirtualHealth's HELIOS care management and coordination platform for one of its programs focused on long-term services and supports for patients eligible for both Medicare and Medicaid, the companies announced. The tool automates caseload assignments and task management for care managers, while also assisting with assessment, planning and authorization.

"Forward-looking health plans understand that HELIOS will position them for success through a true whole-person, member-centered approach to care management," Adam Sabloff, CEO and founder of VirtualHealth, said in a statement. "We are honored that UPMC Health Plan plans to utilize HELIOS to enhance its members' care and experiences in its Community HealthChoices program."

Read more here:
DrChrono teams up with Cognition IP on machine learning patents, insurer deal brings Pager to South America and more digital health deals -...

Read More..

Machine Learning on the Edge, Hold the Code – Datanami

(Dmitriy Rybin/Shutterstock)

Many companies are scrambling to find machine learning engineers who can build smart applications that run on edge devices, like mobile phones. One company that's attacking the problem in a broad way is Qeexo, which sells an AutoML platform for building and deploying ML applications to microcontrollers without writing a line of code.

Qeexo emerged from Carnegie Mellon University in 2012, just at the dawn of the big data age. According to Sang Won Lee, the company's co-founder and CEO, the original plan called for Qeexo to be a machine learning application company.

The company landed a big fish, the Chinese mobile phone manufacturer Huawei, right out of the gate. Huawei liked the ML-based finger-gesture application that Qeexo (pronounced Key-tzo) developed, and the company wanted Qeexo to ensure that it could run across all of its phone lines. That was a good news-bad news situation, Lee says.

"Our first commercial implementation with Huawei kept the whole company in China for two months, to finish one model with one hardware variant," Lee tells Datanami. "We came back and it was difficult to keep the morale high for our ML engineers because nobody wanted to constantly go abroad to do this type of repetitive implementation."

Qeexo's AutoML solution handles many aspects of ML model development and deployment for customers

It quickly dawned on Lee that, with more ML models and more hardware types, the amount of manual work would quickly get out of hand. That led him to the idea of developing an automated machine learning, or AutoML, platform that could automatically generate ML models based on the data presented to it and automatically flash them to a group of pre-selected microcontrollers.

Lee and his team of developers, which is led by CTO and co-founder Chris Harrison (who is an assistant professor at Carnegie Mellon University), developed the offering nearly five years ago, and the company has been using it ever since for its own ML services engagements.

Huawei continues to utilize Qeexo's AutoML solution to generate ML applications for its handsets. "In 2018, we completed 57 projects for Huawei, and most of the projects were completed by our field engineers just using the AutoML platform, without the help of ML engineers in the US," Lee says.

Sang Won Lee is the co-founder and CEO of Qeexo

In October 2019, Qeexo released its AutoML offering as a stand-alone software product. The product automates many steps in the ML process, from building models from collected data and comparing the performance of those models to deploying the finished model to a microcontroller, all without requiring the user to write any code.

The offering has built-in support for the most popular ML algorithms, including random forest, gradient boosted machine, and linear regression, among others. Users can also select deep learning models, like convolutional neural networks, but many microcontrollers lack the memory to handle those libraries, Lee says.

Qeexo's AutoML solution automatically handles many of the engineering tasks that would otherwise require the skills of a highly trained ML engineer, including feature selection and hyperparameter optimization. These features are built into the Qeexo offering, which also sports a built-in C compiler and generates binary code that can be deployed to microcontrollers, such as those from Renesas Electronics.
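To make the idea concrete, here is a generic sketch of the kind of work an AutoML pipeline automates: trying several candidate algorithm families (the article mentions random forest, gradient boosted machines and linear models; logistic regression stands in here for the linear family since the toy task is classification), tuning hyperparameters, and keeping the best performer. This is plain scikit-learn on synthetic data, not Qeexo's product or API.

```python
# Conceptual AutoML-style model search; datasets and parameter grids are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "random_forest": (RandomForestClassifier(random_state=0),
                      {"n_estimators": [50, 200], "max_depth": [4, None]}),
    "gradient_boosting": (GradientBoostingClassifier(random_state=0),
                          {"learning_rate": [0.05, 0.1], "n_estimators": [100, 200]}),
    "logistic_regression": (LogisticRegression(max_iter=1000),
                            {"C": [0.1, 1.0, 10.0]}),
}

best_name, best_score, best_model = None, -1.0, None
for name, (estimator, grid) in candidates.items():
    search = GridSearchCV(estimator, grid, cv=5)   # hyperparameter optimization
    search.fit(X_train, y_train)
    score = search.score(X_test, y_test)
    print(f"{name}: held-out accuracy {score:.3f}")
    if score > best_score:
        best_name, best_score, best_model = name, score, search.best_estimator_

print(f"Selected model: {best_name} ({best_score:.3f})")
```

A platform like the one described would then go further, converting the selected model into compact binary code suitable for a microcontroller.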

Lee says ML engineers might be able to get a little more efficiency by developing their own ML libraries, but that it won't be worth the effort for many users. "There are always more improvements that you can get with ML experts digging into it and doing the research," he says. "But this is giving you the convenience of being able to build a commercially viable solution without having to write a single line of code."

Today Qeexo announced its new AWS solution. Instead of training a model on a laptop, customers can now use AWS resources to train their models. It also announced more ML algorithms, including deep learning algorithms and traditional algorithms. The visualizations that Qeexo provides have also been enhanced to give the user the ability to better spot outliers and trends in data. Support for microphone data has been added. And it also added support for the Renesas RA Family of Cortex-M MCUs, which are geared toward low-power IoT edge devices.

Having Huawei as a client certainly gives Qeexo some experience with scalability under its belt. But the Mountain View, California-based company is bullish on the potential for a new class of application developers to get started using its software to imbue everyday devices with the intelligence of data.

"What we really want to tell the market is that even for those microcontrollers that are already out and that have very limited memory resource and processing power, you can still have a commercially viable ML solution running on it, if you use the right tool," Lee says. "You don't want to neglect all the sensor data that's connected to the microcontroller. We can provide a tool that you can use to build intelligence that can be embedded into those tools."

Related Items:

On the Radar: Cortex Labs, RadicalBit, Qeexo

AutoML Tools Emerge as Data Science Difference Makers

Cloud Tools Rev Up AI Dev Platforms

Read more from the original source:
Machine Learning on the Edge, Hold the Code - Datanami

Read More..

Using machine learning to solve real-world data problems for scientists and engineers – WSU News

Graduate students Aryan Deshwal and Syrine Belakaria presenting at the NeurIPS conference.

Many researchers in artificial intelligence and machine learning aim to develop computer programs that can sift through huge amounts of data, learn from it, and guide future decisions.

But what if the many data options are hugely expensive and difficult to acquire, and one has to decide which data is best to spend money on? What direction should scientists take with their next experiment?

A WSU research team is taking a different angle in machine learning research, one they say can be of great practical use, especially to engineers and scientists.

Computer science graduate students Syrine Belakaria and Aryan Deshwal recently presented their research at major international artificial intelligence and machine learning conferences, including the 2019 Conference on Neural Information Processing Systems (NeurIPS) in Vancouver, Canada and the 2020 Association for the Advancement of Artificial Intelligence Conference in New York. The NeurIPS conference is the premier machine learning conference in the world with more than 14,000 attendees. Belakaria and Deshwal are advised by Jana Doppa, the George and Joan Berry Chair Assistant Professor in WSU's School of Electrical Engineering and Computer Science.

The group will also present their collaborative work with computer engineering researchers at the Design, Automation and Test in Europe Conference (DATE-2020) in Grenoble, France.

The group's research is based on Doppa's 2019 NSF Early Career Award and focuses on developing general-purpose learning and reasoning computer algorithms to support engineers and scientists in their efforts to optimize the way they conduct complex experiments. They are working to combine domain knowledge from engineers and scientists with data from past experiments to select future experiments, so that researchers can minimize the number of experiments needed to find near-optimal designs.

Doppa's team has analyzed and experimentally evaluated the algorithms for diverse applications in electronic design automation, such as analog circuit design, manycore chip design, and tuning compiler settings, and in materials science, such as designing shape memory alloys and piezoelectric materials. They also proposed two algorithms to optimize multiple objectives with minimal experiments and have developed the first theoretical analysis for the multi-objective setting. They also developed a novel learning-to-search framework to optimize combinatorial structures, which is very challenging when compared to continuous design spaces.

"The common theme behind this work is better uncertainty management to select the sequence of experiments," Doppa said.
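To illustrate that idea, here is a minimal sketch of uncertainty-driven experiment selection in the general spirit described above, not the group's published algorithms: fit a Gaussian process surrogate to the experiments run so far, then pick the next design point with an acquisition rule that balances predicted value and uncertainty. The objective function and all settings are invented stand-ins for a costly experiment.

```python
# Toy sequential experiment selection with a Gaussian process surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_experiment(x):
    # Stand-in for a costly lab or simulation run (invented objective).
    return -(x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, 4).reshape(-1, 1)          # a few initial experiments
y_obs = expensive_experiment(X_obs).ravel()

candidates = np.linspace(0, 1, 200).reshape(-1, 1)   # possible next designs

for step in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
    gp.fit(X_obs, y_obs)
    mean, std = gp.predict(candidates, return_std=True)

    # Upper confidence bound: favour points that are promising and/or uncertain.
    ucb = mean + 2.0 * std
    x_next = candidates[np.argmax(ucb)]

    X_obs = np.vstack([X_obs, x_next])
    y_obs = np.append(y_obs, expensive_experiment(x_next))

print(f"Best design found: x = {X_obs[np.argmax(y_obs)][0]:.3f}, "
      f"value = {y_obs.max():.4f}")
```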

While Belakaria and Deshwal have contributed important research innovations in the field, they also have gained valuable learning opportunities during their studies. While at NeurIPS, the students had the chance to network with leaders in the field of machine learning as well as attend a special session for women and those who are underrepresented in the field. The session had more than one thousand attendees.

The conference gave the students a chance to see the real-world applications of AI, said Deshwal, and to meet professionals who are using machine learning to solve challenges in medical and science fields. A large number of prominent companies, such as Uber, Google, Facebook, and Amazon, sent representatives to the conference and hosted events.

Belakaria, who is originally from Tunisia, said it was amazing to attend a roundtable and sit down with female leaders in the explosive and competitive field. She appreciated getting advice on how many women in the field are finding success while balancing their work and personal lives.

Both she and Deshwal expressed appreciation for a supportive lab that has provided mentoring and has encouraged their growth and success.

"Research is as emotional as it is academic, and our camaraderie helps us a lot," said Deshwal, originally from India. "When you have a field that is moving so quickly, such as machine learning, having people who are supportive is so important."

"Being in a community where you feel safe, respected, and valued for your scientific contribution is very crucial for women in science. We feel welcome in the machine learning community," added Belakaria.

With WSU since 2014, Doppa is part of a major expansion on the part of the university to meet the growing demands in the fields of electrical engineering and computer science. Since 2015, research expenditures in WSU's School of Electrical Engineering and Computer Science have nearly doubled, as have the number of graduates from its computer science program.

See the original post:
Using machine learning to solve real-world data problems for scientists and engineers - WSU News

Read More..

Stockholm-based medtech AlgoDx raises €600K to save lives with its machine learning diagnostics – EU-Startups

Swedish startup AlgoDx, which focuses on supporting disease detection and prediction with machine learning algorithms, has closed a €600K seed round. The round was led by Nascent Invest, with participation from angel investors Fredrik Sjödin and Tomas Mora-Morrison, co-founder of Cambio Healthcare Systems.

Founded in 2018, AlgoDx uses artificial intelligence and machine learning to increase efficiency and save time for healthcare professionals. Today, many healthcare processes require manual input and analysis, which is time-consuming, expensive and leaves room for human error. In sepsis treatment, for example, the time factor is critical, as the cornerstones of intervention are early and appropriate antibiotics together with source control and fluid administration. Current detection methods for sepsis are incapable of early prediction. That's where AlgoDx comes in.

AlgoDx's first product, ExPRESS, has been developed to autonomously predict sepsis in hospitalized patients using data from electronic healthcare records. Reliable early prediction can mean the difference between life and death for patients that develop sepsis.

"We invested in AlgoDx because we believe that the team has a strong competitive edge within machine learning and a profound understanding of the clinical validation required to bring products to market in areas with unmet medical need," says Erik Gozzi, CEO at Nascent Invest.

The fresh funds will be used to further develop its first product ExPRESS, specifically by further scaling clinical validation and showing the benefits of autonomous sepsis risk monitoring in patients being treated at Intensive Care Units.

"This seed round will allow us to continue the clinical validation of our sepsis prediction algorithm as planned. We are very proud to be supported by investors with a commercial outlook and a long-term investment horizon," says David Becedas, CEO at AlgoDx.

"We are at the commencement of a new age where machine learning approaches will enable earlier and more accurate detection and prediction of disease. The founding team at AlgoDx understands that clinical rigor is essential in order to bring machine learning solutions to market with integrations into electronic healthcare record systems," says Tomas Mora-Morrison, who will also chair the company's new board.

See the original post:
Stockholm-based medtech AlgoDx raises €600K to save lives with its machine learning diagnostics - EU-Startups

Read More..

Artificial intelligence and machine learning for data centres and edge computing to feature at Datacloud Congress 2020 in Monaco – Data Economy

Vertiv EMEA president Giordano Albertazzi looks back on data center expansion in the Nordics and the region's role as an efficient best execution venue for the future.

At the start of the new year it's natural to look to the future. But it's also worth taking some time to think back to the past.

Last year was not only another period of strong data center growth globally, and in the Nordic region specifically, but also the end of a decade of sustained digital transformation.

There have been dramatic shifts over the last ten years, but the growth in hyperscale facilities is one of the most defining and one with which the Nordic region is very well acquainted.

According to figures from industry analysts Synergy Research, the total number of hyperscale sites has tripled since 2013 and there are now more than 500 such facilities worldwide.

And it seems that growth shows no signs of abating. According to Synergy, in addition to the 504 current hyperscale data centers, a further 151 are at various stages of planning or building.

A good number of those sites will be sited in the Nordics if recent history is anything to go by. The region has already seen significant investment from cloud and hyperscale operators such as Facebook, AWS and Apple. Google was also one of the early entrants and invested $800 million in its Hamina, Finland facility in 2010. It recently announced plans to invest a further $600 million in an expansion of that site.

I was lucky enough to speak at the recent DataCloud Nordics event at the end of last year. My presentation preceded Google's country manager, Google Cloud, Denmark and Finland, Peter Harden, who described the company's growth plans for the region. Hamina, Finland is one of Google's most sustainable facilities thanks in no small part to its Nordics location, which enables 100% renewable energy and innovative sea water cooling.

Continuing that theme of sustainability, if the last decade has been about keeping pace with data demand, then the next ten years will be about continued expansion but, importantly, efficient growth in the right locations, using the right technology and infrastructure. The scale of growth being predicted (billions of new edge devices, for example) will necessitate a sustainable approach.

That future, we at Vertiv and others believe, will be based around putting workloads where they make most sense from a cost, risk, latency, security and efficiency perspective. Or, as industry analysts 451 Research put it: the Best Execution Venue (BEV), a slightly unwieldy term but an accurate one. BEV refers to the specific IT infrastructure an app or workload should run on (cloud, on-premise or at the edge, for example) but could equally apply to the geographic location of data centers.

In that BEV future, the Nordics will become increasingly important for hosting a variety of workloads, but the sweet spot could be those that are less latency sensitive (high-performance compute, or HPC, for example) and can therefore benefit from the stable, renewable and cheap power as well as the abundance of free cooling. Several new sub-sea cables coming online in the near future will also address some of the connectivity issues the region has faced.

A recent study by the Nordic Council of Ministers estimates that approximately EUR 2.2bn has been invested in the Nordics in initiated data centre construction works over the last 12 to 18 months (2018), mainly within hyperscale and cloud infrastructure. This number could exceed EUR 4bn annually within the next five to seven years because of increasing market demand and a pipeline of planned future projects.

Vertiv recently conducted some forward-looking research that appears to reinforce the Nordics' future potential. Vertiv first conducted its Data Center 2025 research back in 2014 to understand where the industry thought it was headed. In 2019, we updated that study to find out how attitudes had shifted in the intervening five years, a halfway point, if you will, between 2014 and 2025.

The survey of more than 800 data center experts covers a range of technology areas, but let's focus on a few that are important and relevant to the Nordics.

We mentioned the edge a little earlier when talking about BEV. Vertiv has identified four key edge archetypes that cover the edge use cases that our experts believe will drive edge deployments in the future. According to the 2025 research, of those participants who have edge sites today, or expect to have edge sites in 2025, 53% expect the number of edge sites they support to grow by at least 100%, with 20% expecting an increase of 400% or more.

So along with providing a great venue for future colo and cloud growth, the Nordics, like other regions, is also likely to see strong edge growth. That edge demand will require not only new data center form-factors such as prefabricated modular (PFM) data center designs but also monitoring and management software and specialist services.

Another challenge around edge compute, and the core for that matter, is energy availability and, increasingly, access to clean, renewable energy.

The results of the 2025 research revealed that respondents are perhaps more realistic and pragmatic about the importance of and access to clean power than back in 2014. Participants in the original survey projected 22% of data center power would come from solar and an additional 12% from wind by 2025. That's a little more than one-third of data center power from these two renewable sources, which seemed like an unrealistic projection at the time.

This year's numbers for solar and wind (13% and 8% respectively) seem more realistic. However, importantly for Nordic countries with an abundance of hydropower, participants in this year's survey expect hydro to be the largest energy source for data centers in 2025.

The Data Center 2025 research also looked at one of the other big drivers for building capacity in the Nordics: access to efficient cooling.

According to the 2025 survey, around 42% of respondents expect future cooling requirements to be met by mechanical cooling systems. Liquid cooling and outside air also saw growth, from 20% in 2014 to 22% in 2019, likely driven by the more extreme rack densities being observed today. This growth in the use of outside air obviously benefits temperate locations like the Nordics.

In summary, if the last ten years have been about simply keeping up with data center demand, the next ten years will be about adding purposeful capacity in the most efficient, sustainable and cost-effective way: the right data center type, thermal and power equipment, and location for the right workloads.

If the past is anything to go by, the Nordics will have an important role to play in that future.

The rest is here:
Artificial intelligence and machine learning for data centres and edge computing to feature at Datacloud Congress 2020 in Monaco - Data Economy

Read More..

Sony Music and The Orchard held a machine-learning hackathon – Music Ally

Sony Music, The Orchard and Amazon Web Services held a machine-learning focused hackathon last week. Music Ally couldn't make the event, so we asked if Sony could give us some information about the winning hacks. The label group came back with a full report on the event, quotes 'n' all. So, in its own words, here's what went down at Music ML 2020.

Music ML 2020 is the second annual hackathon held by Sony Music, The Orchard and Amazon Web Services (AWS) and the first to be expanded out of the US. The Orchard office hosted the New York competition and Sony Music simultaneously hosted in London over the course of three days. The hackathon's primary purpose is to allow cross-functional teams of industry professionals to create business solutions at the intersection of music and machine-learning.

Each team was tasked with creating a proof-of-concept for a new machine-learning solution that would positively impact the business. At The Orchard, teams worked daily to connect fans to songs, albums, and videos from their favourite artists across the globe.

The hackathon focused on improving the quality of content releases and making them more discoverable, using machine learning as a toolset to support creative decisions. The opening keynote by Jacob Fowler, Chief Technology Officer at The Orchard, challenged competitors to inspire, create, and push the boundaries of innovation.

In each location the teams comprised technical staff from Sony Music, The Orchard UK, and solutions architects from AWS. BRIT-nominated producer DJ Fresh, who also works as a machine-learning engineer, joined the British hackathon to lend his expertise. On the competition's final day, the UK teams presented their projects, followed by the New York teams.

Five executives made up the UK judging panel. From Sony Music UK: Cassandra Gracey, President, 4th Floor Creative Group; Michael Hanson, Head of Digital, Columbia Records; and Olivier Parfait, Director, Global Business Development & Digital Strategy. From The Orchard UK: Chris Manning, General Manager, UK & EU, and Joe Andrews, Senior Director, International Sales and Marketing.

The US judges were Jacob Fowler, CTO, The Orchard; Rachel Stoewer, VP, Artist and Label Services, The Orchard; Devki Patel, Strategy and Finance; and Chris Frankenberg, VP of Emerging Technology, Global Digital Business, Sony Music.

In London the winning team designed and trained a machine learning model to understand and monitor user behaviour and interactions with brands and artists. This concept put a powerful and user-friendly tool in the hands of marketers, empowering them to rapidly create retargeting lists comprised of high-value users.

This is particularly valuable for breaking new artists, as the model can identify high-value users with only a small amount of input data. The tool can also be used by sync and brand partnership teams, enabling them to actively pitch brands on potential deals.
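As a purely hypothetical illustration of that retargeting idea (not the winning team's actual hack), the sketch below scores users from a handful of invented interaction signals and exports the highest-scoring ones as a retargeting list; every field name, threshold and value is made up.

```python
# Toy high-value-user scoring for a retargeting list; all data is invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Per-user engagement with an artist's content (synthetic).
interactions = pd.DataFrame({
    "user_id":      [1, 2, 3, 4, 5, 6],
    "streams_30d":  [40, 2, 15, 0, 60, 5],
    "video_views":  [10, 0, 4, 1, 12, 0],
    "merch_clicks": [3, 0, 1, 0, 5, 0],
    "converted":    [1, 0, 1, 0, 1, 0],   # past label: bought tickets/merch
})

features = ["streams_30d", "video_views", "merch_clicks"]
model = LogisticRegression().fit(interactions[features], interactions["converted"])

# Score the user pool and keep the high-value ones for retargeting.
interactions["score"] = model.predict_proba(interactions[features])[:, 1]
retargeting_list = interactions.loc[interactions["score"] > 0.7, "user_id"].tolist()
print("Retargeting list:", retargeting_list)
```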

In New York, the winning team produced a model that uses machine learning to auto-generate artwork for artist and label merchandising. This innovative design created interesting and creative opportunities for artists and labels to utilise existing content on various platforms.

Lewis Donovan, lead web developer & Music ML project lead for Sony Music UK, said:

After three intense days in London the teams produced a tool that analyses data to inform A&R decisions, a spike analysis tool and an artist/brand/user affinity tool. Achieving this in just three days, by teams with varying degrees of coding experience, illustrates the energy, creativity and ambition for innovation here. Every participant in the UK event had the opportunity to collaborate with new people, learn new skills, and help create a new business application using machine-learning.

DJ Fresh, Ministry of Sound artist and machine learning engineer, said:

The way we consume media now is three dimensional, as an artist I want to learn how to use that palette. The last few years, being an all-or-nothing kind of guy, I devoted myself to AI and trained as a machine-learning engineer, so when Sony Music asked me to get involved I thought it sounded like an interesting challenge.

I've always been involved with tech; in the early 2000s I started a site that was the biggest music forum on the net at the time, a place for the drum and bass scene to network. Now I'm working on a new app called Golddust (Golddust.io), and with my new single Drive out now, it's a very creative time for me.

Cassandra Gracey, President, 4th Floor Creative and Music ML judge said:

I loved seeing what could be achieved in such a short space of time, looking at ways to boost our business using various machine learning applications and our owned data. I look forward to the winning team's programme being finished so I can start using it!

Sony also noted that the winning UK team included Sony Music UK's web developer Lewis Donovan, Francesca Lamaina and Josh Rubner; AWS' Andrew Morrow; and (via a Hamburg dial-in) The Orchard's Dino Celotti. The winning New York team was The Orchard's Anthony Khoudary, Jinny Yang, and Peter Iannone, and AWS team members Julia Soscia, Rahul Popat, and Alex Jestin Taylor.

Stuart Dredge


Read this article:
Sony Music and The Orchard held a machine-learning hackathon - Music Ally

Read More..

Keeping machine learning algorithms humble and honest in the ‘ethics-first’ era – TechNative

AI and machine learning (ML) applications have been at the centre of several high-profile controversies, witness the recent Apple Card credit limit differences and Amazon's recruitment tool bias.

Mind Foundry has been a pioneer in the development and use of humble and honest algorithms from the very beginning of its applications development. As Davide Zilli, Client Services Director at Mind Foundry, explains, baked-in transparency and explainability will be vital in winning the fight against biased algorithms and inspiring greater trust in AI and ML solutions.

Today in so many industries, from manufacturing and life sciences to financial services and retail, we rely on algorithms to conduct large-scale machine learning analysis. They are hugely effective for problem-solving and beneficial for augmenting human expertise within an organisation. But they are now under the spotlight for many reasons and regulation is on the horizon, with Gartner projecting four of the G7 countries will establish dedicated associations to oversee AI and ML design by 2023. It remains vital that we understand their reasoning and decision-making process at every step.

Algorithms need to be fully transparent in their decisions, easily validated and monitored by a human expert. Machine learning tools must introduce this full accountability to evolve beyond unexplainable black box solutions and eliminate the easy excuse of "the algorithm made me do it!"

Bias can be introduced into the machine learning process as early as the initial data upload and review stages. There are hundreds of parameters to take into consideration during data preparation, so it can often be difficult to strike a balance between removing bias and retaining useful data.

Gender for example might be a useful parameter when looking to identify specific disease risks or health threats, but using gender in many other scenarios is completely unacceptable if it risks introducing bias and, in turn, discrimination. Machine learning models will inevitably exploit any parameters such as gender in data sets they have access to, so it is vital for users to understand the steps taken for a model to reach a specific conclusion.
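A minimal sketch of the kind of check this calls for, assuming a tabular set of model predictions with a gender column (all data and column names below are invented): compare group-level prediction rates and accuracy to see whether the model is exploiting a sensitive parameter.

```python
# Toy group-level bias check on model predictions; data is invented.
import pandas as pd

results = pd.DataFrame({
    "gender":    ["F", "F", "F", "M", "M", "M", "M", "F"],
    "predicted": [1,   0,   0,   1,   1,   1,   0,   1],
    "actual":    [1,   0,   1,   1,   0,   1,   0,   1],
})

by_group = results.groupby("gender").agg(
    positive_rate=("predicted", "mean"),
    accuracy=("actual", lambda s: (s == results.loc[s.index, "predicted"]).mean()),
)
print(by_group)

# Flag a large gap in positive rates as a prompt for human review.
gap = by_group["positive_rate"].max() - by_group["positive_rate"].min()
if gap > 0.2:
    print(f"Warning: positive-rate gap of {gap:.2f} across gender groups")
```

A check like this is only a starting point for human review, not a full fairness audit, but it shows the sort of signal a transparent platform can surface during data preparation.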

Removing the complexity of the data science procedure will help users discover and address bias faster and better understand the expected accuracy and outcomes of deploying a particular model.

Machine learning tools with built-in explainability allow users to demonstrate the reasoning behind applying ML to tackle a specific problem, and ultimately justify the outcome. First steps towards this explainability would be features in the ML tool to enable the visual inspection of data, with the platform alerting users to potential bias during preparation, and metrics on model accuracy and health, including the ability to visualise what the model is doing.

Beyond this, ML platforms can take transparency further by introducing full user visibility, tracking each step through a consistent audit trail. This records how and when data sets have been imported, prepared and manipulated during the data science process. It also helps ensure compliance with national and industry regulations such as the European Union's GDPR "right to explanation" clause and helps effectively demonstrate transparency to consumers.
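A bare-bones sketch of what such an audit trail could look like in code, illustrative only and not any particular vendor's implementation: each data-preparation step is wrapped so that its name, timestamp, parameters, and row counts are recorded automatically.

```python
# Hypothetical audit trail for data-preparation steps; step names and data are invented.
import json
from datetime import datetime, timezone

audit_log = []

def audited(step_name):
    """Decorator that logs each data-preparation step with a timestamp."""
    def wrap(fn):
        def inner(rows, *args, **kwargs):
            out = fn(rows, *args, **kwargs)
            audit_log.append({
                "step": step_name,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "rows_in": len(rows),
                "rows_out": len(out),
                "params": dict(kwargs),
            })
            return out
        return inner
    return wrap

@audited("drop_missing_values")
def drop_missing(rows, required_field=None):
    return [r for r in rows if r.get(required_field) is not None]

raw = [{"age": 34, "income": 50000}, {"age": None, "income": 42000}]
clean = drop_missing(raw, required_field="age")
print(json.dumps(audit_log, indent=2))
```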

There is a further advantage here of allowing users to quickly replicate the same preparation and deployment steps, guaranteeing the same results from the same data, which is particularly vital for achieving time efficiencies on repetitive tasks. We find, for example, in the Life Sciences sector, users are particularly keen on replicability and visibility for ML, where it becomes an important facility in areas such as clinical trials and drug discovery.

There are so many different model types that it can be a challenge to select and deploy the best model for a task. Deep neural network models, for example, are inherently less transparent than probabilistic methods, which typically operate in a more honest and transparent manner.

Here's where many machine learning tools fall short. They're fully automated with no opportunity to review and select the most appropriate model. This may help users rapidly prepare data and deploy a machine learning model, but it provides little to no prospect of visual inspection to identify data and model issues.

An effective ML platform must be able to help identify and advise on resolving possible bias in a model during the preparation stage; provide support through to creation, where it will visualise what the chosen model is doing and provide accuracy metrics; and then on to deployment, where it will evaluate model certainty and provide alerts when a model requires retraining.

To build greater visibility into data preparation and model deployment, we should look towards ML platforms that incorporate testing features, where users can test a new data set and receive scores for the model's performance on it. This helps identify bias and make changes to the model accordingly.

During model deployment, the most effective platforms will also extract extra features from data that are otherwise difficult to identify and help the user understand what is going on with the data at a granular level, beyond the most obvious insights.

The end goal is to put power directly into the hands of the users, enabling them to actively explore, visualise and manipulate data at each step, rather than simply delegating to an ML tool and risking the introduction of bias.

The introduction of explainability and enhanced governance into ML platforms is an important step towards ethical machine learning deployments, but we can and should go further.

Researchers and solution vendors hold a responsibility as ML educators to inform users of the use and abuses of bias in machine learning. We need to encourage businesses in this field to set up dedicated education programmes on machine learning including specific modules that cover ethics and bias, explaining how users can identify and in turn tackle or outright avoid the dangers.

Raising awareness in this manner will be a key step towards establishing trust for AI and ML in sensitive deployments such as medical diagnoses, financial decision-making and criminal sentencing.

AI and machine learning offer truly limitless potential to transform the way we work, learn and tackle problems across a range of industries, but ensuring these operations are conducted in an open and unbiased manner is paramount to winning and retaining both consumer and corporate trust in these applications.

The end goal is truly humble, honest algorithms that work for us and enable us to make unbiased, categorical predictions and consistently provide context, explainability and accuracy insights.

Recent research shows that 84% of CEOs agree that AI-based decisions must be explainable in order to be trusted. The time is ripe to embrace AI and ML solutions with baked-in transparency.

Featured image: MaZi

See the rest here:
Keeping machine learning algorithms humble and honest in the 'ethics-first' era - TechNative

Read More..

A 5-Year Vision for Artificial Intelligence in Higher Ed – EdTech Magazine: Focus on Higher Education

The Historical Hype Cycle of AI

Before talking about the current and projected impact of AI in education and other industries, Ramsey explained the concept of the AI winter.

He showed a graph on the historical hype cycle of AI that featured peaks and drops over a 70-year period.

There was a big peak in the mid-1960s, when there was an emergence of symbolic AI research and new insights into the possibility of training two-layer neural networks. A resurgence came in the 1980s with the invention of certain algorithms for training three-plus layer neural networks.

The graph showed a drop in the mid-1990s, as the computational horsepower and data did not exist to develop real-world applications for AI, a situation he calls an AI winter. We are in the middle of another resurgence today, he said.

"There has been a huge increase in the amount of data and computer power that we have available, sparking research," Ramsey said. "People have been able to start inventing algorithms and training not just three-layer neural networks but a 100-layer one."

The question now is where we will go next, he said. His answer? We will sustain progress, leading to true or strong AI: the point at which a machine's intellectual capability is functionally equal to a human's.

"The number of researchers working on this, the amount of money that's being spent on this and the amount of research publications, it's all growing," he said. "And where Google is right now is on a plateau of productivity because we're using AI in everything that we do, at scale."

MORE ON EDTECH: Learn how data-powered AI tools are helping universities drive enrollment and streamline operations.

During his presentation, Ramsey showed an infographic that featured what machine learning could look like across a student's journey through higher education, starting from their college search and ending with employment.

For example, he said, colleges and universities can apply machine learning when targeting quality prospective students to attend their schools. They can even automate call center operations to make contacting prospective students more efficient and deploy AI-driven assistants to engage with applicants in a personalized way, he said.

Once students are enrolled, they can also use AI chatbots to improve student support services, assisting new students in their adjustment to college. They can leverage adaptive learning technology to predict performance as they choose a path through school, and they can tailor material to their knowledge levels and learning styles.

For example, a machine learning algorithm helped educators at Ivy Tech Community College in Indianapolis identify at-risk students and provide early intervention, Ramsey said.

Ivy Tech shifted to Google Cloud Platform, which allowed the school to manage 12 million data points from student interactions and develop a flexible AI engine to analyze student engagement and success. For instance, a student who stops logging in to their learning management system or showing up to class would be flagged as needing assistance.
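A simplified sketch of that kind of early-warning rule, not Ivy Tech's actual engine: flag students whose LMS activity or attendance drops below a threshold so that staff can follow up. The field names and thresholds below are invented.

```python
# Toy at-risk student flagging based on LMS logins and attendance; data is invented.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class StudentActivity:
    student_id: str
    last_lms_login: date
    classes_attended_last_2_weeks: int

def flag_at_risk(students, today, max_days_inactive=7, min_classes=3):
    """Return IDs of students who look disengaged and may need outreach."""
    flagged = []
    for s in students:
        inactive_days = (today - s.last_lms_login).days
        if inactive_days > max_days_inactive or s.classes_attended_last_2_weeks < min_classes:
            flagged.append(s.student_id)
    return flagged

today = date(2020, 2, 1)
students = [
    StudentActivity("s001", today - timedelta(days=2), 6),
    StudentActivity("s002", today - timedelta(days=12), 1),   # should be flagged
]
print(flag_at_risk(students, today))   # ['s002']
```

A production system would learn such thresholds and weights from historical outcomes rather than hard-coding them, which is where the machine learning comes in.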

The predictions were 83 percent accurate, Ramsey said. "It worked quite well, and they were actually able to save students from dropping out, which makes a big difference because their funding is based on how many students they have," he said.

As students near graduation and start their job searches, schools can also use AI to understand career trends and match them to a student's competencies and skills. Machine learning can be used to better understand job listings and a jobseeker's intent, matching candidates to their ideal jobs more quickly.

"At the end of the day, what we're doing with these technologies is trying to understand who we are and how our minds work," Ramsey said. "Once we fully understand that, we can build machines that function in the same way, and the possibilities are endless."

Excerpt from:
A 5-Year Vision for Artificial Intelligence in Higher Ed - EdTech Magazine: Focus on Higher Education

Read More..