Category Archives: Machine Learning
Machine learning and the power of big data can help achieve stronger investment decisions – BNNBloomberg.ca
Will machines rise against us?
Sarah Ryerson, President of TMX Datalinx, is certain we don't need to worry about that. And it's safe to say we can trust her opinion: data is her specialty, and she spent five years at Google before joining TMX.
She applies her experience on Bay Street by helping traders, investors and analysts mine the avalanche of data that pours out of TMX every day.
If information is power, what will we be doing with data in the future?
Ryerson has the answer, explaining that we will be mining data for patterns and signals that will help us draw new insights and allow us to make better investment decisions.
Ryerson is bringing real-time, historical and alternative data together for TMX clients. It's all about picking up the signals and patterns that the combined data set will deliver.
She also affirms that she is aiming to make this information more accessible. This will be done through platforms where investors can do their own analysis via easy-to-use distribution channels where they can get the data they want through customized queries. Ryerson notes, "Machine learning came into its own because we now have the computing power and available data for that iterate-and-learn opportunity."
Ryerson knows that for savvy investors to get ahead with algorithms, machine learning or artificial intelligence (AI), they need more than buy-and-sell data. This could be weather data, pricing data, sentiment data from social media or alternative data. "When you apply these techniques to the vast amounts of data we have, that's where we can derive new insights from combinations of data we haven't been able to analyze before."
One of the most important elements of AI that data scientists realize is that algorithms can't be black boxes. The analysts and investors using them need transparency to understand why an algorithm is advising them to buy, sell or hold.
Looking further into the future, Ryerson believes, "We will be seeing more data and better investment decisions because of the insights we're getting from a combined set of data."
That's a lot of data to dissect!
Go here to read the rest:
Machine learning and the power of big data can help achieve stronger investment decisions - BNNBloomberg.ca
Tying everything together: Solving a Machine Learning problem in the Cloud (Part 4 of 4) – Microsoft – Channel 9
This is the final instalment, part 4 of a four-part series that breaks up a talk that I gave at the Toronto AI Meetup. Part 1, Part 2 and Part 3 were all about the foundations of machine learning, optimization, models, and even machine learning in the cloud. In this video I show an actual machine learning problem (see the GitHub repo for the code) that does the job of distinguishing between tacos and burritos (an important problem, to be sure). The primary concept covered is MLOps, both on the machine learning side and on the delivery side, in Azure Machine Learning and Azure DevOps respectively.
Hope you enjoy the final video of the series, Part 4! As always, feel free to send feedback or add comments below if you have any questions. If you would like to see more of this style of content, let me know!
Continued here:
Tying everything together Solving a Machine Learning problem in the Cloud (Part 4 of 4) - Microsoft - Channel 9
Tip: Machine learning solutions for journalists | Tip of the day – Journalism.co.uk
Much has been said about what artificial intelligence and machine learning can do for journalism: from understanding human ethics to predicting when readers are about to cancel their subscriptions.
Want to get hands-on with machine learning? Quartz investigative editor John Keefe provides 15 video lessons taken from the 'Hands-on Machine Learning Solutions for Journalists' online class he led through the Knight Center for Journalism in the Americas. It covers all the techniques that the Quartz investigative team and AI studio commonly use in their journalism.
"Machine learning is particularly good at finding patterns and that can be useful to you when you're trying to search through text documents or lots of images," Keefe explained in the introduction video.
Want to learn more about using artificial intelligence in your newsroom? Join us on 4 June 2020 at our digital journalism conference, Newsrewired, at MediaCityUK, which will feature a workshop on implementing artificial intelligence into everyday journalistic work. Visit newsrewired.com for the full agenda and tickets.
If you like our news and feature articles, you can sign up to receive our free daily (Mon-Fri) email newsletter (mobile friendly).
More:
Tip: Machine learning solutions for journalists | Tip of the day - Journalism.co.uk
XMOS Appoints AI Professor and Turing Fellow Peter Flach as Special Advisor – Business Wire
BRISTOL, England--(BUSINESS WIRE)--XMOS, a company at the leading edge of the AIoT, today announces the appointment of Bristol University artificial intelligence (AI) professor and Turing fellow Peter Flach as special advisor.
An internationally renowned researcher in data science and machine learning, Professor Flach joins XMOS just after its announcement of xcore.ai, the world's first crossover processor, which enables device manufacturers to affordably build artificial intelligence into devices, with prices from just $1.
The launch of xcore.ai and the appointment of Flach mark a new phase in XMOS's business, as it looks to kick-start the $3 trillion artificial intelligence of things (AIoT) market with a disruptively economical platform.
Commenting on his appointment, Flach said: "XMOS is at the forefront of AI, making the technology available and affordable for the first time to almost every industry. The AIoT is one of the biggest opportunities device manufacturers have to differentiate, but it's not something they can do easily without companies like XMOS."
XMOS CEO Mark Lippett said: "Peter is one of the biggest names in artificial intelligence; there are few people more qualified than him on the subject. His knowledge of AI will be crucial for XMOS as we look to unlock the AIoT market with xcore.ai."
Professor Flach's current Google Scholar profile lists more than 300 publications that have accumulated over 11,000 citations and a Hirsch index of 51 (as of February 2020). He is the current editor-in-chief of the Machine Learning journal and publishes regularly in the leading data mining and AI journals, including Communications of the ACM, Data Mining and Knowledge Discovery, Machine Learning, and Neurocomputing. He is President of the European Association for Data Science.
He is also the author of Simply Logical: Intelligent Reasoning by Example and Machine Learning: The Art and Science of Algorithms That Make Sense of Data, which has to date sold over 15,000 copies and has established itself as a key reference in machine learning, with translations into Russian, Mandarin and Japanese.
About XMOS: XMOS stands at the intersection between voice processing, edge AI and the IoT (AIoT). XMOS's unique silicon architecture and differentiated software deliver class-leading voice-enabled solutions for AIoT applications.
Read more:
XMOS Appoints AI Professor and Turing Fellow Peter Flach as Special Advisor - Business Wire
Improving your Accounts Payable Process with Machine Learning in D365 FO and AX – MSDynamicsWorld.com
Everywhere you look there's another article written about machine learning and automation. You understand the concepts but aren't sure how it applies to your day-to-day job.
If you work with Dynamics 365 Finance and Operations or AX in a Finance or Accounts Payable role, you probably say to yourself, "There's gotta be a better way to do this." But with your limited time and resources, the prospect of modernizing your AP processes seems unrealistic right now.
If this describes you, then don't sweat! We've done all the legwork to bring machine learning to AP, specifically for companies using Dynamics 365 or AX.
To learn about our findings, join us on Wednesday, March 25th, at any of three times for our "Improving your Accounts Payable Process with Machine Learning" webinar.
Read more from the original source:
Improving your Accounts Payable Process with Machine Learning in D365 FO and AX - MSDynamicsWorld.com
Ads, Tweets And Vlogs: How Censorship Works In The Age Of Algorithms – Analytics India Magazine
Over the last seven days, online media moguls Facebook, YouTube and Twitter have been in the news for stifling content on their platforms.
While Facebook is removing the campaign ads of Donald Trump, YouTube has reportedly halved the number of conspiracy theory videos. Twitter, meanwhile, has resolved to tighten the screws on hate speech, or "dehumanising speech" as they call it.
In January 2019, YouTube said it would limit the spread of videos that could misinform users in harmful ways.
YouTube's recommendation algorithm follows a technique called Multi-gate Mixture-of-Experts (MMoE). Ranking with multiple objectives is a hard task, so the team at YouTube decided to mitigate the conflict between multiple objectives using MMoE.
This technique enables YouTube to improve the experience for billions of its users by recommending the most relevant video. Since the algorithm treats the type of content as an important factor, it becomes easier to classify a video as conspiratorial based on its title and context.
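To make the mechanics a little more concrete, here is a toy sketch of a multi-gate mixture-of-experts layer in plain NumPy: shared "expert" networks feed task-specific gates, and each objective gets its own mixture of the experts. The layer sizes, the two objectives (a click score and a satisfaction score) and every name in it are illustrative assumptions, not YouTube's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MMoE:
    """Toy Multi-gate Mixture-of-Experts: shared experts, one softmax gate per task."""
    def __init__(self, n_features, n_experts=4, expert_dim=16, n_tasks=2, seed=0):
        rng = np.random.default_rng(seed)
        # each expert is a single linear layer in this sketch
        self.experts = [rng.normal(0, 0.1, (n_features, expert_dim)) for _ in range(n_experts)]
        # one gating network per task, producing weights over the experts
        self.gates = [rng.normal(0, 0.1, (n_features, n_experts)) for _ in range(n_tasks)]
        # one output "tower" per task (e.g. click probability, satisfaction score)
        self.towers = [rng.normal(0, 0.1, (expert_dim, 1)) for _ in range(n_tasks)]

    def forward(self, x):
        # expert_out: (n_experts, batch, expert_dim)
        expert_out = np.stack([np.tanh(x @ w) for w in self.experts])
        task_scores = []
        for gate, tower in zip(self.gates, self.towers):
            g = softmax(x @ gate)                           # (batch, n_experts)
            mixed = np.einsum("be,ebd->bd", g, expert_out)  # task-specific mixture of experts
            task_scores.append(mixed @ tower)               # (batch, 1)
        return task_scores

# toy usage: rank five candidate videos by a weighted blend of the two objectives
x = np.random.default_rng(1).normal(size=(5, 32))  # 5 candidates, 32 features each
click, satisfaction = MMoE(n_features=32).forward(x)
ranking = np.argsort(-(0.5 * click + 0.5 * satisfaction).ravel())
print(ranking)
```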
After YouTube announced that it would recommend less conspiracy content, such recommendations dropped by about 70% at the lowest point in May 2019. They are now only 40% less common.
If your tweet is along the lines of any of the themes above, you might risk losing your account forever. Last week, Twitter officially announced that they had updated their policies from 2019.
The year 2019 was turbulent for Twitter. The firm's management faced a lot of flak for banning a few celebrities, such as Alex Jones, for their tweets. Many complained that Twitter had shown a double standard by banning an individual based on reports from a rival faction.
Vegans reported meat lovers. The left reported the right and so on and so forth. No matter what the reason was, at the end of the day, the argument boils down to the state of free speech in the digital era.
Twitter, however, has been eloquent about its initiatives in a blog post written last year, which was also updated yesterday. Here is how they claim their review system promotes healthy conversations:
Skimming over a million tweets a second is an exhausting task, so Twitter uses the same algorithms that detect spam.
"The same technology we use to track spam, platform manipulation and other rule violations is helping us flag abusive tweets to our team for review."
With a focus on reviewing this type of content, Twitter has expanded its teams in key areas and geographies to stay ahead and work quickly to keep people safe.
Twitter now offers an option to mute words of your choice, which removes any content containing those words from your feed.
Twitter has a gigantic task ahead, as it has to find a way between the relentless reporting of the easily offended and the inexplicable angst of the radicals.
From Cambridge Analytica to involvement in the Myanmar genocide to Zuckerberg's awkward Senate hearings, Facebook has been the most scandalous of all social media platforms in the past couple of years.
However, amid all this turbulence, Facebook's AI team kept delivering great innovations. They have also employed plenty of machine learning models to detect deepfakes, fake news, and fake profiles. When ML is classifying at scale, adversaries can reverse engineer features, which limits the amount of ground truth data that can be obtained. So, Facebook uses deep entity classification (DEC), a machine learning framework designed to detect abusive accounts.
The DEC system is responsible for the removal of hundreds of millions of fake accounts.
Instead of relying on content alone or handcrafting features for abuse detection in posts, Facebook uses an algorithm called temporal interaction embeddings (TIEs), a supervised deep learning model that captures static features around each interaction source and target, as well as temporal features of the interaction sequence.
However, producing these features is labour-intensive, requires deep domain expertise, and may not capture all the important information about the entity being classified.
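As a rough illustration of the general idea behind such interaction-sequence models, here is a hypothetical PyTorch sketch: each interaction combines static source/target features with temporal features, a recurrent layer summarises the sequence, and a linear head scores the account. The dimensions, the feature split and all names are assumptions for illustration, not Facebook's actual DEC or TIE architecture.

```python
import torch
import torch.nn as nn

class InteractionSequenceClassifier(nn.Module):
    """Toy model in the spirit of temporal interaction embeddings:
    each interaction = static source/target features + temporal features
    (e.g. time gap, hour of day); a GRU summarises the sequence and a
    linear head scores the account as abusive or not."""
    def __init__(self, static_dim=12, temporal_dim=4, hidden=32):
        super().__init__()
        self.embed = nn.Linear(static_dim + temporal_dim, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, static_feats, temporal_feats):
        # static_feats: (batch, seq_len, static_dim), temporal_feats: (batch, seq_len, temporal_dim)
        x = torch.relu(self.embed(torch.cat([static_feats, temporal_feats], dim=-1)))
        _, h = self.rnn(x)                       # h: (1, batch, hidden), a summary of the sequence
        return torch.sigmoid(self.head(h[-1]))   # probability the account is abusive

# toy usage: 8 accounts, each with a sequence of 20 interactions
model = InteractionSequenceClassifier()
p_abusive = model(torch.randn(8, 20, 12), torch.randn(8, 20, 4))
print(p_abusive.shape)  # torch.Size([8, 1])
```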
Last week, Facebook was accused of displaying inaccurate campaign ads from the team of US President Donald Trump. Facebook then started taking down the ads, which were categorised as spreading misinformation.
When it comes to the digital space, championing free speech is easier said than done. An allegation or a report is not always credible, and making sure an algorithm doesn't take down a harmless post is tricky.
Curbing free speech is curbing the freedom to think. Thought policing has been practised for ages through different means. Kings and dictators detained those who spread misinformation regardless of its veracity. However, spotting the perpetrator was not an easy task in the pre-Internet era. Things took a weird turn when the Internet became a household name. People now carry this great invention, which is packed meticulously into a palm-sized slim metal gadget.
The flow of information happens at lightning speed. GPS coordinates, likes, dislikes and various other pointers are continuously gathered and fed into massive machine learning engines working tirelessly to churn profits through customer satisfaction. The flip side is that these platforms have now become the megaphone of the common man.
Anyone can talk to anyone about anything. These online platforms can be leveraged for a reach that is unprecedented. People are no longer afraid of being banned from public rallies or other sanctions, as they can fire up their smartphone and start a Periscope session. So, any suspension from these platforms amounts almost to potential obscurity forever. Opinions, activism, even fame: everything gets erased. This leads to an age-old existential question of identity crisis, only this time it is brought on by an algorithm.
A non-human entity [an algorithm] classifying a human's act as dehumanising.
Does this make things worse or better? Or should we bask in the fact that we all would be served an equal, unbiased algorithmic judgement?
Machine learning models are not perfect. The results are only as good as the data, and the data can only be as true as the people who generate it. Monitoring billions of messages in the span of a few seconds is a great test of the social, ethical and, most importantly, computational abilities of these organisations. There is no doubt that companies like Google, Facebook and Twitter bear a responsibility that has never been bestowed upon any other company in the past.
"We also realise we don't have all the answers, which is why we have developed a global working group of outside experts to help us think."
The responsibilities are critical, the problems are ambiguous, and the solutions hinge on a delicate tightrope. Both the explosion of innovation and policies will have to converge at some point in the future. This will need a combined effort of man and machine as the future stares at us with melancholic indifference.
See the original post here:
Ads, Tweets And Vlogs: How Censorship Works In The Age Of Algorithms - Analytics India Magazine
Machine Learning Software Market Increasing Demand with Leading Player, Comprehensive Analysis, Forecast to 2026 – News Times
The report on the Machine Learning Software Market is a compilation of intelligent, broad research studies that will help players and stakeholders to make informed business decisions in the future. It offers specific and reliable recommendations for players to better tackle challenges in the Machine Learning Software market. Furthermore, it serves as a powerful resource providing up-to-date and verified information and data on various aspects of the Machine Learning Software market. Readers will be able to gain a deeper understanding of the competitive landscape and its future scenarios, crucial dynamics, and leading segments of the Machine Learning Software market. Buyers of the report will have access to accurate PESTLE, SWOT, and other types of analysis on the Machine Learning Software market.
The Global Machine Learning Software Market has been growing at a faster pace, with substantial growth rates over the last few years, and it is estimated that the market will grow significantly in the forecast period, i.e. 2019 to 2026.
Machine Learning Software Market: A Competitive Perspective
Competition is a major subject in any market research analysis. With the help of the competitive analysis provided in the report, players can easily study key strategies adopted by leading players of the Machine Learning Software market. They will also be able to plan counterstrategies to gain a competitive advantage in the Machine Learning Software market. Major as well as emerging players of the Machine Learning Software market are closely studied taking into consideration their market share, production, revenue, sales growth, gross margin, product portfolio, and other significant factors. This will help players to become familiar with the moves of their toughest competitors in the Machine Learning Software market.
Machine Learning Software Market: Drivers and Limitations
This section of the report explains the various drivers and restraints that have shaped the global market. The detailed analysis of market drivers enables readers to get a clear overview of the market, including the market environment, government policy, product innovation, development and market risks.
The research report also identifies the opportunities, obstacles, and challenges of the Machine Learning Software market. This information will help readers identify and plan strategies around the market's potential, and understand how companies can guard against its risks.
Machine Learning Software Market: Segment Analysis
The segmental analysis section of the report includes a thorough research study on key type and application segments of the Machine Learning Software market. All of the segments considered for the study are analyzed in quite some detail on the basis of market share, growth rate, recent developments, technology, and other critical factors. The segmental analysis provided in the report will help players to identify high-growth segments of the Machine Learning Software market and clearly understand their growth journey.
Ask for Discount @ https://www.marketresearchintellect.com/ask-for-discount/?rid=173628&utm_source=NT&utm_medium=888
Machine Learning Software Market: Regional Analysis
This section of the report contains detailed information on the market in different regions. Each region offers a different market size because each region has different government policies and other factors. The regions included in the report are North America, Europe, Asia Pacific, and the Middle East and Africa. Information about the different regions helps the reader to better understand the global market.
Table of Content
1 Introduction of Machine Learning Software Market
1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions
2 Executive Summary
3 Research Methodology of Market Research Intellect
3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources
4 Machine Learning Software Market Outlook
4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis
5 Machine Learning Software Market, By Deployment Model
5.1 Overview
6 Machine Learning Software Market, By Solution
6.1 Overview
7 Machine Learning Software Market, By Vertical
7.1 Overview
8 Machine Learning Software Market, By Geography
8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East
9 Machine Learning Software Market Competitive Landscape
9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies
10 Company Profiles
10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments
11 Appendix
11.1 Related Research
Request Report Customization @ https://www.marketresearchintellect.com/product/global-machine-learning-software-market-size-forecast/?utm_source=NT&utm_medium=888
About Us:
Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.
Contact Us:
Mr. Steven Fernandes
Market Research Intellect
New Jersey (USA)
Tel: +1-650-781-4080
Email: [emailprotected]
See the original post:
Machine Learning Software Market Increasing Demand with Leading Player, Comprehensive Analysis, Forecast to 2026 - News Times
3 important trends in AI/ML you might be missing – VentureBeat
According to a Gartner survey, 48% of global CIOs will deploy AI by the end of 2020. However, despite all the optimism around AI and ML, I continue to be a little skeptical. In the near future, I don't foresee any real inventions that will lead to seismic shifts in productivity and the standard of living. Businesses waiting for major disruption in the AI/ML landscape will miss the smaller developments.
Here are some trends that may be going unnoticed at the moment but will have big long-term impacts:
Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services.
With ML solutions becoming more demanding in nature, the number of CPUs and the amount of RAM are no longer the only ways to speed up or scale. More algorithms are being optimized for specific hardware than ever before, be it GPUs, TPUs, or Wafer Scale Engines. This shift towards more specialized hardware to solve AI/ML problems will accelerate. Organizations will limit their use of CPUs to solve only the most basic problems. The risk of being obsolete will render generic compute infrastructure for ML/AI unviable. That's reason enough for organizations to switch to cloud platforms.
The increase in specialized chips and hardware will also lead to incremental algorithm improvements that leverage the hardware. While new hardware/chips may allow the use of AI/ML solutions that were earlier considered slow or impossible, a lot of the open-source tooling that currently powers generic hardware needs to be rewritten to benefit from the newer chips. Recent examples of algorithm improvements include Sideways, to speed up DL training by parallelizing the training steps, and Reformer, to optimize the use of memory and compute power.
I also foresee a gradual shift in the focus on data privacy towards the privacy implications of ML models. A lot of emphasis has been placed on how and what data we gather and how we use it. But ML models are not true black boxes. It is possible to infer the model inputs based on outputs over time, which leads to privacy leakage. Challenges in data and model privacy will force organizations to embrace federated learning solutions. Last year, Google released TensorFlow Privacy, a framework that works on the principle of differential privacy and the addition of noise to obscure inputs. With federated learning, a user's data never leaves their device/machine. These machine learning models are smart enough, and have a small enough memory footprint, to run on smartphones and learn from the data locally.
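To ground the idea, here is a toy NumPy sketch of one federated-averaging round in which each client's update is clipped and noise is added to the aggregate, a rough stand-in for the differential-privacy principle of obscuring any single contribution. The model (a tiny logistic regression), the clipping and noise values, and all names are illustrative assumptions; this is not how TensorFlow Privacy or any production federated system is implemented.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's on-device training for a toy logistic-regression model.
    The raw data (X, y) never leaves this function; only the weight delta does."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w - weights

def federated_round(weights, clients, clip=1.0, noise_std=0.1, rng=np.random.default_rng(0)):
    """One round of federated averaging with clipped client updates and
    noise added to the aggregate, to obscure any single client's contribution."""
    deltas = []
    for X, y in clients:
        d = local_update(weights, X, y)
        d *= min(1.0, clip / (np.linalg.norm(d) + 1e-12))  # limit each client's influence
        deltas.append(d)
    return weights + np.mean(deltas, axis=0) + rng.normal(0, noise_std, size=weights.shape)

# toy usage: three "devices", each holding its own private data
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(50, 3)), rng.integers(0, 2, 50).astype(float)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = federated_round(w, clients)
print(w)
```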
Usually, the basis for asking for a user's data was to personalize their individual experience. For example, Google Mail uses the individual user's typing behavior to provide autosuggestions. What about data/models that will help improve the experience not just for that individual but for a wider group of people? Would people be willing to share their trained model (not data) to benefit others? There is an interesting business opportunity here: paying users for model parameters that come from training on the data on their local device, and using their local computing power to train models (for example, on their phone when it is relatively idle).
Currently, organizations are struggling to productionize models for scalability and reliability. The people who are writing the models are not necessarily experts on how to deploy them with model safety, security, and performance in mind. Once machine learning models become an integral part of mainstream and critical applications, this will inevitably lead to attacks on models similar to the denial-of-service attacks mainstream apps currently face. We've already seen some low-tech examples of what this could look like: making a Tesla speed up instead of slow down, switch lanes, abruptly stop, or turn on its wipers without proper triggers. Imagine the impacts such attacks could have on financial systems, healthcare equipment, and other systems that rely heavily on AI/ML.
Currently, adversarial attacks are limited to academia, where they help us understand the implications of models better. But in the not too distant future, attacks on models will be for profit, driven by competitors who want to show they are somehow better, or by malicious hackers who may hold you to ransom. For example, new cybersecurity tools today rely on AI/ML to identify threats like network intrusions and viruses. What if I am able to trigger fake threats? What would be the costs associated with separating real from fake alerts?
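To show what an attack on a model can look like, here is a minimal NumPy sketch of the fast gradient sign method applied to a toy logistic-regression "detector": a small, targeted perturbation pushes a malicious sample's detection score down. Everything here (the model, the features, the epsilon) is an illustrative assumption, not an attack on any real product.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Fast Gradient Sign Method on a toy logistic-regression detector:
    nudge the input in the direction that most increases the loss on the
    true label, which lowers the detection score for a malicious sample."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y_true) * w  # gradient of the cross-entropy loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# toy usage: a sample labelled malicious (y_true=1); the perturbation pushes its score down
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0
x = rng.normal(size=8)
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.3)
print("score before:", sigmoid(x @ w + b), "score after:", sigmoid(x_adv @ w + b))
```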
To counter such threats, organizations need to put more emphasis on model verification to ensure robustness. Some organizations are already using adversarial networks to test deep neural networks. Today, we hire external experts to audit network security, physical security, etc. Similarly, we will see the emergence of a new market for model testing and model security experts, who will test, certify, and maybe take on some liability of model failure.
Organizations aspiring to drive value through their AI investments need to revisit the implications for their data pipelines. The trends I've outlined above underscore the need for organizations to implement strong governance around their AI/ML solutions in production. It's too risky to assume your AI/ML models are robust, especially when they're left to the mercy of platform providers. Therefore, the need of the hour is to have in-house experts who understand why models work or don't work. And that's one trend that's here to stay.
Sudharsan Rangarajan is Vice President of Engineering at Publicis Sapient.
Read more here:
3 important trends in AI/ML you might be missing - VentureBeat
Is Machine Learning Always The Right Choice? – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times
By: Mark Krupnik, PhD, Founder and CEO, Retalon
Since this article will probably come out during income tax season, let me start with the following example: Suppose we would like to build a program that calculates income tax for people. According to US federal income tax rules: for single filers, all income less than $9,875 is subject to a 10% tax rate. Therefore, if you have $9,900 in taxable income, the first $9,875 is subject to the 10% rate and the remaining $25 is subject to the tax rate of the next bracket (12%).
This is an example of rules or an algorithm (set of instructions) for a computer.
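For illustration, here is that rule written as a short Python function. It encodes only the two single-filer brackets quoted above (10% up to $9,875, then 12%); higher brackets are deliberately left out, and the function name is just a placeholder.

```python
def income_tax(taxable_income):
    """Progressive tax using only the two brackets from the example above:
    10% up to $9,875, then 12% on the remainder (higher brackets omitted)."""
    brackets = [(9_875, 0.10), (float("inf"), 0.12)]  # (upper bound, rate)
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        portion = min(taxable_income, upper) - lower  # income falling in this bracket
        if portion <= 0:
            break
        tax += portion * rate
        lower = upper
    return tax

# the article's example: $9,900 -> 10% of $9,875 plus 12% of the remaining $25
print(income_tax(9_900))  # 987.50 + 3.00 = 990.50
```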
Let's look at this from a formal, pragmatic point of view. A computer equipped with this program can achieve the goal (calculate tax) without human help. So technically, this can be classified as Artificial Intelligence.
But is it cool enough? No. It's not. That is why many people would not consider it part of AI. They may say that if we already know how to do a certain thing, then the process cannot be considered real intelligence. This is a phenomenon that has become known as the AI Effect. One of the first references is known as Tesler's theorem, which says: "AI is whatever hasn't been done yet."
In the eyes of some people, the cool part of AI is associated with machine learning, and more specifically with deep learning, which requires no instructions and utilizes neural nets to learn everything by itself, like a human brain.
The reality is that human development is a combination of multiple processes, including both instructions and neural net training, as well as many other things.
Let's take another simple example: if you work in a workshop on a complex project, you may need several tools, for instance a hammer, a screwdriver, pliers, etc. Of course, you can make up a task that can be solved by using only a hammer or only a screwdriver, but for most real-life projects you will likely need to use various tools in combination to a certain extent.
In the same manner, AI also consists of several tools (such as algorithms, supervised and unsupervised machine learning, etc.). Solving a real-life problem requires a combination of these tools, and depending on the task, they can be used in different proportions or not used at all.
There are, and there will always be, situations where each of these methods is preferred over the others.
For example, the tax calculation task described at the beginning of this article will probably not be delegated to machine learning. There are good reasons for this, for example:
the solution of this problem does not depend on data;
the process should be controllable, observable, and 100% accurate (you can't just be 80% accurate on your income taxes).
However, the task of assessing income tax submissions to identify potential fraud is a perfect application for ML technologies.
Equipped with a number of well-labelled data inputs (age, gender, address, education, National Occupational Classification code, job title, salary, deductions, calculated tax, last year's tax, and many others), and using the same type of information available from millions of other people, ML models can quickly identify outliers.
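As a hypothetical sketch of what such outlier detection might look like, the snippet below fits an isolation forest to synthetic stand-ins for a few of the features listed above. The data, the feature subset and the choice of scikit-learn's IsolationForest are all assumptions for illustration; the article does not describe the actual models used on tax data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# synthetic stand-in for a few tax-return features mentioned above
# (age, salary, deductions, calculated tax, last year's tax); values are made up
rng = np.random.default_rng(0)
n = 10_000
salary = rng.lognormal(mean=11, sigma=0.4, size=n)
returns = np.column_stack([
    rng.integers(20, 70, n),                    # age
    salary,                                     # salary
    salary * rng.uniform(0.02, 0.15, n),        # deductions
    salary * 0.2,                               # calculated tax (toy flat rate)
    salary * 0.2 * rng.uniform(0.9, 1.1, n),    # last year's tax
])

model = IsolationForest(contamination=0.01, random_state=0).fit(returns)
flags = model.predict(returns)          # -1 = outlier, 1 = inlier
suspicious = np.where(flags == -1)[0]   # sent to analysts, not auto-labelled as fraud
print(len(suspicious), "returns flagged for review")
```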
What happens next? The outliers in the data are not necessarily all fraud. Data scientists will analyse the anomalies and try to understand why these individuals were flagged. It is quite possible that they will find some additional factors that had to be considered (feature engineering), for example a split between tax on salary and tax on capital gains from investments. In this case, they would probably add an instruction to the computer to split this data set based on income type. At this very moment, we are not dealing with a pure ML model anymore (as the scientists just added an instruction), but rather with a combination of multiple AI tools.
ML is a great technology that can already solve many specific tasks. It will certainly expand to many areas, due to its ability to adapt to change without major effort on the human side.
At the same time, those segments that can be solved using specific instructions and require a predictable outcome (financial calculations), or those involving high risk (human life, health, very expensive and risky projects), require more control, and if the algorithmic approach can provide it, it will still be used.
For practical reasons, to solve any specific complex problem, the right combination of tools and methods of both types is required.
About the Author:
Mark Krupnik, PhD, is the founder and CEO of Retalon, an award-winning provider of retail AI and predictive analytics solutions for planning, inventory optimization, merchandising, pricing and promotions. Mark is a leading expert on building and delivering state-of-the-art solutions for retailers.
Go here to read the rest:
Is Machine Learning Always The Right Choice? - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times
How artificial intelligence outsmarted the superbugs – The Guardian
One of the seminal texts for anyone interested in technology and society is Melvin Kranzberg's Six Laws of Technology, the first of which says that technology is neither good nor bad; nor is it neutral. By this, Kranzberg meant that technology's interaction with society is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.
The saloon-bar version of this is that technology is both good and bad; it all depends on how it's used, a tactic that tech evangelists regularly deploy as a way of stopping the conversation. So a better way of using Kranzberg's law is to ask a simple Latin question, cui bono? Who benefits from any proposed or hyped technology? And, by implication, who loses?
With any general-purpose technology, which is what the internet has become, the answer is going to be complicated: various groups, societies, sectors, maybe even continents win and lose, so in the end the question comes down to: who benefits most? For the internet as a whole, it's too early to say. But when we focus on a particular digital technology, then things become a bit clearer.
A case in point is the technology known as machine learning, a manifestation of artificial intelligence that is the tech obsession de nos jours. It's really a combination of algorithms that are trained on big data, i.e. huge datasets. In principle, anyone with the computational skills to use freely available software tools such as TensorFlow could do machine learning. But in practice they can't, because they don't have access to the massive data needed to train their algorithms.
This means the outfits where most of the leading machine-learning research is being done are a small number of tech giants, especially Google, Facebook and Amazon, which have accumulated colossal silos of behavioural data over the last two decades. Since they have come to dominate the technology, the Kranzberg question (who benefits?) is easy to answer: they do. Machine learning now drives everything in those businesses: personalisation of services, recommendations, precisely targeted advertising, behavioural prediction. For them, AI (by which they mostly mean machine learning) is everywhere. And it is making them the most profitable enterprises in the history of capitalism.
As a consequence, a powerful technology with great potential for good is at the moment deployed mainly for privatised gain. In the process, it has been characterised by unregulated premature deployment, algorithmic bias, reinforcing inequality, undermining democratic processes and boosting covert surveillance to toxic levels. That it doesn't have to be like this was vividly demonstrated last week with a report in the leading biological journal Cell of an extraordinary project, which harnessed machine learning in the public (as opposed to the private) interest. The researchers used the technology to tackle the problem of bacterial resistance to conventional antibiotics, a problem that is rising dramatically worldwide, with predictions that, without a solution, resistant infections could kill 10 million people a year by 2050.
The team of MIT and Harvard researchers built a neural network (an algorithm inspired by the brain's architecture) and trained it to spot molecules that inhibit the growth of the Escherichia coli bacterium, using a dataset of 2,335 molecules for which the antibacterial activity was known, including a library of 300 existing approved antibiotics and 800 natural products from plant, animal and microbial sources. They then asked the network to predict which would be effective against E coli but looked different from conventional antibiotics. This produced a hundred candidates for physical testing and led to one (which they named halicin, after the HAL 9000 computer from 2001: A Space Odyssey) that was active against a wide spectrum of pathogens, notably including two that are totally resistant to current antibiotics and are therefore a looming nightmare for hospitals worldwide.
There are a number of other examples of machine learning for public good rather than private gain. One thinks, for example, of the collaboration between Google DeepMind and Moorfields eye hospital. But this new example is the most spectacular to date because it goes beyond augmenting human screening capabilities to aiding the process of discovery. So while the main beneficiaries of machine learning for, say, a toxic technology like facial recognition are mostly authoritarian political regimes and a range of untrustworthy or unsavoury private companies, the beneficiaries of the technology as an aid to scientific discovery could be humanity as a species. The technology, in other words, is both good and bad. Kranzberg's first law rules OK.
Every cloud: Zeynep Tufekci has written a perceptive essay for the Atlantic about how the coronavirus revealed authoritarianism's fatal flaw.
EU ideas explained: Politico writers Laura Kayali, Melissa Heikkilä and Janosch Delcker have delivered a shrewd analysis of the underlying strategy behind recent policy documents from the EU dealing with the digital future.
On the nature of loss: Jill Lepore has written a knockout piece for the New Yorker under the heading "The lingering of loss", on friendship, grief and remembrance. One of the best things I've read in years.
See the original post here:
How artificial intelligence outsmarted the superbugs - The Guardian