Category Archives: Machine Learning
The FCA and Bank of England step into the AI and machine learning debate – Lexology
On 23 January 2020, the Financial Conduct Authority (FCA) and the Bank of England (BofE) announced that they will be establishing the Financial Services Artificial Intelligence Public-Private Forum (AIPPF).
The aim of the AIPPF will be to progress the regulators' dialogue with the public and private sectors to better understand the relevant technical and public policy issues related to the adoption of artificial intelligence (AI) and machine learning (ML). It will gather views on potential areas where principles, guidance or good practice examples could support the safe adoption of such technologies, and explore whether ongoing industry input could be useful and what form this could take. The AIPPF will also share information and examine the practical challenges of using AI and ML within the financial services sector, as well as the barriers to deployment and any potential risks or trade-offs.
Participating in the AIPPF will be by invitation only, with the final selection taken at the discretion of both the BofE and the FCA. Firms that are active in the development of AI and the use of ML will be prioritised over public authorities and academics. It will be co-chaired by Sir Dave Ramsden, deputy governor for markets and banking at the BofE, and Christopher Woolard, Executive Director of Strategy and Competition at the FCA.
This comes at a time when financial services institutions such as banks and fund managers rely heavily on technology to deliver regulatory reporting that is both timely and accurate, and when the stakes are high: institutions can face multi-million pound penalties for failing to meet their reporting obligations, despite having invested time and money in systems designed to meet them. The regulators themselves are also looking to reduce the burden on their own supervisors, who receive ever-increasing data sets from financial services firms each week, as explored in the BofE's discussion paper, Transforming data collection from the UK financial sector, published on 7 January 2020.
The BofE's discussion paper marked the start of a process of working closely with firms to ensure that it and the FCA improve data collection and ease the administrative and financial burden on firms of delivering that data; the announcement of the AIPPF marks the next step in this process.
In a similar way, the European Commission (EC) established its High-Level Expert Group on Artificial Intelligence in October 2019 to support the implementation of its AI vision, namely that AI must be trustworthy and human-centric. Notable recommendations made by the group that are relevant to the banking sector included developing and supporting AI-specific cybersecurity infrastructures, upskilling and reskilling the current workforce, and developing legally compliant and ethical data management and sharing initiatives in Europe. The EC also plans to set out rules relating to AI over the next five years (as of June 2019).
There is a clear indication that the regulators both in and outside the UK are taking a keener interest in how regulated entities deploy AI and machine learning. Given the direction of regulatory travel planned by the EC, we expect this emerging trend to result in similar regulatory guidance and possible rules in the UK over the coming years.
Here's what happens when you apply machine learning to enhance the Lumières' 1896 movie "Arrival of a Train at La Ciotat" – Boing Boing
First, take a look at this 1895 short movie "L'Arrivée d'un train à La Ciotat" ("Arrival of a Train at La Ciotat"), from the Lumière brothers. This film was upscaled to 4K and 60 frames per second using a variety of neural networks and other enhancement techniques. The result can be seen in the video below:
The Spot has an article about how it was done:
[YouTuber Denis Shiryaev] used a mix of neural networks from Gigapixel AI and a technique called depth-aware video frame interpolation to not only upscale the resolution of the video, but also increase its frame rate to something that looks a lot smoother to the human eye.
BrainChip Showcases Vision and Learning Capabilities of its Akida Neural Processing IP and Device at tinyML Summit 2020 – Yahoo Finance
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that it will present its revolutionary new breed of neuromorphic processing IP and device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California, February 12-13.
In the Poster Session, "Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor," representatives from BrainChip will explain to attendees how the company's Akida Neuromorphic System-on-Chip processes standard vision CNNs using industry-standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. These features allow Akida to require 40 to 60 percent fewer computations to process a given CNN when compared to a deep-learning accelerator (DLA), as well as allowing it to perform learning directly on the chip.
BrainChip will also demonstrate "On-Chip Learning with Akida" in a presentation by Senior Field Applications Engineer Chris Anastasi. The demonstration will involve capturing a few hand gestures and hand positions from the audience using a Dynamic Vision Sensor camera and performing live learning and classification using the Akida neuromorphic platform. This will showcase the fast and lightweight unsupervised live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which requires much less data than a traditional deep neural network (DNN) counterpart and consumes much less power during training.
"We look forward to having the opportunity to share the advancements we have made with our flexible neural processing technology in our Poster Session and Demonstration at the tinyML Summit," said Louis DiNardo, CEO of BrainChip. "We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems. We believe that as a high-performance and ultra-power neural processor, Akida is ideally suited to be implemented at the Edge and IoT applications."
Akida is available as licensable IP that can be integrated into ASIC devices and will also be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision-guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and the Industrial Internet of Things (IoT). Akida performs neural processing at the edge, which vastly reduces the computing resources required of the system host CPU. This unprecedented efficiency not only delivers faster results, it consumes only a tiny fraction of the power of traditional AI processing while enabling customers to develop solutions with industry-standard flows, such as TensorFlow/Keras. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.
Tiny machine learning (tinyML) is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms and software, capable of performing on-device analytics of sensor data (vision, audio, IMU, biomedical, etc.) at extremely low power, typically in the mW range and below, thereby enabling a variety of always-on use cases and targeting battery-operated devices. tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (systems, hardware, algorithms, software, applications) at a deep technical level, a unique feature of the tinyML Summits. Additional information about the event is available at https://tinymlsummit.org/
About BrainChip Holdings Ltd (ASX: BRN)
BrainChip is a global technology company that has developed a revolutionary advanced neural networking processor that brings artificial intelligence to the edge in a way that existing technologies cannot. The solution is high performance, small, ultra-low power and enables a wide array of edge capabilities that include local training, learning and inference. The Company markets an innovative event-based neural network processor that is inspired by the spiking nature of the human brain and implements the network processor in an industry-standard digital process. By mimicking brain processing, BrainChip has pioneered a spiking neural network, called Akida, which is both scalable and flexible enough to address the requirements of edge devices. At the edge, sensor inputs are analyzed at the point of acquisition rather than transmitted to the cloud or a datacenter. Akida is designed to provide a complete ultra-low power AI edge network for vision, audio and smart transducer applications. The reduction in system latency provides a faster response and a more power-efficient system that can reduce the large carbon footprint of datacenters. Additional information is available at https://www.brainchipinc.com
VUniverse Named One of Five Finalists for SXSW Innovation Awards: AI & Machine Learning Category – Yahoo Finance
Company to Demonstrate Live at Finalists Showcase in Austin, TX on Saturday, March 14
NEW YORK, Feb. 5, 2020 /PRNewswire/ -- VUniverse, a personalized movie and show recommendation platform that enables users to browse their streaming services in one app (a channel guide for the streaming universe), announced today it has been named one of five finalists in the AI & Machine Learning category for the 23rd annual SXSW Innovation Awards.
The SXSW Innovation Awards recognize the most exciting tech developments in the connected world. During the showcase on Saturday, March 14, 2020, VUniverse will offer first-look demos of its platform as attendees explore this year's most transformative and forward-thinking digital projects. They'll be invited to experience how VUniverse utilizes AI to cross-reference all the streaming services a user subscribes to and then deliver personalized suggestions of what to watch.
"We're honored to be recognized as a finalist for the prestigious SXSW Innovation Awards and look forward to showcasing our technology that helps users navigate the increasingly ever-changing streaming service landscape," said VUniverse co-founder Evelyn Watters-Brady. "With VUniverse, viewers will spend less time searching and more time watching their favorite movies and shows, whether it be a box office hit or an obscure indie gem."
About VUniverse VUniverse is a personalized movie and show recommendation platform that enables users to browse their streaming services in one app (a channel guide for the streaming universe). Using artificial intelligence, VUniverse creates a unique taste profile for every user and serves smart lists of curated titles using mood, genre, and user-generated tags, all based on content from the user's existing subscription services. Users can also create custom watchlists and share them with friends and family.
Global Machine Learning Market to garner returns worth USD 20.83 Billion by 2024 – Northwest Trail
The Global Machine Learning Market is set for rapid growth and is expected to reach around USD 20.83 Billion by 2024. The research report provides the newest industry data and future industry trends, allowing you to identify the products and end users driving revenue growth and profitability. Leading market research firm Zion Market Research has added an industry report on the Machine Learning Market, consisting of 110+ pages with a TOC (Table of Contents) including a list of tables and figures for the forecast period. The Machine Learning Market report offers comprehensive research updates and information related to market growth, demand, and opportunities in the global Machine Learning Market.
FREE | Request a Sample of the Machine Learning Market Report @ www.zionmarketresearch.com/sample/machine-learning-market
Our free complimentary sample report includes a brief introduction to the research report, the TOC, the list of tables and figures, the competitive landscape and geographic segmentation, and innovation and future developments based on the research methodology.
The global Machine Learning Market report offers an extensive analysis of realistic data collected from the global Machine Learning Market. It demonstrates the major trends and key drivers playing an important role in the growth of the global Machine Learning Market during the forecast period. The report focuses on the analysis of key features such as drivers, new development opportunities, and restraints influencing the expansion of the Machine Learning Market over the forecast period.
The report covers a detailed analysis of the development of the Machine Learning Market in the coming years. The global Machine Learning Market is segmented based on product categories, delivery channels, and applications.
Major Market Players Included in This Report:
International Business Machines Corporation, Microsoft Corporation, Amazon Web Services Inc., BigML Inc., Google Inc., Hewlett Packard Enterprise Development LP, Intel Corporation, and others.
A complete value chain of the global Machine Learning Market is emphasized in the global Machine Learning Market report along with the review of the downstream and upstream components influencing the global Machine Learning Market. It analyzes the expansion of every segment of the Machine Learning Market. The data presented in the research report is collected from various industry organizations to estimate the development of each segment of the global Machine Learning Market in the coming period.
The global Machine Learning Market research report presents market dynamics and inclinations influencing the growth of the global Machine Learning Market. It uses SWOT analysis to review the competitive players of the Machine Learning Market. Furthermore, the report also includes a synopsis of the various business strategies of the key players of the Machine Learning Market.
Download Free PDF Report Brochure @ www.zionmarketresearch.com/requestbrochure/machine-learning-market
Promising Regions & Countries Mentioned In The Machine Learning Market Report:
The report focuses on the latest market trends and major growth opportunities assisting the expansion of the global Machine Learning Market. On the basis of geography, the global Machine Learning Market is classified into Europe, North America, Latin America, the Middle East & Africa, and the Asia Pacific.
The Machine Learning Market report provides company market size, share analysis in order to give a broader overview of the key players in the market. Additionally, the report also includes key strategic developments of the market including acquisitions & mergers, new product launch, agreements, partnerships, collaborations & joint ventures, research & development, product and regional expansion of major participants involved in the market on the global and regional basis.
Browse Press Release: www.zionmarketresearch.com/news/machine-learning-market
The following 15 chapters represent the Machine Learning Market globally:
Chapter 1 lists the goals of the global Machine Learning Market report, covering the market introduction, product image, market summary, development scope, and Machine Learning Market presence;
Chapter 2 studies the key global Machine Learning Market competitors, their sales volume, market profits and the price of Machine Learning in 2018 and 2026;
Chapter 3 shows the competitive landscape of the global Machine Learning Market on the basis of dominant market players and their share in the market growth in 2018 and 2026;
Chapter 4 conducts a region-wise study of the global Machine Learning Market based on the sales ratio in each region and market share from 2018 to 2026;
Chapters 5, 6, 7, 8 and 9 cover the key countries in these regions which have a revenue share in the Machine Learning Market;
Chapters 10 and 11 describe the market based on Machine Learning product category, the wide range of applications, and growth based on market trend, type and application from 2018 to 2026;
Chapter 12 shows the global Machine Learning Market plans during the forecast period from 2018 to 2026, separated by region, type, and product application;
Chapters 13, 14 and 15 cover the global Machine Learning Market sales channels, market vendors, dealers, market information and study conclusions, appendix and data sources.
Inquire more about this report @ www.zionmarketresearch.com/inquiry/machine-learning-market
Available Array of Customizations:
Country-level bifurcation of data in terms of Product Type (Concentration, Temperature, Combustion, Conductivity, and Others) and Application (Petrochemical, Metallurgy, Electricity, and Others) for any specific country/countries.
Expansion of scope and data forecasts until 2030
Company Market Share for specific country/countries and regions
Customized Report Framework for Go-To Market Strategy
Customized Report Framework for Merger & Acquisitions and Partnerships/JVs Feasibility
Customized Report Framework for New Product/Service Launch and/or Expansion
Detailed Report and Deck for any specific Company operating in Machine Learning Market
Any other Miscellaneous requirements with feasibility analysis
Reasons for Buying this Report
Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia.
AI Hype: Why the Reality Often Falls Short of Expectations – insideBIGDATA
In this special guest feature, AJ Abdallat, CEO of Beyond Limits, takes a look at the tech industry's hype cycle, in particular how AI often falls short of expectations. Beyond Limits is a full-stack artificial intelligence engineering company creating advanced software solutions that go beyond conventional AI. Founded in 2014, Beyond Limits is transforming proven technologies from Caltech and NASA's Jet Propulsion Laboratory into advanced AI solutions, hardened to industrial strength and put to work for forward-looking companies on Earth.
Despite what we see in science fiction, artificial intelligence (AI) is not likely going to produce sentient machines that will take over Earth, subordinate human beings, or change the hierarchy of the planet's food chain. Nor will it be humanity's savior.
AI essentially equates to the ability of machines to perform tasks that usually require human reasoning. The concept of artificial intelligence has existed for more than 60 years, and modern AI systems are revolutionizing how people live and work. However, conventional AI solutions do not use the technology to its fullest potential.
Decisions are usually made inside black boxes
Conventional AI solutions operate inside black boxes, unable to explain or substantiate their reasoning or decisions. These solutions depend on intricate neural networks that are too complex for people to understand. Companies utilizing conventional AI approaches are primarily in somewhat of a quandary because they don't know how or why the system produces its conclusions, and most AI firms refuse to divulge, or are unable to divulge, the inner workings of their technology.
However, these smart systems aren't generally all that smart. They can process very large, complex data sets, but cannot employ human-like reasoning or problem-solving. They see data as a series of numbers, label those numbers based on how they were trained, and depend on recognition to solve problems. When presented with data, a conventional AI system asks itself if it has seen the information before and, if so, how it labeled that data last time. It cannot diagnose or solve problems in real time unless it has the ability to communicate with human operators.
Scenarios do exist where AI users may not be as concerned about collecting information around reasoning because the consequences of a negative outcome are minimal, such as algorithms that recommend items based on consumers' purchasing or viewing history. However, trusting the decisions of black box-oriented AI is extremely problematic in high-value, high-risk industries such as finance, healthcare, and energy, where machines may be tasked to make recommendations on which millions of dollars, or the safety and well-being of humans, hang in the balance.
Imperfect edge conditions complicate matters
Enterprises are increasingly deploying AI systems to monitor IoT devices in far-flung environments where humans are not always present, and internet connectivity is spotty at best; think highway cams, drones that survey farmlands, or an oil rig infrastructure in the middle of the ocean. One-quarter of organizations with established IoT strategies are also investing in AI.
Cognitive AI solves these problems
Cognitive AI solutions solve these issues by employing human-like problem-solving and reasoning skills that let users see inside the black box. They do not replace the complex neural networks applied by conventional solutions, but instead interpret their outputs and use natural-language declarations to provide an annotated narrative that humans can understand. Cognitive AI systems understand how they solve problems and are also aware of the context that makes the information relevant. So instead of being asked to implicitly trust the conclusions of a machine, with cognitive AI, human users can actually obtain audit trails that substantiate the system's recommendations with evidence, risk assessment, certainty, and uncertainty.
The level of explainability generated by an AI system is based on its use case. In general, the higher the stakes, the more explainability is needed. A robust cognitive AI system should have the autonomy to adjust the depth of its explanations based on who is viewing the information and in what context.
Audit trails in the form of decision trees are one of the most helpful methods for illustrating the cognitive AI reasoning process behind recommendations. The top of a tree represents the minimum amount of information explaining a decision process, while the bottom denotes explanations that go into the greatest amount of detail. For this reason, explainability is classified into two categories: top-down or bottom-up.
The top-down approach is for end users who don't require intricate details, only a positive or negative point of reference about whether or not an answer is correct. For example, a manager may think that a panel on a solar farm isn't working properly and simply needs to know the status of the solar panel; a cognitive AI system could generate a prediction around how much energy the panel will generate in its current condition.
On the other hand, a bottom-up approach would be more useful for engineers dispatched to fix the problem. These users could query the cognitive AI system at any point along its decision tree and obtain detailed information and suggestions to remedy the problem.
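To make the idea concrete, here is a minimal, hypothetical sketch of a depth-adjustable explanation tree; the class, field names and solar-panel figures are illustrative assumptions, not any vendor's actual API:

# Hypothetical sketch of a depth-adjustable explanation tree; names and
# numbers are illustrative only and do not reflect any vendor's product.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationNode:
    statement: str                       # human-readable reasoning step
    confidence: float                    # certainty attached to this step
    children: List["ExplanationNode"] = field(default_factory=list)

    def explain(self, depth: int = 1) -> List[str]:
        """depth=1 gives the top-down summary; larger depths walk toward
        the detailed, bottom-up evidence an engineer would query."""
        lines = [f"{self.statement} (confidence {self.confidence:.0%})"]
        if depth > 1:
            for child in self.children:
                lines.extend("  " + line for line in child.explain(depth - 1))
        return lines

# Solar-panel example from the text above (values are made up).
root = ExplanationNode(
    "Panel A7 is expected to produce 40% less energy today", 0.87,
    children=[
        ExplanationNode("Output dropped 35% versus neighboring panels", 0.95),
        ExplanationNode("Irradiance readings rule out weather as the cause", 0.80),
    ],
)
print("\n".join(root.explain(depth=1)))   # manager's top-down view
print("\n".join(root.explain(depth=3)))   # engineer's bottom-up detail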
If AI is to live up to its promise of transforming society, human users must be comfortable with the idea of trusting machine-generated decisions. Cognitive, explainable AI makes this possible. It breaks down organizational silos and bridges gaps between IT personnel and the non-technical executive decision-makers of an organization, enabling optimal effectiveness in governance, compliance, risk management and quality assurance, while improving accountability.
AI, machine learning, robots, and marketing tech coming to a store near you – TechRepublic
Retailers are harnessing the power of new technology to dig deeper into customer decisions and bring people back into stores.
The National Retail Federation's 2020 Big Show in New York was jam-packed with robots, frictionless store mock-ups, and audacious displays of the latest technology now available to retailers.
Dozens of robots, digital signage tools, and more were available for retail representatives to test out, with hundreds of the biggest tech companies in attendance offering a bounty of eye-popping gadgets designed to increase efficiency and bring the wow factor back to brick-and-mortar stores.
SEE: Artificial intelligence: A business leader's guide (free PDF) (TechRepublic)
Here are some of the biggest takeaways from the annual retail event.
With the explosion in popularity of Amazon, Alibaba, and other e-commerce sites ready to deliver goods right to your door within days, many analysts and retailers figured the brick-and-mortar stores of the past were on their last legs.
But it turns out billions of customers still want the personal, tailored touch of in-store experiences and are not ready to completely abandon physical retail outlets.
"It's not a retail apocalypse. It's a retail renaissance," said Lori Mitchell-Keller, executive vice president and global general manager of consumer industries at SAP.
As leader of SAP's retail, wholesale distribution, consumer products, and life sciences industries division, Mitchell-Keller said she was surprised to see that retailers had shifted their stance and were looking to find ways to beef up their online experience while infusing stores with useful but flashy technology.
"Brick-and-mortar stores have this unique capability to have a specific advantage against online retailers. So despite the trend where everything was going online, it did not mean online at the expense of brick-and-mortar. There is a balance between the two. Those companies that have a great online experience and capability combined with a brick-and-mortar store are in the best place in terms of their ability to be profitable," Mitchell-Keller said during an interview at NRF 2020.
"There is an experience that you cannot get online. This whole idea of customer experience and experience management is definitely the best battleground for the guys that can't compete in delivery. Even for the ones that can compete on delivery, like the Walmarts and Targets, they are using their brick-and-mortar stores to offer an experience that you can't get online. We thought five years ago that brick-and-mortar was dead and it's absolutely not dead. It's actually an asset."
In her experience working with the world's biggest retailers, companies that have a physical presence actually have a huge advantage because customers are now yearning for a personalized experience they can't get online. While e-commerce sites are fast, nothing can beat the ability to have real people answer questions and help customers work through their options, regardless of what they're shopping for.
Retailers are also transforming parts of their stores into fulfillment centers for their online sales, which has the added effect of bringing customers into the store, where they may spend even more on things they see.
"The brick-and-mortar stores that are using their stores as fulfillment centers have a much lower cost of delivery because they're typically within a few miles of customers. If they have a great online capability and good store fulfillment, they're able to get to customers faster than the aggregators," Mitchell-Keller said. "It's better to have both."
SEE: Feature comparison: E-commerce services and software (TechRepublic Premium)
But one of the main trends, and problems, highlighted at NRF 2020 was the sometimes difficult transition many retailers have had to make to a digitized world.
NRF 2020 was full of decadent retail tech tools like digital price tags, shelf-stocking robots and next-gen advertising signage, but none of this could be incorporated into a retail environment without a basic amount of tech talent and the systems to back it all up.
"It can be very overwhelmingly complicated, not to mention costly, just to have a team to manage technology and an environment that is highly digitally integrated. The solution we try to bring to bear is to add all these capabilities or applications into a turnkey environment because fundamentally, none of it works without the network," said Michael Colaneri, AT&T's vice president of retail, restaurants and hospitality.
While it would be easy for a retailer to leave NRF 2020 with a fancy robot or cool gadget, companies typically have to think bigger about the changes they want to see, and generally these kinds of digital transformations have to be embedded deep throughout the supply chain before they can be incorporated into stores themselves.
Colaneri said much of AT&T's work involved figuring out how retailers could connect the store system, the enterprise, the supply chain and then the consumer, to both online and offline systems. The e-commerce part of retailer's business now had to work hand in hand with the functionality of the brick-and-mortar experience because each part rides on top of the network.
"There are five things that retailers ask me to solve: Customer experience, inventory visibility, supply chain efficiency, analytics, and the integration of media experiences like a robot, electronic shelves or digital price tags. How do I pull all this together into a unified experience that is streamlined for customers?" Colaneri said.
"Sometimes they talk to me about technical components, but our number one priority is inventory visibility. I want to track products from raw material to where it is in the legacy retail environment. Retailers also want more data and analytics so they can get some business intelligence out of the disparate data lakes they now have."
The transition to digitized environments is different for every retailer, Colaneri added. Some want slow transitions and gradual introductions of technology while others are desperate for a leg up on the competition and are interested in quick makeovers.
While some retailers have balked at the thought, and price, of wholesale changes, the opposite approach can end up being just as costly.
"Anybody that sells you a digital sign, robot, Magic Mirror or any one of those assets is usually partnering with network providers because it requires the network. And more importantly, what typically happens is if someone buys an asset, they are underestimating the requirements it's going to need from their current network," Colaneri said.
"Then when their team says 'we're already out of bandwidth,' you'll realize it wasn't engineered and that the application wasn't accommodated. It's not going to work. It can turn into a big food fight."
Retailers are increasingly realizing the value of artificial intelligence and machine learning as a way to churn through troves of data collected from customers through e-commerce sites. While these tools require the kind of digital base that both Mitchell-Keller and Colaneri mentioned, artificial intelligence (AI) and machine learning can be used to address a lot of the pain points retailers are now struggling with.
Mitchell-Keller spoke of SAP's work with Costco as an example of the kind of real-world value AI and machine learning can add to a business. Costco needed help reducing waste in their bakeries and wanted better visibility into when customers were going to buy particular products on specific days or at specific times.
"Using machine learning, what SAP did was take four years of data out of five different stores for Costco as a pilot and used AI and machine learning to look through the data for patterns to be able to better improve their forecasting. They're driving all of their bakery needs based on the forecast and that forcecast helped Costco so much they were able to reduce their waste by about 30%," Mitchell-Keller said, adding that their program improved productivity by 10%.
SAP and dozens of other tech companies at NRF 2020 offered AI-based systems for a variety of supply chain management tools, employee payment systems and even resume matches. But AI and machine learning systems are nothing without more data.
SEE: Managing AI and ML in the enterprise 2019: Tech leaders expect more difficulty than previous IT projects (TechRepublic Premium)
Jeff Warren, vice president of Oracle Retail, said there has been a massive shift toward better understanding customers through increased data collection. Historically, retailers simply focused on getting products through the supply chain and into the hands of consumers. But now, retailers are pivoting toward focusing on how to better cater services and goods to the customer.
Warren said Oracle Retail works with about 6,000 retailers in 96 different countries and that much of their work now prioritizes collecting information from every customer interaction.
"What is new is that when you think of the journey of the consumer, it's not just about selling anymore. It's not just about ringing up a transaction or line busting. All of the interactions between you and me have value and hold something meaningful from a data perspective," he said, adding that retailers are seeking to break down silos and pool their data into a single platform for greater ease of use.
"Context would help retailers deliver a better experience to you. Its petabytes of information about what the US consumer market is spending and where they're spending. We can take the information that we get from those interactions that are happening at the point of sale about our best customers and learn more."
With the Oracle platform, retailers can learn about their customers and others who may have similar interests or live in similar places. Companies can do a better job of targeting new customers when they know more about their current customers and what else they may want.
IBM is working on similar projects with hundreds of different retailers, all looking to learn more about their customers and tailor their e-commerce as well as in-store experience to suit their biggest fans.
IBM global managing director for consumer industries Luq Niazi told TechRepublic during a booth tour that learning about consumer interests was just one aspect of how retailers could appeal to customers in the digital age.
"Retailers are struggling to work through what tech they need. When there is so much tech choice, how do you decide what's important? Many companies are implementing tech that is good but implemented badly, so how do you help them do good tech implemented well?" Niazi said.
"You have all this old tech in stores and you have all of this new tech. You have to think about how you bring the capability together in the right way to deploy flexibly whatever apps and experiences you need from your store associate, for your point of sale, for your order management system that is connected physically and digitally. You've got to bring those together in different ways. We have to help people think about how they design the store of the future."
The Best 17 AI and Machine Learning TED Talks for Practitioners – Solutions Review
The editors at Solutions Review curated this list of the best AI and machine learning TED talks for practitioners in the field.
TED Talks are influential videos from expert speakers in a variety of verticals. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics from business to technology to global issues in more than 110 languages. TED is building a clearinghouse of free knowledge from the world's top thinkers, and their library of videos is expansive and rapidly growing.
Solutions Review has curated this list of AI and machine learning TED talks to watch if you are a practitioner in the field. Talks were selected based on relevance, ability to add business value, and individual speaker expertise. We've also curated TED talk lists for topics like data visualization and big data.
Erik Brynjolfsson is the director of the MIT Center for Digital Business and a research associate at the National Bureau of Economic Research. He asks how IT affects organizations, markets and the economy. His books include Wired for Innovation and Race Against the Machine. Brynjolfsson was among the first researchers to measure the productivity contributions of information and community technology (ICT) and the complementary role of organizational capital and other intangibles.
In this talk, Brynjolfsson argues that machine learning and intelligence are not the end of growth; they are simply the growing pains of a radically reorganized economy. A riveting case for why big innovations are ahead of us, if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.
Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco. Previously, he was the president and chief scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on Jobs for the Machines.
Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis.
Nick Bostrom is a professor at the University of Oxford, where he heads the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future. Bostrom was honored as one of Foreign Policy's 2015 Global Thinkers. His book Superintelligence advances the ominous idea that the first ultraintelligent machine is the last invention that man need ever make.
In this talk, Nick Bostrom calls machine intelligence the last invention that humanity will ever need to make. Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values, or will they have values of their own?
Li's work with neural networks and computer vision (with Stanford's Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations. Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.
This talk digs into how computers are getting smart enough to identify simple elements. Computer vision expert Fei-Fei Li describes the state of the art including the database of 15 million photos her team built to teach a computer to understand pictures and the key insights yet to come.
Anthony Goldbloom is the co-founder and CEO of Kaggle. Kaggle hosts machine learning competitions, where data scientists download data and upload solutions to difficult problems. Kaggle has a community of over 600,000 data scientists. In 2011 and 2012, Forbes named Anthony one of the 30 under 30 in technology; in 2013 the MIT Tech Review named him one of the top 35 innovators under the age of 35, and the University of Melbourne awarded him an Alumni of Distinction Award.
This talk by Anthony Goldbloom describes some of the current use cases for machine learning, far beyond simple tasks like assessing credit risk and sorting mail.
Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society. Her book, Twitter and Tear Gas, was published in 2017 by Yale University Press.
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns, and in ways we won't expect or be prepared for.
In his book The Business Romantic, Tim Leberecht invites us to rediscover romance, beauty and serendipity by designing products, experiences, and organizations that make us fall back in love with our work and our life. The book inspired the creation of the Business Romantic Society, a global collective of artists, developers, designers and researchers who share the mission of bringing beauty to business.
In this talk, Tim Leberecht makes the case for a new radical humanism in a time of artificial intelligence and machine learning. For the self-described business romantic, this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four principles for building beautiful organizations.
Grady Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.
Grady Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.
Tom Gruber is a product designer, entrepreneur, and AI thought leader who uses technology to augment human intelligence. He was co-founder, CTO, and head of design for the team that created the Siri virtual assistant. At Apple for over 8 years, Tom led the Advanced Development Group that designed and prototyped new capabilities for products that bring intelligence to the interface.
This talk introduces the idea of Humanistic AI. He shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function, from turbocharging our design skills to helping us remember everything we've ever read. The idea of an AI-powered personal memory also extends to relationships, with the machine helping us reflect on our interactions with people over time.
Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California at Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.
His talk centers around the question of whether we can harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover. As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.
Dr. Pratik Shah's research creates novel intersections between engineering, medical imaging, machine learning, and medicine to improve health and diagnose and cure diseases. Research topics include: medical imaging technologies using unorthodox artificial intelligence for early disease diagnoses; novel ethical, secure and explainable artificial intelligence based digital medicines and treatments; and point-of-care medical technologies for real-world data and evidence generation to improve public health.
TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm, and can even use photos taken on doctors' cell phones to provide a diagnosis. Learn more about how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more health care settings worldwide.
Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing, social media, as well as many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's Cognition group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.
Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI and asks us to consider what the technology we create today will mean for tomorrow.
Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. Sharma was recently named in the Forbes 30 Under 30 list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and to the UK Government's Centre for Data Ethics and Innovation.
AI algorithms make important decisions about you all the time like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.
Matt Beane does field research on work involving robots to help us understand the implications of intelligent machines for the broader world of work. Beane is an Assistant Professor in the Technology Management Program at the University of California, Santa Barbara and a Research Affiliate with MIT's Institute for the Digital Economy. He received his PhD from the MIT Sloan School of Management.
The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we're handling AI in a way that blocks that path, and sacrificing learning in our quest for productivity, says organizational ethnographer Matt Beane. Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI's amazing capabilities while enhancing our skills at the same time.
Leila Pirhaji is the founder of ReviveMed, an AI platform that can quickly and inexpensively characterize large numbers of metabolites from the blood, urine and tissues of patients. This allows for the detection of molecular mechanisms that lead to disease and the discovery of drugs that target these disease mechanisms.
Biotech entrepreneur and TED Fellow Leila Pirhaji shares her plan to build an AI-based network to characterize metabolite patterns, better understand how disease develops and discover more effective treatments.
Janelle Shane is the owner of AIweirdness.com. Her book, You Look Like a Thing and I Love You, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.
The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems, like creating new ice cream flavors or recognizing cars on the road, Shane shows why AI doesn't yet measure up to real brains.
Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of more than 800 data scientists and has implemented more than 50 custom AI and analytics solutions for companies across the globe.
In this talk, business technologist Sylvain Duranton advocates for a "Human plus AI" approach, using AI systems alongside humans, not instead of them, and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.
For more AI and machine learning TED talks, browse TED's complete topic collection.
Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.
REPLY: European Central Bank Explores the Possibilities of Machine Learning With a Coding Marathon Organised by Reply – Business Wire
TURIN, Italy--(BUSINESS WIRE)--The European Central Bank (ECB), in collaboration with Reply, leader in digital technology innovation, is organising the Supervisory Data Hackathon, a coding marathon focussing on the application of Machine Learning and Artificial Intelligence.
From 27 to 29 February 2020, at the ECB in Frankfurt, more than 80 participants from the ECB, Reply and other companies will explore possibilities to gain deeper and faster insights into the large amount of supervisory data gathered by the ECB from financial institutions through regular financial reporting for risk analysis. The coding marathon provides a protected space to co-creatively develop new ideas and prototype solutions based on Artificial Intelligence within a short timeframe.
Ahead of the event, participants submit projects in the areas of data quality, interlinkages in supervisory reporting and risk indicators. The most promising submissions will be worked on for 48 hours during the event by the multidisciplinary teams composed of members from the ECB, Reply and other companies.
Reply has proven its Artificial Intelligence and Machine Learning capabilities with numerous projects in various industries and combines this technological expertise with in-depth knowledge of the financial services industry and its regulatory environment.
Coding marathons using the latest technologies are a substantial element in Reply's toolset for sparking innovation through training and knowledge transfer, internally and with clients and partners.
Reply [MTA, STAR: REY] specialises in the design and implementation of solutions based on new communication channels and digital media. As a network of highly specialised companies, Reply defines and develops business models enabled by the new models of big data, cloud computing, digital media and the internet of things. Reply delivers consulting, system integration and digital services to organisations across the telecom and media; industry and services; banking and insurance; and public sectors. http://www.reply.com
Top Machine Learning Projects Launched By Google In 2020 (Till Date) – Analytics India Magazine
It may be that time of the year when new year resolutions start to fizzle, but Google seems to be just getting started. The tech giant has been building tools and services to bring the benefits of artificial intelligence (AI) to its users. The company has begun upping its arsenal of AI-powered products with a string of new releases this month alone.
Here is a list of the top products launched by Google in January 2020.
Although first introduced in 2014, the latest iterations of sequence-to-sequence (seq2seq) AI models have strengthened the capability of key text-generating tasks, including sentence formation and grammar correction. Google's LaserTagger, which the company has open-sourced, speeds up the text generation process and reduces the chance of errors.
Compared to traditional seq2seq methods, LaserTagger computes predictions up to 100 times faster, making it suitable for real-time applications. Furthermore, it can be plugged into an existing technology stack without adding any noticeable latency on the user side because of its high inference speed. These advantages become even more pronounced when applied at a large scale.
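The speedup comes from casting generation as tagging rather than free-form decoding. The following is a hypothetical, conceptual sketch of that idea (hand-written tags, not LaserTagger's actual API or tag vocabulary):

# Conceptual sketch of edit-tag based text generation: instead of producing
# the output token by token, a model predicts one edit tag per input token.
# Tags below are hand-written for illustration; a real model predicts them.
def apply_edit_tags(tokens, tags):
    """Tags: 'KEEP', 'DELETE', or 'KEEP|<phrase>' (insert <phrase> before the kept token)."""
    output = []
    for token, tag in zip(tokens, tags):
        if tag.startswith("KEEP|"):
            output.append(tag.split("|", 1)[1])   # inserted phrase
            output.append(token)
        elif tag == "KEEP":
            output.append(token)
        # 'DELETE' contributes nothing to the output
    return " ".join(output)

# Sentence fusion example: merge two short sentences into one.
tokens = ["Turing", "was", "born", "in", "1912", ".", "He", "died", "in", "1954", "."]
tags   = ["KEEP", "KEEP", "KEEP", "KEEP", "KEEP", "DELETE",
          "DELETE", "KEEP|and he", "KEEP", "KEEP", "KEEP"]
print(apply_edit_tags(tokens, tags))
# -> "Turing was born in 1912 and he died in 1954 ."

Because the model only chooses among a small set of edit operations instead of generating every word from the full vocabulary, this is the intuition behind how tagging-based models can run much faster than full seq2seq decoders.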
The company has expanded its Coral lineup by unveiling two new Coral AI products: the Coral Dev Board Mini and the Coral Accelerator Module. Announced ahead of the Consumer Electronics Show (CES) this year, the latest additions to the Coral family followed a successful beta run of the platform in October 2019.
The Coral Accelerator Module is a multi-chip package that encapsulates the company's custom-designed Edge Tensor Processing Unit (TPU). The chip inside the Coral Dev Board is designed to execute multiple computer vision models at 30 frames per second, or a single model at over 100 fps. Users of this technology have said that it is easy to integrate into custom PCB designs.
Coral Accelerator Module, a new multi-chip module with Google Edge TPU.
Google has also released the Coral Dev Board Mini, which provides a smaller, lower-power, and more cost-effective alternative to the Coral Dev Board.
Caption: The Coral Dev Board Mini is a cheaper, smaller and lower power version of the Coral Dev Board
Officially announced in March 2019, the Coral products were intended to help developers work more efficiently by reducing their reliance on connections to cloud-based systems by creating AI that works locally.
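For a sense of what "AI that works locally" looks like in practice, here is a minimal sketch of running a compiled TFLite model on a Coral Edge TPU with the standard TensorFlow Lite Python runtime; the model filename is hypothetical and the delegate library name assumes a typical Linux install:

# Minimal sketch: local inference on a Coral Edge TPU via tflite_runtime.
# The model path is a placeholder; the delegate name assumes Linux.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="mobilenet_v2_edgetpu.tflite",              # hypothetical model file
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one frame shaped the way the model expects, e.g. (1, 224, 224, 3) uint8.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                        # runs on the Edge TPU
scores = interpreter.get_tensor(out["index"])
print("top class:", int(np.argmax(scores)))

No cloud connection is involved in the inference itself; the only network dependency is whatever the surrounding application needs.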
Chatbots are one of the hottest trends in AI owing to their tremendous growth in applications, and Google has added to the mix with Meena, a human-like, multi-turn, open-domain chatbot. Meena has been trained in an end-to-end fashion on data mined from public social media conversations, totalling more than 300GB of text. It is also massive in size, with a 2.6B-parameter neural network, and has been trained to minimize the perplexity of the next token.
Furthermore, Google's human evaluation metric, called Sensibleness and Specificity Average (SSA), captures the key elements of a human-like multi-turn conversation, making this chatbot even more versatile. In a blog post, Google claimed that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots.
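Perplexity, the training objective mentioned above, is simply the exponential of the model's average negative log-likelihood per token; a minimal sketch with made-up probabilities:

# Minimal sketch: perplexity = exp(average negative log-likelihood per token).
# The probabilities below are invented for illustration; a real model assigns
# them to the tokens that actually occur next in the conversation.
import math

token_probs = [0.25, 0.10, 0.60, 0.05]   # P(actual next token | context)
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(f"perplexity = {perplexity:.2f}")   # lower means the model is less surprised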
Billed as an important development of Google's Transformer (the novel neural network architecture for language understanding), Reformer is intended to handle context windows of up to 1 million words, all on a single AI accelerator using only 16GB of memory.
Google had first mooted the idea of a new transformer model in a research paper in collaboration with UC Berkeley in 2019. The core idea behind this model is self-attention: the ability to attend to different positions of an input sequence to compute a representation of that sequence, as elaborated in one of our articles.
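As a quick refresher on that building block, here is a generic scaled dot-product self-attention sketch (plain attention, not Reformer's memory-efficient locality-sensitive-hashing variant; the shapes and random weights are made up):

# Generic scaled dot-product self-attention over one sequence (NumPy sketch).
# Reformer's contribution is replacing the full n-by-n score matrix below with
# locality-sensitive hashing so that far longer sequences fit in memory.
import numpy as np

def self_attention(x, wq, wk, wv):
    q, k, v = x @ wq, x @ wk, x @ wv                  # project tokens to Q, K, V
    scores = q @ k.T / np.sqrt(k.shape[-1])           # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # each token attends to all others

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16
x = rng.normal(size=(seq_len, d_model))               # 8 token embeddings
wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)            # (8, 16)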
Today, Reformer can process whole books concurrently, and on a single device at that, thereby exhibiting great potential.
Google has time and again reiterated its commitment to the development of AI. Seeing it as more profound than fire or electricity, it firmly believes that this technology can eliminate many of the constraints we face today.
The company has also delved into research anchored around AI that is spread across a host of sectors, whether it be detecting breast cancer or protecting whales or other endangered species.