Category Archives: Machine Learning

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 – The Register

Microsoft has announced a new application, Dynamics 365 Project Operations, as well as additional AI-driven features for its Dynamics 365 range.

If you are averse to buzzwords, look away now. Microsoft Business Applications President James Phillips announced the new features in a post which promises AI-driven insights, a holistic 360-degree view of a customer, personalized customer experiences across every touchpoint, and real-time actionable insights.

Dynamics 365 is Microsoft's cloud-based suite of business applications covering sales, marketing, customer service, field service, human resources, finance, supply chain management and more. There are even mixed reality offerings for product visualisation and remote assistance.

Dynamics is a growing business for Microsoft, thanks in part to integration with Office 365, even though some of the applications are quirky and awkward to use in places. Licensing is complex too and can be expensive.

Keeping up with what is new is a challenge. If you have a few hours to spare, you could read the 546-page 2019 Release Wave 2 [PDF] document, for features which have mostly been delivered, or the 405-page 2020 Release Wave 1 [PDF], about what is coming from April to September this year.

Many of the new features are small tweaks, but the company is also putting its energy into connecting data, both from internal business sources and from third parties, to drive AI analytics.

The updated Dynamics 365 Customer Insights includes data sources such as demographics and interests, firmographics, market trends, and product and service usage data, says Phillips. AI is also used in new forecasting features in Dynamics 365 Sales and in Dynamics 365 Finance Insights, coming in preview in May.

The company is also introducing a new application, Dynamics 365 Project Operations, with general availability promised for October 1, 2020. This looks like a business-oriented take on project management, with the ability to generate quotes, track progress, allocate resources, and generate invoices.

Microsoft already offers project management through its Project products, though this is part of Office rather than Dynamics. What can you do with Project Operations that you could not do before with a combination of Project and Dynamics 365?

There is not a lot of detail in the overview, but rest assured that it has AI-powered business insights and seamless interoperability with Microsoft Teams, so it must be great, right? More will no doubt be revealed at the May Business Applications Summit in Dallas, Texas.

See original here:
Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 - The Register

Grok combines Machine Learning and the Human Brain to build smarter AIOps – Diginomica

A few weeks ago I wrote a piece here about Moogsoft, which has been making waves in the service assurance space by applying artificial intelligence and machine learning to the arcane task of keeping critical IT up and running and lessening the business impact of service interruptions. It's a hot area for startups, and I've since gotten article pitches from several other AIOps firms at varying levels of development.

The most intriguing of these is a company called Grok, which was formed by a partnership between Numenta and Avik Partners. Numenta is a pioneering AI research firm co-founded by Jeff Hawkins and Donna Dubinsky, who are famous for having started two classic mobile computing companies, Palm and Handspring. Avik is a company formed by brothers Casey and Josh Kindiger, two veteran entrepreneurs who have successfully started and grown multiple technology companies in service assurance and automation over the past two decades, most recently Resolve Systems.

Josh Kindiger told me in a telephone interview how the partnership came about:

Numenta is primarily a research entity started by Jeff and Donna about 15 years ago to support Jeff's ideas about the intersection of neuroscience and data science. About five years ago, they developed an algorithm called HTM and a product called Grok for AWS, which monitors servers on a network for anomalies. They weren't interested in developing a company around it, but we came along and saw a way to link our deep domain experience in the service management and automation areas with their technology. So, we licensed the name and the technology and built part of our Grok AIOps platform around it.

Jeff Hawkins has spent most of his post-Palm and Handspring years trying to figure out how the human brain works and then reverse engineering that knowledge into structures that machines can replicate. His model or theory, called hierarchical temporal memory (HTM), was originally described in his 2004 book On Intelligence written with Sandra Blakeslee. HTM is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. For a little light reading, I recommend a peer-reviewed paper called A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.

Grok AIOps also uses traditional machine learning, alongside HTM. Said Kindiger:

When I came in, the focus was purely on anomaly detection and I immediately engaged with a lot of my old customers: large Fortune 500 companies and very large service providers. I quickly found out that while anomaly detection was extremely important, that first signal wasn't going to be enough. So, we transformed Grok into a platform. And essentially what we do is we apply the correct algorithm, whether it's HTM or something else, to the proper stream: events, logs and performance metrics. Grok can enable predictive, self-healing operations within minutes.

The Grok AIOps platform uses multiple layers of intelligence to identify issues and support their resolution:

Anomaly detection

The HTM algorithm has proven exceptionally good at detecting and predicting anomalies and reducing noise, often up to 90%, by providing the critical context needed to identify incidents before they happen. It can detect anomalies in signals beyond low and high thresholds, such as signal frequency changes that reflect changes in the behavior of the underlying systems. Said Kindiger:

We believe HTM is the leading anomaly detection engine in the market. In fact, it has consistently been the best performing anomaly detection algorithm in the industry resulting in less noise, less false positives and more accurate detection. It is not only best at detecting an anomaly with the smallest amount of noise but it also scales, which is the biggest challenge.
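HTM itself is proprietary to Numenta and far beyond a short snippet, but the claim above about catching frequency changes that never cross a low or high threshold can be illustrated with a toy detector that tracks a signal's zero-crossing rate. This is a statistical stand-in for illustration only, not HTM:

```python
import numpy as np

def freq_change_scores(signal: np.ndarray, window: int = 100) -> np.ndarray:
    """Toy frequency-change detector: track the zero-crossing rate in a
    sliding window and score how far it drifts from an early baseline.
    Illustrative only; HTM learns temporal patterns rather than statistics."""
    zcr = np.array([
        np.mean(np.diff(np.sign(signal[t - window:t])) != 0)
        for t in range(window, len(signal))
    ])
    base_mean, base_std = zcr[:100].mean(), zcr[:100].std() + 1e-9
    return np.abs(zcr - base_mean) / base_std

# A metric whose frequency shifts midway; its amplitude never exceeds 1,
# so no simple low/high threshold would ever fire.
t = np.arange(2000)
sig = np.where(t < 1000, np.sin(0.1 * t), np.sin(0.5 * t))
scores = freq_change_scores(sig)
print(scores[:5].round(1), scores[-5:].round(1))  # small early, large after the shift
```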

Anomaly clustering

To help reduce noise, Grok clusters anomalies that belong together through the same event or cause.

Event and log clustering

Grok ingests all the events and logs from the integrated monitors and then applies event- and log-clustering algorithms, including pattern recognition and dynamic time warping, which also reduce noise.
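The article doesn't share Grok's implementation, but dynamic time warping (DTW) is a standard way to group time series whose bursts are shifted in time. Below is a minimal sketch; the toy series, the threshold eps, and the greedy grouping are all invented for illustration:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy event-rate series from three monitors; two share a time-shifted burst.
series = [
    np.array([0, 1, 5, 9, 5, 1, 0], dtype=float),
    np.array([0, 0, 1, 5, 9, 5, 1], dtype=float),  # same burst, delayed
    np.array([3, 3, 3, 3, 3, 3, 3], dtype=float),  # steady noise source
]

# Greedy threshold clustering: series within eps DTW distance share a cluster.
eps, clusters = 4.0, []
for s in series:
    for c in clusters:
        if dtw_distance(s, c[0]) < eps:
            c.append(s)
            break
    else:
        clusters.append([s])

print(f"{len(clusters)} clusters")  # the two shifted bursts land together
```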

IT operations have become almost impossible for humans alone to manage. Many companies struggle to meet the high demand due to increased cloud complexity. Distributed apps make it difficult to track where problems occur during an IT incident. Every minute of downtime directly impacts the bottom line.

In this environment, the relatively new solution to reduce this burden of IT management, dubbed AIOps, looks like a much needed lifeline to stay afloat. AIOps translates to "Algorithmic IT Operations" and its premise is that algorithms, not humans or traditional statistics, will help to make smarter IT decisions and help ensure application efficiency. AIOps platforms reduce the need for human intervention by using ML to set alerts and automation to resolve issues. Over time, AIOps platforms can learn patterns of behavior within distributed cloud systems and predict disasters before they happen.

Grok detects latent issues with cloud apps and services and triggers automations to troubleshoot these problems before requiring further human intervention. Its technology is solid, its owners have lots of experience in the service assurance and automation spaces, and who can resist the story of the first commercial use of an algorithm modeled on the human brain?

View post:
Grok combines Machine Learning and the Human Brain to build smarter AIOps - Diginomica

Removing the robot factor from AI – Gigabit Magazine – Technology News, Magazine and Website

AI and machine learning have something of an image problem.

They've never been quite so widely discussed as topics, or, arguably, their potential so widely debated. This is, to some extent, part of the problem. Artificial Intelligence can, still, be anything, achieve anything. But until its results are put into practice for people, it remains a misunderstood concept, especially to the layperson.

While well-established industry thought leaders are rightly championing the fact that AI has the potential to be transformative and capable of a wide range of solutions, the lack of context for most people is fuelling fears that it is simply going to replace people's roles and take over tasks, wholesale. It also ignores the fact that AI applications have been quietly assisting people's jobs, in a light-touch manner, for some time now and people are still in those roles.

Many people are imagining AI to be something it is not. Given the technology is still in a fast-development phase, some people think it is helpful to consider the tech as a type of plug-and-play, black-box technology. Some believe this helps people to put it into the context of how it will work and what it will deliver for businesses. In our opinion, this limits a true understanding of its potential and what it could be delivering for companies day in, day out.

The hyperbole is also not helping. The statements "we use AI" and "our product's AI-driven" have already become well-worn by enthusiastic salespeople and marketeers. While there's a great sales case to be made by that exciting assertion, it's rarely speaking the truth about the situation. What is really meant by the current use of artificial intelligence? Arguably, AI is not yet a thing in its own right; i.e. the capability of machines to be able to do the things which people do instinctively, which machines instinctively do not. Instead of being excited by hearing the phrase "we do AI!", people should see it as a red flag to dig deeper into the technology and the AI capability in question.

Machine learning, similarly, doesn't benefit from sci-fi associations or big sales-patter bravado. In its simplest form, while machine learning sounds like a defined and independent process, it is actually a technique to deliver AI functions. It's maths, essentially, applied alongside data, processing power and technology to deliver an AI capability. Machine learning models don't execute actions or do anything themselves, unless people put them to use. They are still human tools, to be deployed by someone to undertake a specific action.

The tools and models are only as good as the human knowledge and skills programming them. People, especially in the legal sectors autologyx works with, are smart, adaptable and vastly knowledgeable. They can quickly shift from one case to another, and have their own methods and processes of approaching problem solving in the workplace. Where AI is coming in to lift the load is on lengthy, detailed, and highly repetitive tasks such as contract renewals. Humans can get understandably bored when reviewing highly repetitive, vast volumes of contracts to change just a few clauses and update the document. A machine learning solution does not get bored, and performs consistently with a high degree of accuracy, freeing those legal teams up to work on more interesting, varied, or complicated casework.

Together, AI, machine learning and automation are the arms and armour businesses across a range of sectors need to acquire to adapt and continue to compete in the future. The future of the legal industry, for instance, is still a human one where knowledge of people will continue to be an asset. AI in that sector is more focused on codifying and leveraging that intelligence and while the machine and AI models learn and grow from people, so those people will continue to grow and expand their knowledge within the sector too. Today, AI and ML technologies are only as good as the people power programming them.

As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, put it: "AI is neither good nor evil. It's a tool. A technology for us to use. How we choose to apply it is entirely up to us."

By Ben Stoneham, founder and CEO, autologyx

Follow this link:
Removing the robot factor from AI - Gigabit Magazine - Technology News, Magazine and Website

How Will Machine Learning Serve the Hotel Industry in 2020 and Beyond? – CIOReview

Machine learning will help the hotel industry to remain tech-savvy and also help them to save money, improve service, and grow more efficient.

Fremont, CA: Artificial intelligence (AI) implementation grew tremendously last year alone, such that any business that does not consider the implications of machine learning (ML) will find itself in multiple binds. Companies must ask themselves how they will utilize machine learning to reap its benefits while staying in business, and hotels are no exception. Playing catch-up with this technology is potentially dangerous, as companies may act only once they realize their competition is outperforming them. And while hotels may believe that robotic housekeepers and facial-recognition kiosks are the main applications of ML, it can do much more. Here is how ML serves the hotel industry while helping save money, improve service, and grow more efficient.

Energy and water are two of the most important resources for running a hotel. Who would say no to a technology that controls the use of these two critical resources without affecting guest comfort? Every dollar saved on energy and water can impact the bottom line of the business in a big way. Hotels can track actual energy consumption against predictive models, allowing them to manage performance against competitors. Hotel brands can also link in-room energy to the property management system (PMS) so that when a room is empty, the heater and other electrical appliances turn off automatically.

ML helps brands hire suitable candidates, including highly qualified candidates who might have been overlooked for not fulfilling traditional expectations; ML algorithms can be used to create gamification-based assessments that test candidates against recruiting personas. Further, ML maximizes the value of premium inventory and increases guest satisfaction by offering guests personalized upgrades, based on their previous stays, at a price the guest is ready to pay during the booking and pre-arrival period. Using ML technology, hotel brands can create offers at any point during the guest stay, including at the front desk. Thus, the future of sustainability in the hospitality industry relies on ML.

Link:
How Will Machine Learning Serve the Hotel Industry in 2020 and Beyond? - CIOReview

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data that we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can utilize causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is randomized controlled trials (RCTs) or A/B tests. In RCTs, we can split a population of individuals into two groups: treatment and control, administering treatment to one group and nothing (or a placebo) to the other and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data, studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x, an outcome y, and a covariate z. We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: a higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e. P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the equation (or hypothetical model):

y = 1x + 5z + ε    (1)

where ε is the error, that is, the deviation from the expected value of y given values of x and z, which depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
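As a concrete illustration, here is a minimal simulation of this hypothetical model in Python. The sample size, noise level, and seed are arbitrary choices, so the numbers will not reproduce the article's figures exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.uniform(0, 1, n)        # covariate: memory value
x = rng.binomial(1, 1 - z)      # treatment: P(x=1) = 1 - z
eps = rng.normal(0, 0.1, n)     # error term epsilon
y = 1 * x + 5 * z + eps         # equation (1); the true effect of x is 1

# Naive estimate: difference in mean response time between the two groups.
naive = y[x == 1].mean() - y[x == 0].mean()
print(round(naive, 3))  # far from 1, because z confounds treatment and outcome
```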

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x·Y_i(1) + (1-x)·Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

ATE = E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to the coefficient in front of x in equation (1).

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of our hypothetical servers with x=1, we get an estimate of the ATE of 0.177. This happens because our treatment and control groups are not inherently directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x=1 | z=z_i), z_i ∈ [0,1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression, which models the score as e_i = 1 / (1 + exp(-(β_0 + β_1 z_i))), with the coefficients β_0 and β_1 fit to the observed treatment assignments.

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
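Continuing the simulation snippet above, here is a minimal propensity-score-matching sketch using scikit-learn. The variable names carry over from that snippet; note that matching treated units to their nearest controls strictly estimates the effect on the treated, which equals the ATE here because the simulated effect is the same for every unit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# z, x, y come from the simulation snippet above.
Z = z.reshape(-1, 1)
ps = LogisticRegression().fit(Z, x).predict_proba(Z)[:, 1]  # propensity scores

# Match each treated unit to the control unit with the nearest score.
treated = np.where(x == 1)[0]
control = np.where(x == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = control[idx.ravel()]

print(round((y[treated] - y[matched]).mean(), 3))  # close to the true effect of 1
```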

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, this means an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the why before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

Read more:
Overview of causal inference in machine learning - Ericsson

LinkShadow to Showcase Machine Learning Based Threat Analytics Technology at RSA Conference 2020 – PRNewswire

ATHENS, Ga., Feb. 7, 2020 /PRNewswire/ -- LinkShadow, Next-Generation Cybersecurity Analytics, announces its presence at the prestigious RSA Conference 2020 in San Francisco from February 24-28.

LinkShadow offers a wide spectrum of cybersecurity solutions that focus on how to overcome the critical challenges of this era of smart cyberattacks. These products include ThreatScore Quadrant, Identity Intelligence, Asset AutoDiscovery, TrafficScene Visualizer & AttackScape Viewer, CXO Dashboards and Threat Shadow. When combined with state-of-the-art machine-learning capabilities, LinkShadow delivers solutions which include Behavioral Analytics, Threat Intelligence, Insider Threat Management, Privileged Users Analytics, Network Security Optimization, Application Security Visibility, Risk Scoring and Prioritization, Machine Learning and Statistical Analysis and, finally, Anomaly Detection and Predictive Analytics.

At RSA Conference, LinkShadow expert teams will be sharing valuable insights on how this dynamic platform can empower organizations and help improve their defenses against advanced cyberattacks.

Duncan Hume, Vice President USA, LinkShadow, commented that "Undoubtedly RSA Conference is the perfect platform to showcase this unique technology, and we plan to make the best of this opportunity. While you are there, meet the technical teams for a demo session and learn how LinkShadow's best-in-class threat hunting capabilities powered by intense and extensive machine learning algorithms can help organizations become cyber-resilient."

To schedule a personalized demo or fix a meeting at LinkShadow - Booth No. 5487, North Hall, register now: https://www.linkshadow.com/events/RSA-Conference

About LinkShadow

LinkShadow is a U.S.-registered company with regional offices in the Middle East. It is pioneered by a team of highly skilled solution architects, product specialists and programmers with a vision to formulate a next-generation cybersecurity solution that provides unparalleled detection of even the most sophisticated threats. LinkShadow was built with the vision of enhancing organizations' defenses against advanced cyberattacks, zero-day malware and ransomware, while simultaneously gaining rapid insight into the effectiveness of their existing security investments. For more information, visit http://www.linkshadow.com.

Raji John | Head of Client Services, eMediaLink | T: +971 4 279 4091 | E: raji@emedialinkme.net

SOURCE LinkShadow

Read more here:
LinkShadow to Showcase Machine Learning Based Threat Analytics Technology at RSA Conference 2020 - PRNewswire

Machine Learning Patentability In 2019: 5 Cases Analyzed And Lessons Learned Part 1 – Mondaq News Alerts

This article is the first of a five-part series of articles dealing with what patentability of machine learning looks like in 2019. This article begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four following articles will describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases discussed deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

The US patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 focuses on several things, including whether the invention is classified as patent-eligible subject matter. As a general rule, an invention is considered to be patent-eligible subject matter if it "falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter)."1 This, on its own, is an easy hurdle to overcome. However, there are exceptions (judicial exceptions). These include (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face 101 issues based on the "abstract idea" exception, and not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; in the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and in the Federal Circuit on appeals. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the "abstract idea" category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an "abstract idea" in the district courts and the Federal Circuit, however, the courts apply the "Alice/Mayo" test, and not the 2019 PEG. The definition of "abstract idea" was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow.2

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea.3 Previously, the USPTO synthesized and organized, for its examiners to compare to an applicant's claims, the facts and holdings of each Federal Circuit case that deals with section 101. However, the large and still-growing number of cases, and the confusion arising from "similar subject matter [being] described both as abstract and not abstract in different cases,"4 led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure.5 The new examination structure, described below, is more patent-applicant friendly than the prior structure,6 thereby having the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim "recites" a judicial exception, including laws of nature, natural phenomena, and abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) "mathematical concepts" (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) "certain methods of organizing human activity" (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) "mental processes" (concepts performed in the human mind: encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully-encompassing. The Examiners are directed that "[i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea," they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation "recites" one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is "directed to" the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of "the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception."

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine if there is an "inventive concept": that "the additional elements recited in the claims provide[] 'significantly more' than the recited judicial exception." This step attempts to distinguish between whether the elements combined with the judicial exception (1) "add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field"; or alternatively (2) "simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality." Furthermore, the 2019 PEG indicates that where "an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible."

In summary, the 2019 PEG provides an approach for the Examiners to apply, involving steps and prongs, to determine if a claim is patent-ineligible based on being an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining if an exception applies (e.g., abstract idea); then, if an exception applies, proceeds to determining if an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies the 2019 PEG in its other Section-101 decisions, including CBM reviews and PGRs.7 However, the 2019 PEG only applies to the Examiners and PTAB (the Examiners and the PTAB are both part of the USPTO), and does not apply to district courts or to the Federal Circuit.

Case 1: Appeal 2018-0074438 (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether there is likely to be equipment failure in a system. The Examiner contended that the claims fit into the judicial exception of "abstract idea" because "monitoring the operation of machines is a fundamental economic practice." The Examiner explained that "the limitations in the claims that set forth the abstract idea are: 'a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data.'" The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find 'monitoring the operation of machines,' as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like "neural networks" in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts.9 For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk, and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTABstated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites 'an output device that transforms the composite prediction output into human-readable form.'

. . . .

In other words, the 'classifying' steps of claim 1 and 'modules' of claim 8, when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned 'classifying' steps recited in claim 1 and function of the 'modules' recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because "the specific mathematical algorithm or formula is not explicitly recited in the claims." Requiring that a mathematical concept be "explicitly recited" seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are "explicitly recited"; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea, but are integrated into a practical application. The PTAB stated:

"Appellant's claims address a problem specificallyusing several artificial intelligence classification technologiesto monitor the operation of machines and to predict preventativemaintenance needs and equipment failure."

The PTAB seems to say that because the claims solve a problem using the abstract idea, they are integrated into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention. The opinion actually does not even specifically mention that there are additional elements. Instead, the PTAB's conclusion might have been that, based on a totality of the circumstances, it believed that the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements "in some other meaningful way"), but did not expressly do so.

This case illustrates:

(1) the monitoring of machines was held not to be an abstract idea, in this context;
(2) the recitation of AI components such as "neural networks" in the claims did not seem to hurt for arguing any of the three categories of abstract ideas;
(3) the complexity of the algorithms implemented can help with the "mental processes" category of abstract ideas; and
(4) the PTAB might not always explicitly state how the rule for "practical application" applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101-rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the "mental processes" category of abstract ideas, on an application for a "probabilistic programming compiler" that performs the seemingly 101-vulnerable function of "generat[ing] data-parallel inference code."

Footnotes

1 MPEP 2106.04.

2 Accordingly, the USPTO must follow both the Federal Circuit's case law that interprets Title 35 of the United States Code, and must follow the 2019 PEG. The 2019 PEG is not the same as the Federal Circuit's standard; the 2019 PEG does not involve distinguishing case law (the USPTO, in its 2019 PEG, has declared the Federal Circuit's case law to be too clouded to be practically applied by the Examiners. 84 Fed. Reg. 52.). The USPTO practically could not, and actually did not, synthesize the holdings of each of the Federal Circuit opinions regarding Section 101 into the standard of the 2019 PEG. Therefore, logically, the only way to ensure that the 2019 PEG does not impinge on the statutory rights (provided by 35 U.S.C.) of patent applicants, as interpreted by the Federal Circuit, is for the 2019 PEG to define the scope of the 101 judicial exceptions more narrowly than the statutory requirement. However, assuming there are instances where the 2019 PEG defines the 101 judicial exceptions more broadly than the statutory standard (if the USPTO rejects claims that the Federal Circuit would not have), that patent applicant may have additional arguments for eligibility.

3 84 Fed. Reg. 50, 52.

4 Id.

5 The USPTO also, on October 17, 2019, issued an update to the 2019 PEG. The October update is consistent with the 2019 PEG, and merely provides clarification to some of the terms used in the 2019 PEG, and clarification as to the scope of the 2019 PEG. October 2019 Update: Subject Matter Eligibility (October 17, 2019), https://www.uspto.gov/sites/default/files/documents/peg_oct_2019_update.pdf.

6See "Frequently Asked Questions (FAQs) on the 2019Revised Patent Subject Matter Eligibility Guidance ('2019PEG')", C-6 (https://www.uspto.gov/sites/default/files/documents/faqs_on_2019peg_20190107.pdf)("Any claim considered patent eligible under the currentversion of the MPEP and subsequent guidance should be consideredpatent eligible under the 2019 PEG. Because the claim at issue wasconsidered eligible under the current version of the MPEP, theExaminer should not make a rejection under 101 in view ofthe 2019 PEG.").

7 See American Express v. Signature Systems, CBM2018-00035 (Oct. 30, 2019); Supercell Oy v. Gree, Inc., PGR2018-00061 (Oct. 15, 2019).

8 https://e-foia.uspto.gov/Foia/RetrievePdf?system=BPAI&flNm=fd2018007443-10-10-2019-0.

9Notably, the "mental process" category and notthe "certain methods of organizing human activity"category is the one that focuses on the complexity of theprocess. Furthermore, as shown in the following paragraph, the"mental process" category was separately discussed by thePTAB, again mentioning the algorithms. Accordingly, the PTAB islikely not mentioning the algorithms for the purpose of describingthe complexity of the method.

View original post here:
Machine Learning Patentability In 2019: 5 Cases Analyzed And Lessons Learned Part 1 - Mondaq News Alerts

European Central Bank Partners with Digital Innovation Platform Reply to Offer AI and Machine Learning Coding Marathon – Crowdfund Insider

The European Central Bank (ECB) has partnered with Reply, a platform focused on digital innovation, in order to offer a 48-hour coding marathon, which will focus on teaching participants how to apply the latest artificial intelligence (AI) and machine learning (ML) algorithms.

The marathon is scheduled to take place during the final days of February 2020 at the ECB in Frankfurt, Germany. The supervisory data hackathon will have over 80 participants from the ECB, Reply and various other organizations.

Participants will be using AI and ML techniques to gain a better understanding and quicker insights into the large amounts of supervisory data gathered by the ECB from various banks and other financial institutions via regular reporting methods for risk analysis purposes.

Program participants will have to turn in projects in the areas of data quality, interlinkages in supervisory reporting and risk indicators, before the event takes place. The best submissions will be worked on for a 48-hour period by multidisciplinary teams.

Last month, the Bank of England (BoE) and the UK's financial regulator, the Financial Conduct Authority (FCA), announced that they would be running a public/private forum that would cover the relevant technical and public policy issues related to bank adoption of artificial intelligence (AI) and machine learning (ML) technologies and software.

A survey conducted by the BoE last year revealed that ML tools are being used in around two-thirds, or 66%, of the UK's financial institutions, with the technology expected to enter a new stage of development and maturity that could lead to more advanced deployments in the future.

Read the original post:
European Central Bank Partners with Digital Innovation Platform Reply to Offer AI and Machine Learning Coding Marathon - Crowdfund Insider

New cybersecurity system protects networks with LIDAR, no not that LiDAR – C4ISRNet

When it comes to identifying early cyber threats, it's important to have laser-like precision. Mapping out a threat environment can be done with a range of approaches, and a team of researchers from Purdue University created a new system for just such applications. They are calling that approach LIDAR, or lifelong, intelligent, diverse, agile and robust.

This is not to be confused with LiDAR, for Light Detection and Ranging, a kind of remote sensing system that uses laser pulses to measure distances from the sensor. The light-specific LiDAR, sometimes also written LIDAR, is a valuable tool for remote sensing and mapping, and features prominently in the awareness tools of self-driving vehicles.

Purdue's LIDAR, instead, is a kind of architecture for network security. It can adapt to threats, thanks in part to its ability to learn in three ways. These include supervised machine learning, where an algorithm looks at unusual features in the system and compares them to known attacks. An unsupervised machine learning component looks through the whole system for anything unusual, not just unusual features that resemble attacks. These two machine-learning components are mediated by a rules-based supervisor.

"One of the fascinating things about LIDAR is that the rule-based learning component really serves as the brain for the operation," said Aly El Gamal, an assistant professor of electrical and computer engineering in Purdue's College of Engineering. "That component takes the information from the other two parts and decides the validity of a potential attack and necessary steps to move forward."

By knowing existing attacks, matching to detected threats, and learning from experience, this LIDAR system can potentially offer a long-term solution based on how the machines themselves become more capable over time.
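Purdue's implementation isn't public, but the three-part architecture described above can be sketched generically: a supervised classifier trained on labeled attacks, an unsupervised outlier detector, and a rule-based supervisor that arbitrates between them. Everything below (features, thresholds, actions) is invented for illustration and is not the researchers' actual system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(1)

# Toy feature vectors for network flows; label 1 marks known-attack traffic.
X_train = rng.normal(0, 1, (500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 2).astype(int)

supervised = RandomForestClassifier().fit(X_train, y_train)   # known attacks
unsupervised = IsolationForest(random_state=1).fit(X_train)   # anything unusual

def rule_based_supervisor(flow: np.ndarray) -> str:
    """Toy stand-in for LIDAR's rule-based 'brain': combine both signals."""
    known = supervised.predict_proba(flow)[0, 1]
    novel = unsupervised.predict(flow)[0] == -1   # -1 means outlier
    if known > 0.8:
        return "block: matches known attack"
    if novel:
        return "quarantine: anomalous, escalate for review"
    return "allow"

print(rule_based_supervisor(rng.normal(0, 1, (1, 4))))
```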

Aiding the security approach, said the researchers, is the use of a novel curiosity-driven honeypot, which can, like a carnivorous pitcher plant, lure attackers and then trap them where they will do no harm. Once attackers are trapped, it is possible the learning algorithm can incorporate new information about the threat, and adapt to prevent future attacks from making it through.

The research team behind this LIDAR approach is looking to patent the technology for commercialization. In the process, they may also want to settle on a less-confusing moniker. Otherwise, we may stumble into a future where users securing a network of LiDAR sensors with LIDAR have to enact an entire "Who's on First?" routine every time they update their cybersecurity.

See more here:
New cybersecurity system protects networks with LIDAR, no not that LiDAR - C4ISRNet

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if the house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on the last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma and, in doing so, aims to do for the art world what Zillow did for real estate.

If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market? Waters said. Its similar for art too. We want to price the entire market and give transparency.

Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super-famous works, lesser-known items, those privately held and artworks publicly displayed. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auctions. Because this model includes an artist's entire collection, and not just those works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces an old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones that people are the most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

"With a limited data set, it's just harder to generalize," Waters said.

We talked to Waters about how he compiled, cleaned and created Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have ever been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the data set range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the eventual price.

"It was unbelievable," Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

"I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven."

A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, market conditions, the condition of the work and changes in availability make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like which artist painted which painting, on what medium, and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dali famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.

"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.

Other structured data focuses on the artist herself, denoting details like when the creator was born or if they were alive at the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in her paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
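A minimal sketch of the kind of title analysis described, using nothing more than word counting; the titles below are hypothetical stand-ins, and Artnome's actual NLP system is surely richer:

```python
from collections import Counter
import re

# Hypothetical titles standing in for one artist's catalogue.
titles = [
    "White Flower", "Black Iris", "White Canadian Barn", "Red Hills",
    "White Rose with Larkspur",
]

# Count word frequency across all titles to surface recurring terms.
words = Counter(w for t in titles for w in re.findall(r"[a-z]+", t.lower()))
print(words.most_common(3))  # e.g. [('white', 3), ...] flags a recurring word
```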

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist, are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data, which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, number of corner points and whether faces are pictured.

Waters created a pre-trained convolutional neural network to look for these variables, modeling the project after the ResNet 50 model, which famously won the ImageNet Large Scale Visual Recognition Challenge in 2015 by classifying images drawn from ImageNet's roughly 14 million labeled examples more accurately than any prior entrant.

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an "edge score."

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent Van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the painting, it's not hard to see you're looking at self-portraits of Van Gogh, by Van Gogh.

Including unstructured data in Artnome's system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.
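Artnome's actual features come from a pretrained ResNet-style network, but a crude sketch conveys the idea of mining a painting for unstructured signals such as a dominant color and an edge-based complexity score. The file name is hypothetical and these hand-rolled features are stand-ins, not the real pipeline:

```python
from PIL import Image
import numpy as np

def image_features(path: str) -> dict:
    """Crude stand-ins for the unstructured features described:
    a dominant color and an edge-density 'complexity' score."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)

    # Dominant color: mean channel values (a real system might cluster pixels).
    dominant = img.reshape(-1, 3).mean(axis=0)

    # Edge score: average gradient magnitude of the grayscale image.
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_score = float(np.hypot(gx, gy).mean())

    return {"dominant_rgb": dominant.round(1).tolist(), "edge_score": edge_score}

print(image_features("okeeffe_white_flower.jpg"))  # hypothetical file
```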

"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes both paintings with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system discovers, Waters said, adding an error to its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.

The model also lacks information on the condition of the painting, which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the sale prices listed. He's also had to make sure all sale prices are listed in dollars, converting those listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.
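A minimal sketch of that standardization step. The premium rates and exchange rates below are invented placeholders; as the article notes, real commissions vary by house and over time:

```python
# Hypothetical commission schedule and FX rates, for illustration only.
BUYER_PREMIUM = {"christies": 0.25, "sothebys": 0.25, "phillips": 0.26}
USD_PER = {"USD": 1.0, "GBP": 1.29, "EUR": 1.10}

def standardized_price(hammer: float, currency: str, house: str) -> float:
    """Convert a hammer price into the total USD paid, commission included."""
    total = hammer * (1 + BUYER_PREMIUM[house])
    return total * USD_PER[currency]

print(standardized_price(100_000, "GBP", "christies"))  # 161250.0 USD
```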

"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, information is input into the machine learning system, which Waters structured as a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest model keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that basically decides the most important aspects of a painting.

Waters doesn't weight the data he puts into the model. Instead, he lets the machine learning system tell him what's important, with the model weighting factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black box estimator," Waters said.
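A hedged sketch of this setup with scikit-learn: a random forest regressor fit on toy stand-in features, followed by permutation importance so the model, not the modeler, reports which inputs matter. The feature names and data are invented and do not reflect Artnome's actual inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)

# Toy stand-ins for features: [canvas_area, artist_alive, sp500, edge_score]
X = rng.normal(0, 1, (1000, 4))
log_price = 2.0 * X[:, 2] + 0.5 * X[:, 0] + rng.normal(0, 0.1, 1000)

model = RandomForestRegressor(n_estimators=200, random_state=7).fit(X, log_price)

# Permutation importance: shuffle one feature at a time and measure the
# drop in score; features whose shuffling hurts most matter most.
imp = permutation_importance(model, X, log_price, n_repeats=5, random_state=7)
for name, score in zip(["area", "alive", "sp500", "edge"], imp.importances_mean):
    print(f"{name}: {score:.2f}")  # sp500 should dominate in this toy setup
```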

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important this data set and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.

"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."

Original post:
Artnome Wants to Predict the Price of a Masterpiece. The Problem? There's Only One. - Built In