Category Archives: Artificial Intelligence

Tying Artificial intelligence and web scraping together [Q&A] – BetaNews

Artificial intelligence (AI) and machine learning (ML) seem to have piqued the interest of automated data collection providers. While web scraping has been around for some time, AI/ML implementations have appeared in the line of sight of providers only recently.

Aleksandras Šulenko, Product Owner at Oxylabs.io, who has been working with these solutions for several years, shares his insights on the importance of artificial intelligence, machine learning, and web scraping.

BN: How has the implementation of AI/ML solutions changed the way you approach development?

AS: AI/ML has an interesting work-payoff ratio. Good models can sometimes take months to write and develop. Until then, you don't really have anything. Dedicated scrapers or parsers, on the other hand, can take up to a day or two. When you have an ML model, however, maintaining it takes a lot less time for the amount of work it covers.

So, there's always a choice. You can build dedicated scrapers and parsers, which will take significant amounts of time and effort to maintain once they start stacking up. The other choice is to have "nothing" for a significant amount of time, but a brilliant solution later on, which will save you tons of time and effort.

There's some theoretical point where developing custom solutions is no longer worth it. Unfortunately, there's no mathematical formula to arrive at the correct answer. You have to make a decision when all the repetitive tasks become too much of a drain on resources.

BN: Have these solutions had a visible impact on the deliverability and overall viability of the project?

AS: Getting started with machine learning is tough, though. It's still, comparatively speaking, a niche specialization. In other words, you won't find many developers that dabble in ML, and knowing how hard it can be to find one for any discipline, it's definitely a tough river to cross.

Yet, if the business approach to scraping is based on a long-term vision, ML will definitely come in handy sometime down the road. Every good vision has scaling in it, and with scaling come repetitive tasks. These are best handled with machine learning.

One achievement we're proud of, our Adaptive Parser, is a great example. It was once almost unthinkable that a machine learning model could be of such high benefit. Now the solution can deliver parsed results from a multitude of e-commerce product pages, irrespective of the differences between them or any changes that happen over time. Such a solution is completely irreplaceable.

BN: In a previous interview, you've mentioned the importance of making things more user-friendly for web scraping solutions. Is there any particular reason you would recommend moving development towards no-code implementations?

AS: Even companies that have large IT departments may have issues with integration. Developers are almost always busy. Taking time out of their schedules for integration purposes is tough. Most end-users of the data Scraper APIs, after all, aren't tech-savvy.

Additionally, the departments that need scraping the most, such as marketing, data analytics, etc., might not have enough sway in deciding developers' roadmaps. As such, even relatively small hurdles can have an outsized impact. Scrapers should now be developed with a non-technical user in mind.

There should be plenty of visuals that allow for a simplified construction of workflows, with a dashboard that's used to deliver information clearly. Scraping is becoming something done by everyone.

BN: What do you think lies in the future of scraping? Will websites become increasingly protective of their data, or will they eventually forego most anti-scraping sentiment?

AS: There are two answers I can give. One is "more of the same". A boring one, surely, but it's inevitable. Delving deeper into the scaling and proliferation of web scraping isn't as fun as the second answer -- the legal context.

Currently, it seems as if our position in the industry isn't perfectly decided. Case law forms the basis of how we think and approach web scraping. Yet, it all might change on a whim. We're closely monitoring the developments due to the inherent fragility of the situation.

There's a possibility that companies will realize the value of their data and start selling it on third-party marketplaces. It would reduce the value of web scraping as a whole, as you could simply acquire what you need for a small price. Most businesses, after all, need the data and the insights, not web scraping. It's a means to an end.

There's a lot of potential in the grand vision of Web 3.0 -- the initiative to make the whole Web interconnected and machine-readable. If this vision came to life, the whole data gathering landscape would be vastly transformed: the Web would become much easier to explore and organize, parsing would become a thing of the past, and webmasters would get used to the idea of their data being consumed by non-human actors.

Finally, I think user-friendliness will be the focus in the future. I don't mean just the no-code part of scraping. A large part of getting data is exploration -- finding where and how it's stored and getting to it. Customers will often formulate an abstract request and developers will follow up with methods to acquire what is needed.

In the future, I expect, the exploration phase will be much simpler. Maybe we'll be able to take the abstract requests and turn them into something actionable through an interface. In the end, web scraping is breaking away from its shell of being something code-ridden or hard to understand and evolving into a daily activity for everyone.

Photo Credit: Photon photo/Shutterstock


Apple is Using Artificial Intelligence and Music to Win the Music App Arms Race – The Debrief

Apple's acquisition of the London-based company AI Music made headlines recently in the world of business, as well as artificial intelligence (AI). For years, the company has been using artificial intelligence and music to develop next-level customized playlists for listeners. The interface between music and artificial intelligence that AI Music has to offer may provide a significant boost to Apple's presence within the commercial music industry, and could even help it outperform its competition within the music app arms race.

The relationship between music and artificial intelligence spans several decades, originating in 1960 when Russian researcher R. Kh. Zaripov published the first algorithmic music composed on the Ural-1 computer. Since then, advancements in AI systems have allowed them to show real promise for music composition: in 1997, an AI program called Experiments in Music Intelligence (EMI) seemed to outperform a human composer when composing a piece imitating the style of Bach. Only last year did an artificial intelligence program help to finish Ludwig van Beethoven's last symphony, using his other compositions as data to make the piece sound similar to the rest of his works.

Currently, there are many universities studying artificial intelligence and music, including Carnegie Mellon University, Princeton University, and Queen Mary University in London. All of these universities use different AI programs, but they all study the real-time composition and performance of music created by AI. Studying this process can give insights into the science of musical composition, as well as the psychological effects of music on our brains.

Artificial intelligence not only creates impressive music but can also help create fresh and engaging music playlists for listeners. Because artificial intelligence works by using old data sets to predict new outcomes, it can track a user's listening preferences and create a customized playlist based on this data. This can encourage longer listener usage as well as better overall engagement, which could give the Apple Music app the success it's looking for.
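As a rough, hypothetical illustration of how such playlist personalization can work (this is not Apple's or AI Music's actual system, and the play counts and track names below are invented), one simple approach scores the tracks a listener has not played by their similarity to the tracks they already play:

    import numpy as np

    # Toy play-count matrix: rows are listeners, columns are tracks (invented data).
    tracks = ["track_a", "track_b", "track_c", "track_d", "track_e"]
    plays = np.array([
        [12, 0, 3, 0, 7],   # listener 0
        [0, 9, 1, 4, 0],    # listener 1
        [8, 1, 0, 0, 5],    # listener 2
    ])

    def cosine_sim(a, b):
        """Cosine similarity between two play-count vectors."""
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def recommend(listener_idx, top_n=2):
        """Rank unplayed tracks by their similarity to the listener's history."""
        history = plays[listener_idx]
        scores = {}
        for j, track in enumerate(tracks):
            if history[j] > 0:
                continue  # already in the listener's history
            sims = [cosine_sim(plays[:, j], plays[:, k])
                    for k in range(len(tracks)) if history[k] > 0]
            scores[track] = sum(sims) / len(sims)
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend(0))  # suggests the unplayed tracks closest to listener 0's favourites

A production recommender would of course use far richer signals (skips, session context, audio features) and far larger models, but the underlying idea of predicting new outcomes from old listening data is the same.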

Apple was already looking into bolstering its music platform when it previously acquired the music streaming company Primephonic. Now, with AI Music, Apple may be working to use this new technology to boost its current audio products, including Apple Music, the HomePod Mini, and even the Apple Fitness+ app.

Because Apple offers a music and podcasting platform, it is in direct competition with other companies offering similar products, such as Spotify or Pandora. Currently, Spotify has 365 million monthly active users, of which over 50% pay for Spotify Premium. In contrast, Apple has only 98 million subscribers as of 2021. According to one expert, Apple Music seems to have more subscribers in the U.S. while Spotify has more listeners in Europe and South America. As Apple has fewer listeners overall, it may be hoping to leverage this new acquisition, and the power of artificial intelligence, to win the music app arms race. It will be interesting to see how the other companies respond to Apple's new acquisition, or whether AI continues to become a larger part of this industry.

Kenna Castleberry is a staff writer at the Debrief and the Science Communicator at JILA (a partnership between the University of Colorado Boulder and NIST). She focuses on deep tech, the metaverse, and quantum technology. You can find more of her work at her website: https://kennacastleberry.com/


Toronto tech institute tracking long COVID with artificial intelligence, social media – The Globe and Mail

The Vector Institute has teamed up with Telus Corp., Deloitte and Roche Canada to help health care professionals learn more about the symptoms of long COVID. Nathan Denette/The Canadian Press

A Toronto tech institute is using artificial intelligence and social media to track and determine which long-COVID symptoms are most prevalent.

The Vector Institute, an artificial intelligence organization based at the MaRS tech hub in Toronto, has teamed up with telecommunications company Telus Corp., consulting firm Deloitte and diagnostics and pharmaceuticals business Roche Canada to help health care professionals learn more about the symptoms that people with a long-lasting form of COVID experience.

They built an artificial intelligence framework that used machine learning to locate and process 460,000 Twitter posts from people with long COVID, which the Canadian government defines as showing symptoms of COVID-19 for weeks or months after the initial recovery.


The framework parsed through tweets to determine which are first-person accounts about long COVID and then tallied up the symptoms described. It found fatigue, pain, brain fog, anxiety and headaches were the most common symptoms and that many with long COVID experienced several symptoms at once.
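A minimal sketch of that kind of pipeline is shown below; it is an illustration only, with a stand-in first-person filter and a fixed symptom lexicon rather than the Vector Institute's actual models or data:

    from collections import Counter
    import re

    # Stand-in symptom lexicon; the real study would use a far richer vocabulary.
    SYMPTOMS = ["fatigue", "pain", "brain fog", "anxiety", "headache"]

    def is_first_person_account(tweet: str) -> bool:
        """Placeholder for a trained classifier that spots first-person long-COVID accounts."""
        text = tweet.lower()
        return bool(re.search(r"\b(i|my|me)\b", text)) and "long covid" in text

    def tally_symptoms(tweets):
        """Keep first-person accounts and count how often each symptom is mentioned."""
        counts = Counter()
        for tweet in tweets:
            if not is_first_person_account(tweet):
                continue
            text = tweet.lower()
            counts.update(s for s in SYMPTOMS if s in text)
        return counts

    sample = [
        "My long covid fatigue and brain fog are still here after four months",
        "New study on long covid published today",  # not first-person, filtered out
        "I have long covid and the headaches are relentless",
    ]
    print(tally_symptoms(sample))

In the real project, the first-person filter would be a trained machine learning model and the tally would run over hundreds of thousands of posts, but the overall shape (filter, then count) is the same.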

Replicating that research without AI would have required an enormous number of working hours and staff members, who would have had to manually locate hundreds of thousands of social-media posts, sift out those that were not first-person accounts from people with long COVID, and count the symptoms.

"AI is very good at taking large sets of large amounts of data to find patterns," said Cameron Schuler, Vector's chief commercialization officer and vice-president of industry innovation. "It's for stuff that is way too big for any human to actually be able to hold this in their brain."

The framework speeds up the research process around a virus that is quickly evolving and still associated with so many unknowns.

So far, long COVID isn't well understood. There's no uniform way to diagnose it nor a single treatment to ease or cure it. Information is key to giving patients better outcomes and ensuring hospitals aren't overwhelmed in the coming years.

A survey of 1,048 Canadians with long COVID, also known as post-COVID syndrome, conducted in May 2021, found more than 100 symptoms or difficulties with everyday activities.


About 80 per cent of adults surveyed by Viral Neuro Exploration, COVID Long Haulers Support Group Canada and Neurological Health Charities Canada reported one or more symptoms between four and 12 weeks after they were first infected.

Sixty per cent reported one or more symptoms in the long term. The symptoms were so severe that about 10 per cent are unable to return to work in the long term.

Researchers and those behind the technology are hopeful it will quickly contribute to the worlds fight against long COVID, but are already imagining ways they can advance the framework even further or apply it to other situations.

"This is a novel kind of tool," said Dr. Angela Cheung, a senior physician scientist at the University Health Network, who is running two large studies on long COVID.

"I'm not aware of anyone else having done this and so I think it really may be quite useful going forward in health research."

Researchers say preliminary uses of the framework show it can help uncover patterns related to symptom frequencies, co-occurrence and distribution over time.

It could also be applied to other health events such as emerging infections or rare diseases or the effects of booster shots on infection.


This content appears as provided to The Globe by the originating wire service. It has not been edited by Globe staff.


How AI is Helping Minimize Waste in the Clothing Industry – Analytics Insight

Pawan Gupta explains the need to leverage artificial intelligence to reduce wastage

Artificial Intelligence is a phrase that has been around for many years now. It started as fiction in movies and popular literature and steadily became a trendy word to describe intelligent machines. But today it's becoming indispensable, crossing over from fiction to fact, across industries.

The pandemic had a lot to do with this rising popularity and usefulness. At least 40% of active fashion consumers today are already availing themselves of online services even as you read these words, increasing opportunities for deploying AI and deriving benefits. It can safely be said that we are in the midst of the fourth industrial revolution, where brands are gearing up to embrace the benefits of Artificial Intelligence and other new-age technologies.

Artificial Intelligence is used in different ways in the fashion industry. The industry faces several challenges, with sustainability at the top of the list. But considering how widespread and popular fast-fashion trends are, change is a long-term endeavor. However, with the help of Artificial Intelligence, accurate prediction of trends, understanding of customer preferences, managed workflows, and efficient supply chains are now reducing overproduction and minimizing wastage.

Tackling overproduction and overstocking has become a necessary concern among producers globally. The United Nations has estimated that the global fashion industry loses at least $500 billion annually due to a lack of widespread recycling practices. Improper clothing disposal also adds considerably to the loss.

Overproduction and overstocking are common among producers, especially due to a lack of data intelligence tools. The balance of demand and supply is often off because proper assessment of trends and demand is either not done at all or not done accurately and efficiently. To identify a few of the factors that create surplus stocks:

The answer to the above challenges can well be found in the use of Artificial Intelligence. AI tools help in market research and fetch real-time information. The data gathered can be assimilated into a real-time prediction system, in which case inventory management can become both manageable and largely free of waste. The data collected and stored can also be used for future reference.

Customer behaviour prediction is another important contribution of AI to combating waste in the clothing industry. Customer behaviour is always changing, and one can never rely on any one-time data when it comes to this. Hence AI can be integrated into the retailer's planning policy, as the analyzed data can be used to adapt to the latest demand patterns.

Demand patterns can be predicted with the help of algorithms, and prediction can even start from signals on social media platforms. The algorithm gives accurate data about the market and hence can drive retailers to make the correct decisions about how much to produce. This is what we now call smart demand forecasting or smart demand prediction. Customers' purchasing history can also be traced with the help of AI and can help retailers cater exactly to their demands. This may lead to fewer returns and help customers make the right decisions.
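As a hedged illustration of the forecasting idea only (the sales figures are invented, and real systems would draw on far richer signals such as social media trends, seasonality and promotions), the snippet below fits a plain linear trend to a product's past weekly sales and projects the next few weeks:

    import numpy as np

    # Invented weekly unit sales for one garment style.
    weekly_sales = np.array([120, 132, 128, 150, 161, 158, 170, 182])

    def forecast(history, horizon=3):
        """Fit a least-squares linear trend and extrapolate `horizon` weeks ahead."""
        weeks = np.arange(len(history))
        slope, intercept = np.polyfit(weeks, history, deg=1)
        future_weeks = np.arange(len(history), len(history) + horizon)
        return slope * future_weeks + intercept

    # Production planning could cap output at the forecast plus a small safety margin
    # instead of overproducing against a guess.
    print(np.round(forecast(weekly_sales)))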

Artificial Intelligence has become a part of the fashion industry in a way no one had previously predicted. We are denizens of a virtual world now, and AI tools are constantly transforming the way we manufacture and market products. Predictions that robots will be used for cutting and sewing are already in place. With the use of AI and new-age technologies, we can expect a reduction in wastage of about 60 to 70% as processes are automated with the highest possible accuracy.

Pawan Gupta, CEO & Co-founder at Fashinza


Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.


Life and health insurers to use advanced artificial intelligence to reduce benefits fraud – Canada NewsWire

TORONTO, Feb. 14, 2022 /CNW/ - The Canadian Life and Health Insurance Association (CLHIA) is pleased to announce the launch of an industry initiative to pool claims data and use advanced artificial intelligence tools to enhance the detection and investigation of benefits fraud.

Every insurer in Canada has their own internal analytics to detect fraud within their book of business. This new initiative, led by the CLHIA and its technology provider Shift Technology, will deploy advanced AI to analyze industry-wide anonymized claim data. By identifying patterns across millions of records, the program is enhancing the effectiveness of benefits fraud investigations across the industry.
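The CLHIA release does not describe Shift Technology's methods, but one common way to find patterns in pooled, anonymized claim records is unsupervised anomaly detection; the sketch below uses scikit-learn's IsolationForest on invented claim features purely to illustrate the idea of flagging unusual records for human investigators:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Invented anonymized features: claim amount, claims per member per year, distinct providers.
    normal = np.column_stack([
        rng.normal(200, 50, 1000),   # typical claim amounts
        rng.poisson(4, 1000),        # typical claim frequency
        rng.poisson(2, 1000),        # typical provider count
    ])
    suspicious = np.array([[1800, 40, 15], [1500, 35, 12]])  # outlying patterns
    claims = np.vstack([normal, suspicious])

    # Fit an isolation forest and flag the most anomalous records for investigation.
    model = IsolationForest(contamination=0.01, random_state=0).fit(claims)
    flags = model.predict(claims)    # -1 marks anomalies
    print(np.where(flags == -1)[0])  # indices worth a closer look

Flagged records would still go to human investigators; a model like this only prioritizes where to look.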

We expect that the initiative will expand in scope over the coming years to include even more industry data.

"Fraudsters are taking increasingly sophisticated steps to avoid detection," said Stephen Frank, CLHIA's President and CEO. "This technology will give insurers the edge they need to identify patterns and connect the dots across a huge pool of claims data over time, leading to more investigations and prosecutions."

"The capability for individual insurers to identify potential fraud has already proven incredibly beneficial," explained Jeremy Jawish, CEO and co-founder of Shift Technology. "Through the work Shift Technology is doing with the CLHIA, we are expanding that benefit across all member organizations, and providing a valuable fraud fighting solution to the industry at large."

Insurers paid out nearly $27 billion in supplementary health claims in 2020. Employers and insurers lose what is estimated to be millions of dollars each year to fraudulent group health benefits claims. The costs of fraud are felt by insurers, employers and employees and put the sustainability of group benefits plans at risk.

About CLHIA
The CLHIA is a voluntary association whose member companies account for 99 per cent of Canada's life and health insurance business. These insurers provide financial security products including life insurance, annuities (including RRSPs, RRIFs and pensions) and supplementary health insurance to over 29 million Canadians. They hold over $1 trillion in assets in Canada and employ more than 158,000 Canadians. For more information, visit http://www.clhia.ca.

About Shift Technology
Shift Technology delivers the only AI-native decision automation and optimization solutions built specifically for the global insurance industry. Addressing several critical processes across the insurance policy lifecycle, the Shift Insurance Suite helps insurers achieve faster, more accurate claims and policy resolutions. Shift has analyzed billions of insurance transactions to date and was presented Frost & Sullivan's 2020 Global Claims Solutions for Insurance Market Leadership Award. For more information, visit http://www.shift-technology.com.

SOURCE Canadian Life and Health Insurance Association Inc.

For further information: Kevin Dorse, Assistant Vice President, Strategic Communications and Public Affairs, CLHIA, (613) 691-6001, [emailprotected]; Rob Morton, Corporate Communications, Shift Technology, 617-416-9216, [emailprotected]


Artificial intelligence and big data can help preserve wildlife – Innovation Origins

A team of experts in artificial intelligence and animal ecology have put forth a new, cross-disciplinary approach intended to enhance research on wildlife species and make more effective use of the vast amounts of data now being collected thanks to new technology, as announced in a press release by École Polytechnique Fédérale de Lausanne (EPFL), a Swiss technology institute, which contributed to the study. The results were published in Nature Communications.

The field of animal ecology has entered the era of big data and the Internet of Things. Unprecedented amounts of data are now being collected on wildlife populations, thanks to sophisticated technology such as satellites, drones and terrestrial devices like automatic cameras and sensors placed on animals or in their surroundings. These data have become so easy to acquire and share that they have shortened distances and time requirements for researchers while minimizing the disrupting presence of humans in natural habitats. Today, a variety of AI programs are available to analyze large datasets, but they're often general in nature and ill-suited to observing the exact behavior and appearance of wild animals. A team of scientists from EPFL and other universities has outlined a pioneering approach to resolve that problem and develop more accurate models by combining advances in computer vision with the expertise of ecologists. Their findings open up new perspectives on the use of AI to help preserve wildlife species.

Wildlife research has gone from local to global. Modern technology now offers revolutionary new ways to produce more accurate estimates of wildlife populations, better understand animal behavior, combat poaching and halt the decline in biodiversity. Ecologists can use AI, and more specifically computer vision, to extract key features from images, videos and other visual forms of data in order to quickly classify wildlife species, count individual animals, and glean certain information, using large datasets. The generic programs currently used to process such data often work like black boxes and don't leverage the full scope of existing knowledge about the animal kingdom. What's more, they're hard to customize, sometimes suffer from poor quality control, and are potentially subject to ethical issues related to the use of sensitive data. They also contain several biases, especially regional ones; for example, if all the data used to train a given program were collected in Europe, the program might not be suitable for other world regions.

"We wanted to get more researchers interested in this topic and pool their efforts so as to move forward in this emerging field. AI can serve as a key catalyst in wildlife research and environmental protection more broadly," says Prof. Devis Tuia, the head of EPFL's Environmental Computational Science and Earth Observation Laboratory and the study's lead author. "If computer scientists want to reduce the margin of error of an AI program that's been trained to recognize a given species, for example, they need to be able to draw on the knowledge of animal ecologists. These experts can specify which characteristics should be factored into the program, such as whether a species can survive at a given latitude, whether it's crucial for the survival of another species (such as through a predator-prey relationship) or whether the species' physiology changes over its lifetime." "We used this approach to improve a bear-recognition program a few years ago," says Prof. Mackenzie Mathis, a neuroscientist at EPFL and co-author of the study. "A researcher studying bear DNA had installed automatic cameras in bear habitats in order to recognize individual animals. But bears shed half of their body fat when they hibernate, meaning the generic programs she used were no longer able to recognize the bears once the season changed. We therefore added criteria to the program that can not only look at whether an animal has a given characteristic but also be tweaked manually to allow for possible deviations."
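The researchers' code is not published in the article, but the idea of layering ecological knowledge on top of a generic vision model can be sketched as a simple post-processing step: re-weight or veto a classifier's raw predictions using facts supplied by ecologists, such as plausible latitude ranges or seasonal appearance changes. Everything below (species names, rules and scores) is invented to illustrate that pattern:

    # Ecologist-supplied constraints (illustrative values, not real data).
    SPECIES_RULES = {
        "brown_bear": {"min_lat": 35, "max_lat": 70, "hibernation_months": {12, 1, 2, 3}},
        "seal":       {"min_lat": -80, "max_lat": 85, "hibernation_months": set()},
    }

    def adjust_predictions(raw_scores, latitude, month):
        """Down-weight species that are implausible for the capture location or season."""
        adjusted = {}
        for species, score in raw_scores.items():
            rules = SPECIES_RULES.get(species, {})
            if not (rules.get("min_lat", -90) <= latitude <= rules.get("max_lat", 90)):
                score *= 0.05  # effectively veto out-of-range species
            if month in rules.get("hibernation_months", set()):
                score *= 0.5   # leaner, different-looking animals: be more cautious
            adjusted[species] = score
        return max(adjusted, key=adjusted.get), adjusted

    # raw_scores would come from a vision model; here they are made up.
    print(adjust_predictions({"brown_bear": 0.55, "seal": 0.45}, latitude=62, month=1))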

The idea of forging stronger ties between computer vision and ecology came up as Tuia, Mathis and others discussed their research challenges at various conferences over the past two years. They saw that such collaboration could be extremely useful in preventing certain wildlife species from going extinct. A handful of initiatives have already been rolled out in this direction; some of them are listed in the Nature Communications article. For instance, Tuia and his team at EPFL have developed a program that can recognize animal species based on drone images. It was tested recently on a seal population. Meanwhile, Mathis and her colleagues have unveiled an open-source software package called DeepLabCut that allows scientists to estimate and track animal poses with remarkable accuracy. It's already been downloaded 300,000 times. DeepLabCut was designed for lab animals but can be used for other species as well. Researchers at other universities have developed programs too, but it's hard for them to share their discoveries since no real community has yet been formed in this area. Other scientists often don't know these programs exist or which one would be best for their specific research.

That said, initial steps towards such a community have been taken through various online forums. The Nature Communications article aims for a broader audience, however, consisting of researchers from around the world. "A community is steadily taking shape," says Tuia. "So far we've used word of mouth to build up an initial network. We first started two years ago with the people who are now the article's other lead authors: Benjamin Kellenberger, also at EPFL; Sara Beery at Caltech in the US; and Blair Costelloe at the Max Planck Institute in Germany."



Artificial intelligence to be used for inspecting bridges – Innovation Origins

SwissInspect, a start-up from the Swiss technology institute École Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, Switzerland, has developed a novel bridge-inspection system that combines structural engineering with drone technology, artificial intelligence and computer vision. SwissInspect is the result of research at the Earthquake Engineering and Structural Dynamics Laboratory (EESD) in collaboration with the Swiss Data Science Center (SDSC) on the image-based inspection and monitoring of structural elements. The company plans to test its system on around 50 bridges in Switzerland, according to a press release.

Switzerland's bridges are currently inspected every two to five years using conventional visual inspection. But SwissInspect hopes to change all that with its new technology, which provides more objective evaluations and could be applied to other types of structures like tunnels, dams and buildings. The company's approach combines structural engineering, computer vision, and artificial intelligence to make infrastructure inspections safer, more objective, and efficient.

The startup has recently won a CHF 300,000 InnoSuisse grant to inspect around 50 bridges across Switzerland over a period of 18 months. SwissInspect has also just won the Venture Kick grant of CHF 10,000, which will help the startup develop its business. This project involves the Earthquake Engineering and Structural Dynamics Laboratory (EESD) and the Geodetic Engineering Laboratory (TOPO), both at EPFL's School of Architecture, Civil and Environmental Engineering (ENAC), providing outstanding expertise on image-based surveys using drones (UAVs).

"Our goal at SwissInspect is to give engineers and infrastructure owners a system they can use to plan maintenance and repair work more efficiently," says Amir Rezaie, who holds a PhD in civil engineering and is the CEO of SwissInspect. "We do not want to be a data collector or a 3D visualization platform, we transform raw data into actionable information." From images, they detect various types of damage, including cracking, spalling, efflorescence, rust, etc. They also provide a physics-based classification of damage, which is crucial information to evaluate the structural health of a bridge.
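SwissInspect has not published its models, so the sketch below only shows the general shape of an image-based damage classifier, using a deliberately tiny convolutional network in PyTorch; the class names, architecture and the random input are placeholders, not the company's system:

    import torch
    from torch import nn

    DAMAGE_CLASSES = ["crack", "spalling", "efflorescence", "rust", "no_damage"]  # placeholder labels

    # A tiny CNN; a production system would start from a pretrained backbone and real survey data.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, len(DAMAGE_CLASSES)),
    )

    # One fake drone-image patch (batch of 1, RGB, 224x224) stands in for real imagery.
    patch = torch.rand(1, 3, 224, 224)
    with torch.no_grad():
        probs = model(patch).softmax(dim=1)
    print(dict(zip(DAMAGE_CLASSES, probs.squeeze().tolist())))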

"When it comes to inspection, traceability is critical, which we seek to provide by creating the digital twin of bridges," says Amir Rezaie. He also points out that in the future, other sources of information could be added to the digital twins, such as data from sensors installed directly on a bridge. Another advantage of SwissInspect's system relative to visual methods is that the technology allows inspections to be carried out more frequently. That's especially important in light of climate change, as infrastructure will be increasingly exposed to alternating periods of flooding and drought as well as higher relative humidity levels that could hasten the degradation of materials.



Inside the EU’s rocky path to regulate artificial intelligence – International Association of Privacy Professionals

In April last year, the European Commission published its ambitious proposal to regulate Artificial Intelligence. The regulation was meant to be the first of its kind, but the progress has been slow so far due to the file's technical, political and juridical complexity.

Meanwhile, the EU lost its first-mover advantage as other jurisdictions like China and Brazil have managed to pass their legislation first. As the proposal is entering a crucial year, it is high time to take stock of the state of play, the ongoing policy discussions, notably around data, and potential implications for businesses.

For the European Parliament, delays have been mainly due to more than six months of political disputes between lawmakers over who was to take the lead in the file. The result was a co-lead between the centrists and the center-left, sidelining the conservative European People's Party.

Members of European Parliament are now trying to make up for lost time. The first draft of the report is planned for April, with discussions on amendments throughout the summer. The intention is to reach a compromise by September and hold the final vote in November.

The timeline seems particularly ambitious since co-leads involve double the number of people, inevitably slowing down the process. The question will be to what extent the co-rapporteurs will remain aligned on the critical political issues as the center-right will try to lure the liberals into more business-friendly rules.

Meanwhile, the EU Council has made some progress on the file, though progress has been limited by its highly technical nature. It is telling that even national governments, which have significantly more resources than MEPs, struggle to understand the new rules' full implications.

Slovenia, which led the diplomatic talks for the second half of 2021, aimed to develop a compromise for 15 articles, but only covered the first seven. With the beginning of the French presidency in January, the file is expected to move faster as Paris aims to provide a full compromise by April.

As the policy discussions made some progress in the EU Council, several sticking points emerged. The very definition of AI systems is problematic, as European governments distinguish them from traditional software programs or statistical methods.

The diplomats also added a new category for "general purpose" AI, such as synthetic data packages or language models. However, there is still no clear understanding of whether the responsibility should be attributed upstream, to the producer, or downstream, to the provider.

The use of real-time biometric recognition systems has primarily monopolized the public debate, as the commission's proposal falls short of a total ban by allowing some crucial exceptions, notably terrorist attacks and kidnapping. In October, lawmakers adopted a resolution pushing for a complete ban, echoing the argument made by civil society that these exceptions provide a dangerous slippery slope.

By contrast, facial recognition technologies are increasingly common in Europe. A majority of member states wants to keep or even expand the exceptions to border control, with Germany so far relatively isolated in calling for a total ban.

"The European Commission did propose a set of criteria for updating the list of high-risk applications. However, it did not provide a justification for the existing list, which might mean that any update might be extremely difficult to justify," Lilian Edwards, a professor at Newcastle University, said.

Put differently, since the reasoning behind the lists of prohibited or high-risk AI uses is largely value-based, they are likely to remain heatedly debated points throughout the whole legislative process.

For instance, the Future of Life Institute has been arguing for a broader definition of manipulation, which might profoundly impact the advertising sector and the way online platforms currently operate.

A dividing line that is likely to emerge systematically in the debate is the tension between the innovation needs of the industry, as some member states already stressed, and ensuring consumer protection in the broadest sense, including the use of personal data.

This underlying tension is best illustrated in the ongoing discussions on the report of the parliamentary committee on Artificial Intelligence in a Digital Age, which are progressing in parallel to the AI Act.

In his initial draft, conservative MEP Axel Voss attacked the General Data Protection Regulation, presenting AI as part of a technological race where Europe risks becoming China's "economic colony" if it did not relax its privacy rules.

The report faced backlash from left-to-center policymakers, who saw it as an attempt to water down the EU's hard-fought data protection law. For progressive MEPs, data-hungry algorithms fed with vast amounts of personal data might not be desirable, and they draw a parallel with their activism in trying to curb personalized advertising.

"Which algorithms do we train with vast amounts of personal data? Likely those that automatically classify, profile or identify people based on their personal details often with huge consequences and risks of discrimination or even manipulation. Do we really want to be using those, let alone 'leading' their development?" MEP Kim van Sparrentak said.

However, the need to find a balance with data protection has also been underlined by Bojana Bellamy, president of the Centre for Information Policy Leadership, who notes how some fundamental principles of the GDPR would be in contradiction with the AI regulation.

In particular, a core principle of the GDPR is data minimization, namely that only the personal data strictly needed for completing a specific task is processed and should not be retained for longer than necessary. Conversely, the more AI-powered tools receive data, the more robust and accurate they become, leading (at least in theory) to a fairer and non-biased outcome.

For Bojana, this tension is due to the lack of a holistic strategy in the EU's hectic digital agenda; she argues that policymakers should follow a more result-oriented approach to what they are trying to achieve. These contradicting notions might fall on industry practitioners, who may be asked to square a fair and unbiased system with minimizing the amount of personal data collected.

The draft AI law includes a series of obligations for system providers, namely the organizations that make the AI applications available on the market or put them into service. These obligations will need to be operationalized: for instance, what it means to have a "fair" system, how far "transparency" should go, and how "robustness" is defined.

In other words, providers will have to put a system in place to manage risks and ensure compliance with support from their suppliers. For instance, a supplier of training data would need to detail how the data was selected and obtained, how it was categorized and the methodology used to ensure representativeness.
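What such supplier documentation might look like in machine-readable form is sketched below; the fields and values are illustrative assumptions, not taken from the AI Act's text or from any published standard:

    from dataclasses import dataclass, asdict, field
    import json

    @dataclass
    class TrainingDataRecord:
        """Illustrative provenance record a training-data supplier could pass downstream."""
        dataset_name: str
        collection_method: str          # how the data was selected and obtained
        labeling_methodology: str       # how it was categorized
        representativeness_notes: str   # steps taken to cover the target population
        personal_data: bool             # relevant to GDPR obligations
        source_jurisdictions: list = field(default_factory=list)

    record = TrainingDataRecord(
        dataset_name="retail-product-images-v3",
        collection_method="Licensed catalogue feeds; no scraping of third-party sites",
        labeling_methodology="Two independent annotators per image, adjudicated disagreements",
        representativeness_notes="Stratified by product category and region",
        personal_data=False,
        source_jurisdictions=["EU", "UK"],
    )
    print(json.dumps(asdict(record), indent=2))  # shareable along the value chain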

In this regard, the AI Act explicitly refers to harmonized standards that industry practitioners must develop to exchange information to make the process cost-efficient. For example, the Global Digital Foundation, a digital policy network, is already working on an industry coalition to create a relevant framework and toolset to share information consistently across the value chain.

In this context, European businesses fear that if the EU's privacy rules are not effectively incorporated in the international standards, they could be put at a competitive disadvantage. The European Tech Alliance, a coalition of EU-born heavyweights such as Spotify and Zalando, voiced concerns that the initial proposal did not include an assessment of training datasets collected in third countries that might use data collected via practices at odds with the GDPR.

Adopting industry standards creates a presumption of conformity, minimizing the risk and cost of compliance. These incentives are so strong that harmonized standards tend to become universally adopted by industry practitioners, as the cost of departing from them becomes prohibitive. Academics have defined standardization as the "real rulemaking" of the AI regulation.

"The regulatory approach of the AI Act, i.e. standards compliance, is not a guarantee of low barriers for the SMEs. On the contrary, standards compliance is often perceived by SMEs as a costly exercise due to expensive conformity assessment that needs to be carried out by third parties," Sebastiano Toffaletti, secretary-general of the European DIGITAL SME Alliance, said.

By contrast, European businesses that are not strictly "digital" but that could embed AI-powered tools into their daily operations see the AI Act as a way to bring legal clarity and ensure consumer trust.

"The key question is to understand how can we build a sense of trust as a business and how can we translate it to our customers," Nozha Boujemaa, global vice president for digital ethics and responsible AI at IKEA, said.

Photo by Michael Dziedzic on Unsplash


Protect the Value of Trade Secrets Specific to AI/ML Platforms – The National Law Review

Thursday, February 10, 2022

We previously discussed which portions of an artificial intelligence/machine-learning (AI/ML) platform can be patented. Under what circumstances, however, is it best to keep at least a portion of the platform a trade secret? And what are some best practices for protecting trade secrets? In this post, we explore important considerations and essential business practices to keep in mind when working to protect the value of trade secrets specific to AI/ML platforms, as well as the pros and cons of trade secret versus patent protection.

What qualifies as a trade secret can be extraordinarily broad, depending on the relevant jurisdiction, as, generally speaking, a trade secret is information that is kept confidential and derives value from being kept confidential. This can potentially include anything from customer lists to algorithms. In order to remain a trade secret, however, the owner of the information must follow specific business practices to ensure the information remains secret. If businesses do not follow the prescribed practices, then the ability to protect the trade secret is waived and its associated value is irretrievably lost. The business practices required are not onerous or complex, and we will discuss these below, but many businesses are unaware of what is required for their specific type of IP and only discover their error when attempting to monetize their inventions or sell their business. To avoid this devastating outcome, we work to arm our clients with the requisite practices and procedures tailored to their specific inventions and relevant markets.

In the context of AI/ML platforms, trade secrets can include the structure of the AI/ML model, formulas used in the model, proprietary training data, a particular method of using the AI/ML model, any output calculated by the AI/ML model that is subsequently converted into an end product for a customer, and similar aspects of the platform. There are myriad ways in which the value of the trade secret may be compromised.

For example, if an AI/ML model is sold as a platform and the platform provides the raw output of the model and a set of training data to the customer, then the raw output and the set of training data would no longer qualify for trade secret protection. Businesses can easily avoid this pitfall by having legally binding agreements in place between the parties to protect the confidentiality and ownership interests involved. Another area in which we frequently see companies waive trade secret protection is where the confidential information can be independently discovered (such as through reverse-engineering a product). Again, there are practices that businesses can follow to avoid waiving trade secret protection due to reverse-engineering. Owners, therefore, must also be careful in ensuring that the information they seek to protect cannot be discovered through use or examination of the product itself and, where that cannot be avoided, ensure that such access is governed by agreements that prohibit such activities, thereby maintaining the right to assert trade secret misappropriation and recover the value of the invention.

To determine if an invention may be protected as a trade secret, courts will typically examine whether the business has followed best practices or reasonable efforts for the type of IP and relevant industries. See, e.g., Intertek Testing Services, N.A., Inc. v. Frank Pennisi et al., 443 F. Supp. 3d 303, 323 n.19 (E.D.N.Y. Mar. 9, 2020). What constitutes best practices for a particular type of IP can vary greatly. For example, a court may examine whether those trade secrets were adequately protected. The court may also look to whether the owner created adequate data policies to prevent employees from mishandling trade secrets. See Yellowfin Yachts, Inc. v. Barker Boatworks, LLC, 898 F.3d 1279 (11th Cir. Aug. 7, 2018) (where the court held that requiring password protection to access trade secrets was insufficient without adequate measures to protect information stored on employee devices). If the court decides that the business has not employed best practices, the owner can lose trade secret protection entirely.

Most often, a failure to ensure all parties who may be exposed to trade secrets are bound by a legally-sufficient confidentiality or non-disclosure agreement forces the owner to forfeit their right to trade secret protection for that exposed information. Owners should have experienced legal counsel draft these agreements to ensure that the agreements are sufficient to protect the trade secret and withstand judicial scrutiny; many plaintiffs have learned the hard way that improperly-drafted agreements can affect the trade secret protection afforded to their inventions. See, e.g., BladeRoom Group Ltd. v. Emerson Electric Co., 11 F.4th 1010, 1021 (9th Cir. Aug. 30, 2021) (holding that NDAs with expiration dates also created expiration dates for trade secret protection); Foster Cable Servs., Inc. v. Deville, 368 F. Supp. 3d 1265 (W.D. Ark. 2019) (holding that an overbroad confidentiality agreement was unenforceable); Temurian v. Piccolo, No. 18-cv-62737, 2019 WL 1763022 (S.D. Fla. Apr. 22, 2019) (holding that efforts to protect data through password protection and other means were negated by not requiring employees to sign a confidentiality agreement).

There are many precautions owners can take to protect their trade secrets, which we discuss below:

Confidentiality and Non-Disclosure Agreements: One of the most common methods of protecting trade secrets is to execute robust confidentiality agreements and non-disclosure agreements with everyone who may be exposed to trade secrets, to ensure they have a legal obligation to keep those secrets confidential. Experienced legal counsel who can ensure the agreements are enforceable and fully protect the owner and their trade secrets is essential, as there are significant pitfalls in these types of agreements and many jurisdictions have contradicting requirements.

Marketing and Product Development: The AI/ML platform itself should also be constructed and marketed in such a way as to prevent customers from easily discovering the trade secrets, whether through viewing marketing materials, through ordinary use of the platform, or through reverse-engineering of the platform. For example, if an AI/ML platform uses a neural network to classify medical images, and the number of layers used and the weights used by the neural network to calculate output are commercially valuable, the owner should be careful to exclude any details about the layers of the AI/ML model in marketing materials. Further, the owner may want to consider developing the platform in such a way that the neural network is housed internally (protected by various security measures) and therefore not directly accessible by a customer seeking to reverse-engineer the product. A minimal sketch of this server-side approach appears after this list.

Employee Training: Additionally, owners should also ensure that employees or contractors who may be exposed to trade secrets are trained in how to handle those trade secrets, including how to securely work on or discuss trade secrets, how to handle data on their personal devices (or whether trade secret information may be used on personal devices at all), and other such policies.

Data Security: Owners should implement security precautions (including limiting who can access trade secrets, requiring passwords and other security procedures to access trade secrets, restricting where data can be downloaded and stored, implementing mechanisms to protect against hacking attempts, and similar precautions) to reduce the risk of unintended disclosure of trade secrets. Legal counsel can help assess existing measures to determine whether they are sufficient to protect confidential information under various trade secret laws.
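One way to make the product-development precaution above concrete is to keep the model and its weights on servers the owner controls and expose only the finished answer over an API, so customers never handle the network itself. The sketch below uses Flask and a dummy stand-in model purely to illustrate that boundary:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    def run_model(payload: dict) -> str:
        """Stand-in for the proprietary model; weights and architecture never leave the server."""
        return "category_a" if payload.get("value", 0) > 0.5 else "category_b"

    @app.route("/classify", methods=["POST"])
    def classify():
        # Only the end product is returned; raw scores, layers and weights stay internal.
        result = run_model(request.get_json(force=True))
        return jsonify({"label": result})

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=8080)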

Trade secret protection and patent protection are obtained and maintained in different ways. There are many reasons why trade secret protection may be preferable to patent protection for various aspects of an AI/ML platform, or vice-versa. Below we discuss some criteria to consider before deciding how to protect one's platform.

Protection Eligibility: As noted in our previous blog post, patent protection may be sought for many components of an AI/ML platform. There are, however, some aspects of an AI/ML platform that may not be patent-eligible. For example, while the architecture of a ML model may be patentable, specific mathematical components of the model, such as the weight values, mathematical formulas used to calculate weight values in an AI/ML algorithm, or curated training data, may not, on their own, be eligible for patent protection. If the novelty of a particular AI/ML platform is not in how an AI/ML model is structured or utilized, but rather in non-patentable features of the model, trade secret protection can be used to protect this information.

Cost: There are filing fees, prosecution costs, issue fees, and maintenance fees required to obtain and keep patent protection on AI/ML models. Even for an entity that qualifies as a micro-entity under the USPTO's fee schedule, the lifetime cost of a patent could be several thousand dollars in fees, and several thousand dollars in attorneys' fees to draft and prosecute the patent. Conversely, the costs of trade secret protection are the costs to implement any of the above methods of keeping critical portions of the AI/ML model secret from others. In many instances, it may be less expensive to rely on trade secret protection than it may be to obtain patent protection.

Development Timeline: AI/ML models, or software that implements them, may undergo several iterations through the course of developing a product. As it may be difficult to determine which, if any, iterations are worth long-term protection until development is complete, it may be ideal to protect each iteration until the value of each has been determined. However, obtaining patent protection on each iteration may, in some circumstances, be infeasible. For example, once a patent application has been filed, the specification and drawings cannot be amended to cover new, unanticipated iterations of the AI/ML model; a new application that includes the new material would need to be filed, incurring further costs. Additionally, not all iterations will necessarily include changes that can be patented, or it may be unknown until after development how valuable a particular modification is to technology in the industry, making it difficult to obtain patent protection for all iterations of a model or software using the model. In these circumstances, it may be best to use a blend of trade secret and patent protection. For example, iterations of a model or software can be protected via trade secret; the final product, and any critical iterations in between, can subsequently be protected by one or more patents. This allows for a platform to be protected without added costs per iteration, and regardless of the nature of the changes made in each iteration.

Duration of Protection: Patent owners can enjoy protection of their claimed invention for approximately twenty years from the date of filing a patent application. Trade secret protection, on the other hand, lasts as long as an entity keeps the protected features a secret from others. For many entities, the twenty-year lifetime of a patent is sufficient to protect an AI/ML platform, especially if the patent owner anticipates substantially modifying the platform (e.g., to adapt to future needs or technological advances) by the end of the patent term. To the extent any components of the AI/ML platform are unlikely to change within twenty years (for example, if methods used to curate training data are unlikely to change even with future technological advances), it may be more prudent to protect these features as trade secrets.

Risk of Reverse-Engineering: As noted above, trade secrets do not protect inventions that competitors have been able to discover by reverse-engineering an AI/ML product. While an entity may be able to prevent reverse-engineering of some aspects of the invention through agreements between parties with permission to access the AI/ML product or through creative packaging of the product, there are some aspects of the invention (such as the training data that needs to be provided to the platform, end product of the platform, and other features) that may need to remain transparent to a customer, depending on the intended use of the platform. Such features, when patent-eligible, may benefit more from patent protection than from trade secret protection, as a patent will protect the claimed invention even if the invention can be reverse-engineered.

Exclusivity: A patent gives the patent owners the exclusive right to practice or sell their claimed inventions, in exchange for disclosing how their inventions operate. Trade secrets provide no such benefit; to the extent competitors are able to independently construct an AI/ML platform, they are allowed to do so even if an entity has already sold a similar platform protected by trade secret. Thus, to the extent an exclusive right to the AI/ML model or platform is necessary for the commercial viability of the platform or its use, patent protection may be more desirable than trade secret protection.

Trade secret law allows broad protection of information that can be kept secret from others, provided certain criteria are met to ensure the information is adequately protected from disclosure to others. Many aspects of an AI/ML platform can be protected under either trade secret law or patent law, and many aspects of an AI/ML platform may only be protected under trade secret law. It is therefore vital to consider trade secret protection alongside patent protection, to ensure that each component of the platform is being efficiently and effectively protected.

1994-2022 Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. All Rights Reserved. National Law Review, Volume XII, Number 41


The Role Of Artificial Intelligence in Network Evolution – Analytics Insight

Artificial Intelligence in network evolution makes things much better than they were in the past.

Internet connectivity grew at around 2% a year between 2015 and 2019, but over the last two years it has grown by 8%, a drastic increase. Changes in professional and personal demands over the last two years have led to a transition in users' expectations. From work-from-anywhere to e-healthcare and online education, the shift of everything from offline to online has driven this growth in connectivity. Adding to the list, gaming and entertainment have scaled up users' expectations many times over. Customer experience has taken centre stage for all communication service providers. To meet these expectations, modern networks are becoming more complex. Experience disruption has replaced service disruption today.

The new experience paradigm is expected to bring about various changes. Measuring experience and troubleshooting these networks with end-to-end insights will be key. Machine reasoning and machine learning are going to play a vital role in this network evolution. Networks are going to get smarter and adapt to the needs of consumers.

Artificial Intelligence is going to play a key role in the following areas:

Awareness: Measurement & prediction of experience

Reasoning: Root cause analysis in networks

Interactive: Natural language interaction

Mature: Intelligence that would evolve over time and correct decisions

Autonomous: Self-adjustment to the needs of consumers

This is the new ARIMA of networks.

Awareness: Powered by artificial intelligence, networks would be completely aware of the type and nature of connected devices and their current bandwidth requirements. By understanding trends, Wi-Fi networks at home as well as in offices should be able to measure and personalize the experience of each user that comes on board. For certain IoT devices latency could be critical; for other devices, bandwidth. AI will help networks become completely aware of these demands. As home and office networks always have many devices working in tandem, it is important that AI optimizes the network to obtain a collectively optimal, smooth user experience.

Reasoning: Network engineers always equip themselves with a lot of network monitoring tools that help them stay confident and in control of their networks. Conventionally, when we face issues with the network, the process of complaining and getting the issue resolved takes a lot of time. Artificial intelligence-powered networks would greatly reduce the burden of that entire complaint process. The problem could be escalated and the necessary troubleshooting done in a few clicks. The problem will be recorded as soon as the user faces any issue with the network, and diagnosis will take place automatically without any human intervention. The root cause will automatically be identified and the resolution will be accelerated.

Interactive: Natural language engines have the power to bring about a great evolution. NLP gives networks a voice to interact with humans like never before. We have seen products like Alexa bridging the communication gap between IoT devices and humans using a voice interface. Network admins and home users can interact with the network in a similar way. Networks would be able to understand human intentions and adapt accordingly.

Mature AI allows the transfer of the intelligence possessed by the Network Experts to Routers, Switches, and other elementals which are part of the network. Working in tandem with each other and with customer Experience as feedback systems in place, Machine Learning models engage in continuous learning and constantly optimize to maximize the experience.

Autonomous: Integrating artificial intelligence into networks enables a switch from traditional reactive methods to proactive ones. Automating the process of finding a problem, diagnosing it, and prescribing a solution reduces the need for human intervention. With this proactive approach, we can expect maximum network uptime as solutions are identified and accelerated. This will eventually help the IT department focus on its core objectives.
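As a loose illustration of that proactive approach (not any vendor's actual product; the latency readings and thresholds are invented), the snippet below watches a stream of latency measurements and flags degradation with a rolling z-score before a user would normally get around to complaining:

    from collections import deque
    import statistics

    class LatencyWatch:
        """Flag latency samples that deviate sharply from the recent baseline."""
        def __init__(self, window=30, threshold=3.0):
            self.samples = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, latency_ms: float) -> bool:
            alert = False
            if len(self.samples) >= 10:
                mean = statistics.fmean(self.samples)
                stdev = statistics.pstdev(self.samples) or 1e-9
                alert = (latency_ms - mean) / stdev > self.threshold
            self.samples.append(latency_ms)
            return alert  # True means: open a ticket and run diagnostics automatically

    watch = LatencyWatch()
    readings = [20, 22, 19, 21, 20, 23, 21, 20, 22, 21, 20, 95]  # sudden spike at the end
    print([watch.observe(r) for r in readings])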

Technology evolves over time; it makes things much better than they were in the past. With these new changes and enhancements, new vulnerabilities arise as well. Nothing is more frustrating than not being able to connect to the network, or getting slow internet despite having connectivity. Beyond this, the safety of the information we upload can be at risk. A sudden loss of network in the middle of an urgent task makes a user feel as if the world has come to a halt. AI has left no industry untouched, and the network industry is no exception.

Following are a few transformations that have begun with artificial intelligence getting into networks.

Since the pandemic, we have seen a great shift across all the major industries to the digital space. This shift has increased the importance of the availability of a superior and consistent network across the globe.

Technology is evolving at an extreme pace, and more network transformations will arise in the future. Networks will also start evolving, just like humans, at a pace that we cannot imagine, as computing power grows rapidly. New technologies coming into play can potentially make networks and the devices associated with them mimic human intelligence and reasoning.

Pramod Gummaraj, CEO, Aprecomm

