
Adversarial attacks in machine learning: What they are and how to stop them – VentureBeat


Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common reason for such an attack is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it's training, or introducing maliciously designed data to deceive an already trained model.

As the U.S. National Security Commission on Artificial Intelligence's 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.

The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It's a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.

Attacks against AI models are often categorized along three primary axes (influence on the classifier, the security violation, and their specificity) and can be further subcategorized as white box or black box. In white box attacks, the attacker has access to the model's parameters, while in black box attacks, the attacker has no access to these parameters.

An attack can influence the classifier (i.e., the model) by disrupting the model as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. As for specificity, a targeted attack attempts to allow a particular intrusion or disruption, while an indiscriminate one aims to create general mayhem.

Evasion attacks are the most prevalent type of attack, where data are modified to evade detection or to be classified as legitimate. Evasion doesn't involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. An example of evasion is image-based spam, in which spam content is embedded within an attached image to evade analysis by anti-spam models. Another example is spoofing attacks against AI-powered biometric verification systems.
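To make the idea concrete, here is a minimal, self-contained sketch of a gradient-sign (FGSM-style) evasion against a toy linear spam filter. The weights, feature values, and perturbation budget are invented for illustration; they are not taken from any system described in this article.

```python
import math

# Toy linear "spam classifier": a score above 0 means spam.
# Weights and features are illustrative only (e.g. counts of suspicious tokens).
weights = [1.2, -0.4, 0.8]

def score(x):
    return sum(w * xi for w, xi in zip(weights, x))

x = [2.0, 1.0, 1.5]        # feature vector of a spam email
assert score(x) > 0        # correctly classified as spam

# Evasion: nudge each feature against the sign of the gradient.
# For a linear model, the gradient of the score w.r.t. x is just `weights`.
eps = 1.5
x_adv = [xi - eps * math.copysign(1.0, w) for xi, w in zip(x, weights)]

print(score(x))       # positive: flagged as spam
print(score(x_adv))   # pushed negative: evades the filter
```

The same sign-of-the-gradient trick, applied to pixel values instead of token counts, is what produces the imperceptibly perturbed images described above.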

Poisoning, another attack type, is adversarial contamination of data. Machine learning systems are often retrained using data collected while they're in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that's falsely labeled as harmless when it's actually malicious. For example, large language models like OpenAI's GPT-3 can reveal sensitive, private information when fed certain words and phrases, research has shown.

Meanwhile, model stealing, also called model extraction, involves an adversary probing a black box machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.

Plenty of examples of adversarial attacks have been documented to date. One showed it's possible to 3D-print a toy turtle with a texture that causes Google's object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called adversarial patterns on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.

In a paper published in April, researchers from Google and the University of California at Berkeley demonstrated that even the best forensic classifiers, AI systems trained to distinguish between real and synthetic content, are susceptible to adversarial attacks. It's a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.

One of the most infamous recent examples is Microsoft's Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft's intention was that Tay would engage in casual and playful conversation, internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay's tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.

As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in the amount of research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org, while in 2020, around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.

With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly harden algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.

One way to test machine learning models for robustness is with what's called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it'll enable researchers to understand the effects of various data set configurations on the generated trojaned models and help to comprehensively test new trojan detection methods to harden models.

The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes (Advbox, Counterfit, the Adversarial Robustness Toolbox, and Robustness Gym, respectively) for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook's PyTorch and Caffe2, Google's TensorFlow, and Baidu's PaddlePaddle. And MIT's Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.

More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts to detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems.

The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and the MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers won't always have the upper hand and that biological intelligence still has a lot of untapped potential.


Here’s a great toolkit for Artificial Intelligence (AI) governance within your organisation – Lexology

As the deployment of artificial intelligence (AI) technology continues to grow, regulators around the globe continue to grapple with how best to encourage the responsible development and adoption of this technology. Many governments and regulatory bodies have released high-level principles on AI ethics and governance, which, while earnest, leave you asking: where do I start?

However, the UK's Information Commissioner's Office (ICO) has recently released a toolkit which takes a more practical, "how to do it" approach. It's still in draft form and the ICO is seeking views to help shape and improve it. The toolkit builds upon the ICO's existing guidance on AI: the Guidance on AI and Data Protection and the guidance on Explaining Decisions Made With AI (co-written with The Alan Turing Institute).

The toolkit is focused on helping risk practitioners assess their AI systems against UK data protection law requirements, rather than AI ethics as a whole (although aspects such as discrimination, transparency, security, and accuracy are included). It is intended to help developers (and deployers) think about the risks of non-compliance with data protection law and offer practical support to organisations auditing the compliance of their use of AI. While the toolkit is EU-centric, it's still a good guide for Australian organisations grappling with how to embed AI in their businesses.

AI Toolkit: how AI impacts privacy and other considerations

Finally, a toolkit worth its name

The toolkit is constructed as a spreadsheet-based self-assessment tool which walks you through how AI impacts privacy and other considerations, helps you assess the risk in your business, and suggests some strategies. For example:

The toolkit covers 13 key areas including governance issues, contractual and third-party risk, risk of discrimination, maintenance of AI system and infrastructure security and integrity, assessing the need for human review, and other considerations.

To conduct the assessment, users of the toolkit are generally instructed to:

The toolkit is not intended to be used as a finite checklist or tick-box exercise, but rather as a framework for analysis for your organisation to consider and capture the key risks and mitigation strategies associated with developing and/or using AI (depending on whether you are a developer, deployer, or both). This approach recognises that the diversity of AI applications, their ability to learn and evolve, and the range of public and commercial settings in which they are deployed require a more nuanced and dynamic approach to compliance than past technologies. There are no "set and forget" approaches to making sure your AI behaves and continues to meet community expectations, which will be the ultimate test of accountability for organisations if something goes wrong.

Perhaps the most helpful part of the toolkit is a section on trade-offs: i.e., where organisations will need to weigh up often-competing values, such as data minimisation and statistical accuracy, in making AI design, development, and deployment decisions. This brings a refreshingly honest and realistic acknowledgement of the challenges in developing and using AI responsibly that is typically lacking in the high-level AI principles.

What about nearer to home?

Another useful "how to" guide is from the ever-practical Singaporeans. In early 2020, we saw Singapore's Personal Data Protection Commission (PDPC) release the second edition of its Model AI Governance Framework and, with it, the Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum; another example of a practical method of encouraging responsible AI adoption.

In Australia, we are yet to see these practical tools released. However, a small start has been made with government and industry's piloting of Australia's AI ethics principles.


How will artificial intelligence change the way we work? There's good and bad news – The Spinoff

Job losses caused by automation may grab the bulk of the headlines, but more of us may be affected by changes to recruitment and worker surveillance, writes Colin Gavaghan, director of the Centre for Law and Policy in Emerging Technologies.

Until recently, a question such as that in the headline has led immediately to discussions about how many jobs will be lost to the technological revolution of artificial intelligence. Over the past few years, though, more of us have started looking at some other aspects of this question. Such as: for those of us still in work, how will things change? What will it be like to work alongside, or under, AI and robots? Or to have decisions about whether we're hired, fired or promoted made by algorithms?

Those are some of the questions our multi-disciplinary team at Otago University, funded by the New Zealand Law Foundation, have been trying to answer. Last week, we set out our findings in a new report.

There's a danger of getting a bit too Black Mirror about these sorts of things, of always seeing the most dystopian possibilities in any new technology. That's a trap we've tried hard to avoid, because there really are potential benefits in the use of this sort of technology. For one thing, it's possible that AI and robots could make some workplaces safer. ACC recently invested in Kiwi robotics company Robotics Plus, for example, whose products are intended to reduce the risk of accidents at ports, forestry sites and sawmills.

Of course, workplace automation can also increase danger. We've already seen examples of workplace robots causing fatalities. One of our suggestions is that New Zealand's work safety rules need to catch up with the sort of robots we're likely to be working alongside in the future: fencing them off from human workers and providing an emergency off-switch isn't going to be the answer for cobots that are designed to work with and around us.

Physical injuries from robots may present the most visceral image of the risks of workplace automation. Luckily, they're likely to be fairly rare. Far more people, we think, will be affected by algorithmic management: the growing range of techniques used to allocate shifts, direct workers and monitor performance.

As with workplace robots, there's potential here for the technology to improve things for workers. One report talked about how it could benefit workers by giving clearer advance notice of when shifts will be and making it easier to swap and change them. There's no guarantee, though, that algorithmic management tools will be used to benefit workers. Our earlier warning aside, it's hard not to feel just a bit Black Mirror when seeing images of Amazon warehouses where workers are micro-managed to an extent beyond the wildest dreams of Ford or Taylor.

An Amazon fulfillment centre in Illinois, USA (Photo: Scott Olson)

A particular concern that's grown during the Covid crisis is the apparently increasing prevalence of workplace surveillance. While this is by no means a new phenomenon, AI technologies could offer employers the opportunity to monitor their workers more closely and ubiquitously than ever before.

Of course, not all employers will treat their workers like drones. But workplace protection rules don't exist for the good employers. If we want to avoid the soul-crushing erosion of privacy, autonomy and dignity that could accompany the worst abuses of this technology, we think those rules will need to be tightened in various ways.

Concerns about AI in the workplace don't start with algorithmic management, though. A lot of them start before the employment relationship even begins. Increasingly, AI technology is being used in recruitment: from targeted job adverts, to shortlisting of applicants, even to the interview stage, where companies like Hirevue provide algorithms to analyse recorded interviews with candidates.

The use of algorithms in hiring poses a serious risk of reinforcing bias, or of rendering existing bias less visible. Most obviously, there's a risk that algorithms will base their profiles of a good fit for a particular role on the history of people who've occupied that role before. If those people happen to have been overwhelmingly white, male and middle class, well, it's not hard to guess how that will probably go. Also, affective recognition software that's been trained on mostly white, neurotypical people could make unfair adverse judgments about people who don't fit into those categories, even if they score highly in the sorts of attributes that really matter. (Hirevue recently stopped using visual analysis for their assessment models, but since these sorts of platforms will obviously have to rely on inferences from something, maybe voice inflection or word choices, questions about cultural, class or neurodiversity awareness remain.)

But doesn't New Zealand already have laws protecting us against workplace hazards, privacy violations and discrimination? It does indeed. Like almost every other new technology, workplace AI isn't emerging into a legal vacuum. Unfortunately, some of those laws were designed for a different time, which can lead to what tech lawyers call regulatory disconnection: when there's a major change to the technology's form or use. For instance, the current rules around workplace robots seem to assume that they can be fenced off from human workers, whereas the cobots that are now coming into use will be working in close proximity to humans.

In other cases, the law seems fine, but the problem is spotting when the technology violates it. Our Human Rights Act prevents discrimination on a whole bunch of grounds, including sex, race and disability, but that won't be much help to someone who has no way of knowing why the algorithm has declined them. It may even be that employers themselves won't know who has been screened out at an early stage, or on what grounds.

As we argue, though, it doesn't have to be that way. Just as workplace robots could reduce injuries and fatalities, so could algorithmic auditing software help to detect and reduce bias in recruitment, promotion, and so on. It's not as though humans are perfect at this! Maybe AI could make things better. What we can't do, though, is complacently assume that it will do so.

In April, the EU Commission published a draft law for Europe which would require scrutiny and transparency for certain uses of AI technology. That would include a range of functions related to employment, such as recruitment, promotion and termination; task allocation; and monitoring and evaluating performance. Last year, New York City Council introduced a bill that would require algorithmic hiring tools to be audited for bias, and their use disclosed to candidates.

Our report calls for New Zealand to take the same kinds of steps. For instance, we propose that consideration should be given to following New York's example and requiring manufacturers of hiring tools to ensure those tools include functionality for bias auditing, so that client companies can readily perform the relevant audits.
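For readers wondering what a bias audit might actually compute, one long-standing check from US employment practice is the "four-fifths rule": if any group's selection rate falls below 80% of the highest group's rate, the tool warrants closer review. Neither the New York bill nor our report prescribes this particular metric, and the numbers below are purely illustrative.

```python
# Hypothetical screening outcomes by group (illustrative numbers only).
outcomes = {
    "group_a": {"passed": 48, "total": 100},
    "group_b": {"passed": 30, "total": 100},
}

def selection_rates(data):
    return {g: d["passed"] / d["total"] for g, d in data.items()}

def disparate_impact_ratio(data):
    """Lowest group's selection rate divided by the highest group's."""
    rates = selection_rates(data)
    return min(rates.values()) / max(rates.values())

ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 3))   # 0.625
print(ratio >= 0.8)      # False: flags the tool for closer review
```

The appeal of a metric like this is that a client company can run it on its own applicant data without needing access to the model's internals.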

Algorithmic impact assessments, looking at matters like privacy and worker safety and wellbeing, should be conducted before any algorithm is used in high-stakes contexts. And we've suggested that there should be important roles for the Office of the Privacy Commissioner and WorkSafe NZ in overseeing surveillance technologies.

We think these steps would go some way to ensuring that New Zealand businesses and workers (actual and prospective) could enjoy the benefits of these technologies, while being protected from the worst of the risks.

Our report isn't a prediction about what the future workplace will look like when AI and robots are a regular part of it. How things turn out depends substantially on the sorts of choices we make about how to use these technologies. And we're not proposing that we need to fear the future or rage against the machines. But we do think we should be keeping a close, watchful eye on them. Because you can bet they'll be keeping an eye on us.



Artificial intelligence and privacy – The Nation

As the fourth industrial revolution (Industry 4.0) began, robotics and artificial intelligence started gaining popularity. Artificial intelligence is undoubtedly an advanced concept that is going to make people's lives easier.

Because of artificial intelligence, more efficient utilisation of resources and a decline in production costs will be observed. Amazon, a world-famous online selling platform, leverages this technology, and it enables its sellers to know their customers and their preferences.

On the other hand, artificial intelligence can be used to manipulate the private data and information of users. In terms of privacy, world-famous scandals can be viewed. Firstly, the Facebook-Cambridge Analytica data scandal proved to be a breach of privacy rules: Facebook users' information was sold and misused for political purposes in the USA in 2016. Secondly, Julian Assange was convicted of cybercrime because he was involved in the leakage of inside information of a company, which he obtained with the help of artificial intelligence. It is an alarming situation for every person who is using nodes and gadgets. From family information to our location, everything is available in the cloud (storage). Artificial intelligence has a much brighter side, but its darker impacts on our lives can't be denied. Great care must be taken while inputting data on social media platforms.

SAGAR,

Shahdadkot.


Of All Things: Artificial intelligence is real | News | montgomerynews.com – Montgomery Newspapers

There seem to be a lot of articles about artificial intelligence in newspapers and magazines these days. Some of the other stuff in print makes me think that what we need is more regular intelligence.

Last week, the legislative branch of the 27-country European Union, headquartered in Brussels, announced plans to restrict the use of artificial intelligence. It's an attempt to head off abuse of artificial intelligence technology, instead of waiting for it to become a problem the way the United States does.

Artificial intelligence simulates human intelligence in computers that are programmed to think and act like human beings. (Hey, what could go wrong?)

Originally, artificial intelligence meant a machine doing something that would have previously needed human intelligence. From what I'm reading these days, I worry that the artificial intelligence may be more intelligent than the human kind.

All of the major computer companies seem to offer virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Assistant, for instance).

Alexa, for another instance, can handle your e-mail, your shopping list, the radio and television, cooking, a wake-up call, and communication with friends and family, and generally can run your life.

It's hard to believe (at least for an old guy like me) some of the things artificial intelligence can do.

For instance, some artificial intelligence systems can allow you to deposit checks in the bank from your living room, and, if necessary, some can decipher the handwriting on the check.

Artificial intelligence can also detect fraudulent use of a credit card by observing the user's normal credit card spending patterns.
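As a rough illustration of how that kind of pattern-based detection can work (real systems are far more sophisticated), a simple approach flags any charge that sits several standard deviations away from the cardholder's historical spending. The transaction amounts here are invented for the example.

```python
import statistics

# Illustrative transaction history (amounts in dollars); not real data.
history = [24.0, 18.5, 32.0, 27.5, 21.0, 30.0, 25.5, 19.0]

mu = statistics.mean(history)
sigma = statistics.stdev(history)

def is_anomalous(amount, threshold=3.0):
    """Flag a charge whose z-score exceeds the threshold."""
    return abs(amount - mu) / sigma > threshold

print(is_anomalous(26.0))    # False: fits the usual pattern
print(is_anomalous(950.0))   # True: far outside normal spending
```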

You're likely to run into that sort of electronic voodoo at any time in these ever-increasing days of artificial intelligence.

The intelligence algorithms can detect and remove hate speech faster than a human censor can. They are able to identify key words and phrases.

Google Maps, I'm told, not only tells you how to drive to a destination but, thanks to an artificial intelligence algorithm, tells you what time you'll get there, based on traffic conditions.

The Google app algorithm remembers the edges of buildings that have been fed into the system after the owner has manually identified them.

Another feature is the electronic (or possibly voodoo again) recognizing and understanding of handwritten house numbers. (On paper, I presume, not on the houses.)

The scary thing about the foregoing is that the people who devise, and write about, all this new technology claim that the field of artificial intelligence is still in its infancy. More programs are still to come, they tell us, that will much more accurately replicate human capabilities.

I wonder how long it will be before the computers tell us to just go home and take a nap, and they'll take care of everything.

Next thing you know, dear reader, weekly columns like this may be turned out by artificial intelligence, instead of by good old-fashioned writers like me. Please don't tell me that you won't know the difference.


Artificial Intelligence and Machine Learning Drive the Future of Supply Chain Logistics – Supply and Demand Chain Executive

Artificial intelligence (AI) is more accessible than ever and is increasingly used to improve business operations and outcomes, not only in transportation and logistics management, but also in diverse fields like finance, healthcare, retail and others. An Oxford Economics and NTT DATA survey of 1,000 business leaders conducted in early 2020 reveals that 96% of companies were at least researching AI solutions, and over 70% had either fully implemented or at least piloted the technology.

Nearly half of survey respondents said failure to implement AI would cause them to lose customers, with 44% reporting their company's bottom line would suffer without it.

Simply put, AI enables companies to parse vast quantities of business data to make well-informed and critical business decisions fast. And the transportation management industry specifically is using this intelligence and its companion technology, machine learning (ML), to gain greater process efficiency and performance visibility, driving impactful changes that bolster the bottom line.

McKinsey research reveals that 61% of executives report decreased costs and 53% report increased revenues as a direct result of introducing AI into their supply chains. For supply chains, lower inventory-carrying costs, inventory reductions, and lower transportation and labor costs are some of the biggest areas for savings captured by high-volume shippers. Further, AI boosts supply chain management revenue in sales forecasting, spend analytics, and logistics network optimization.

For the trucking industry and other freight carriers, AI is being effectively applied to transportation management practices to help reduce the amount of unprofitable empty miles, or "deadhead" trips, a carrier makes returning to domicile with an empty trailer after delivering a load. AI also identifies other hidden patterns in historical transportation data to determine the optimal mode selection for freight, the most efficient labor resource planning, truck loading and stop sequences, rate rationalization, and other process improvements, applying historical usage data to derive better planning and execution outcomes.

The ML portion of this emerging technology helps organizations optimize routing and even plan for weather-driven disruptions. Through pattern recognition, for instance, ML helps transportation management professionals understand how weather patterns affected the time it took to carry loads in the past, then considers current data sets to make predictive recommendations.
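A toy version of that kind of pattern recognition is an ordinary least-squares fit of historical transit times against a weather variable, which can then be used to predict today's load. The rainfall and transit-time figures below are hypothetical, chosen only to show the mechanics.

```python
# Hypothetical history: (rainfall_mm, transit_hours) pairs; illustrative only.
history = [(0, 10.0), (5, 10.8), (10, 11.9), (20, 14.1), (30, 16.0)]

n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n

# Ordinary least squares for a single feature.
slope_num = sum((x - mean_x) * (y - mean_y) for x, y in history)
slope_den = sum((x - mean_x) ** 2 for x, _ in history)
slope = slope_num / slope_den
intercept = mean_y - slope * mean_x

def predict_hours(rainfall_mm):
    """Estimate transit time for a given rainfall forecast."""
    return intercept + slope * rainfall_mm

print(round(predict_hours(25), 1))   # ≈ 15.0 hours under this toy fit
```

Production systems use many more features (traffic, season, lane, carrier) and far richer models, but the principle of learning from past conditions to score current ones is the same.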

The coronavirus disease (COVID-19) pandemic put a tremendous amount of pressure on many industries, the transportation industry included, but it also presented a silver lining: the opportunity for change. Since organizations are increasingly pressed to work smarter to fulfill customers' expectations and needs, there is increased appetite to retire inefficient legacy tools and invest in new processes and tech tools to work more efficiently.

Applying AI and ML to pandemic-posed challenges can be the critical difference between accelerating or slowing growth for transportation management professionals. When applied correctly, these technologies improve logistics visibility, offer data-driven planning insights and help successfully increase process automation.

Like many emerging technologies promising transformation, AI and ML have, in many cases, been misrepresented or, worse, overhyped as panaceas for vexing industry challenges. Transportation logistics organizations should be prudent and perform due diligence when considering when and how to introduce AI and ML to their operations. Panicked hiring of data scientists to implement expensive, complicated tools and overengineered processes can be a costly boondoggle and can sour the perception of the viability of these truly powerful and useful tech tools. Instead, organizations should invest time in learning more about the technology and how it is already driving value for successful adopters in the transportation logistics industry. What are some steps a logistics operation should take as it embarks on an AI/ML initiative?

Remember that the quality of your data will determine how fast or slow your AI journey goes. The lifeblood of an effective AI program (or any big data project) is proper data hygiene and management. Unfortunately, compiling, organizing and accessing this data is a major barrier for many. According to a survey conducted by O'Reilly, 70% of respondents report that poorly labeled data and unlabeled data are a significant challenge. Other common data quality issues respondents cited include poor data quality from third-party sources (~42%), disorganized data stores and lack of metadata (~50%), and unstructured, difficult-to-organize data (~44%).

Historically slow to adopt technology, the transportation industry has recently begun realizing the imperative and making up ground, with 60% of respondents to an MHI and Deloitte poll expecting to embrace AI in the next five years. Gartner predicts that by the end of 2024, 75% of organizations will move from piloting to operationalizing AI, driving a fivefold increase in streaming data and analytics infrastructures.

For many transportation management companies, accessing, cleansing and integrating the right data to maximize AI will be the first step. AI requires large volumes of detailed data and varied data sources to effectively identify models and develop learned behavior.

Before jumping on the AI bandwagon too quickly, companies should assess the quality of their data and their current tech stacks to determine what intelligence capabilities are already embedded.

And when it comes to investing in newer technologies to pave the path toward digital transformation, choose AI-driven solutions that do not require you to become a data scientist.

If you're unsure where to start, consider working with a transportation management system (TMS) provider with a record of experience and expertise in applying AI to transportation logistics operations.

Read more:
Artificial Intelligence and Machine Learning Drive the Future of Supply Chain Logistics - Supply and Demand Chain Executive

Global Artificial Intelligence in Healthcare Markets Report 2021: Growing Investment in AI Healthcare Start-ups & Increasing Cross-Industry…

DUBLIN, May 14, 2021 /PRNewswire/ -- The "Artificial Intelligence in Healthcare Market Forecast to 2027 - COVID-19 Impact and Global Analysis by Component, Application, End User, and Geography" report has been added to ResearchAndMarkets.com's offering.


Robot Assisted Surgery Segment to Grow at Faster CAGR During 2020-2027

Artificial Intelligence (AI) in Healthcare Market is expected to reach US$ 107,797.82 million by 2027 from US$ 3,991.23 million in 2019; it is estimated to grow at a CAGR of 49.8% from 2020 to 2027.
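As a quick sanity check on those figures, the compound annual growth rate implied by the 2019 and 2027 revenue numbers can be computed directly. This is an illustrative sketch; note that the report's quoted 49.8% applies to the 2020-2027 window, whose 2020 base value is not given here, so the 2019-based rate comes out slightly higher:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant yearly rate that
    grows start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Figures from the report, in US$ millions: 2019 -> 2027 is 8 years.
overall = cagr(3991.23, 107797.82, 8)
print(f"Implied 2019-2027 CAGR: {overall:.1%}")  # roughly 51% per year
```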

The report highlights trends prevailing in the market and the factors driving and hindering its growth. The growth of the artificial intelligence in healthcare market is attributed to the rising application of artificial intelligence in healthcare, growing investment in AI healthcare start-ups, and increasing cross-industry partnerships and collaborations. However, the dearth of a skilled AI workforce and imprecise regulatory guidelines for medical software are the major factors hindering market growth.

Based on application, the artificial intelligence in healthcare market is segmented into robot assisted surgery, virtual assistants, administrative workflow assistants, connected machines, diagnosis, clinical trials, fraud detection, cybersecurity, dosage error reduction, and others. The clinical trials segment held the largest market share in 2019, and the robot assisted surgery segment is estimated to register the highest CAGR during the forecast period. Rising adoption of robotic surgery, driven by better surgical outcomes, offers lucrative growth opportunities for the robot assisted surgery segment.

The artificial intelligence in healthcare market is expected to witness substantial growth post-pandemic. The global healthcare community has observed that, in order to develop and maintain a sustainable healthcare setup, computational technologies such as artificial intelligence are crucial. Moreover, the majority of market players have focused on developing AI-powered models to fight the coronavirus pandemic.


In addition, several research centers and governments have actively participated in building robust AI technologies that help healthcare professionals work efficiently even under resource shortages. These factors will eventually drive market growth.

Microsoft, Koninklijke Philips N.V., Intel Corporation, General Electric Company, Alphabet Inc., NVIDIA CORPORATION, Nuance Communications, Inc., Siemens Healthineers AG, Arterys Inc., and Johnson & Johnson Services, Inc. are among the leading companies operating in the artificial intelligence in healthcare market.

Key Topics Covered:

1. Introduction
1.1 Scope of the Study
1.2 Research Report Guidance
1.3 Market Segmentation

2. Artificial Intelligence in Healthcare Market - Key Takeaways

3. Research Methodology

4. Global Artificial Intelligence in Healthcare - Market Landscape
4.1 Overview
4.2 PEST Analysis
4.3 Expert Opinions

5. Artificial Intelligence in Healthcare Market - Key Market Dynamics
5.1 Market Drivers
5.1.1 Rising Application of Artificial Intelligence (AI) in Healthcare
5.1.2 Growing Investment in AI Healthcare Start ups
5.1.3 Increasing Cross-Industry Partnerships and Collaborations
5.2 Market Restraints
5.2.1 Dearth of Skilled AI Workforce and Imprecise Regulatory Guidelines for Medical Software
5.3 Market Opportunities
5.3.1 Increasing Potential in Emerging Economies
5.4 Future Trends
5.4.1 AI in Epidemic Outbreak Prediction and Response
5.5 Impact Analysis

6. Artificial Intelligence in Healthcare Market - Global Analysis
6.1 Global Artificial Intelligence in Healthcare Market Revenue Forecast And Analysis
6.2 Global Artificial Intelligence in Healthcare Market, By Geography - Forecast And Analysis
6.3 Market Positioning of Key Players

7. Artificial Intelligence in Healthcare Market Analysis - By Component
7.1 Overview
7.2 Artificial Intelligence in Healthcare Market Revenue Share, by Component (2019 and 2027)
7.3 Software Solution
7.4 Hardware
7.5 Services

8. Artificial Intelligence in Healthcare Market Analysis - By Application
8.1 Overview
8.2 Artificial Intelligence in Healthcare Market Revenue Share, by Application (2019 and 2027)
8.3 Robot Assisted Surgery
8.4 Virtual Assistants
8.5 Administrative Workflow Assistants
8.6 Connected Machines
8.7 Diagnosis
8.8 Clinical Trials
8.9 Fraud Detection
8.10 Cybersecurity
8.11 Dosage Error Reduction

9. Artificial Intelligence in Healthcare Market Analysis - By End User
9.1 Overview
9.2 Artificial Intelligence in Healthcare Market, by End-User, 2019 and 2027 (%)
9.3 Hospitals & Healthcare Providers
9.4 Patients
9.5 Pharma and Biotech Companies
9.6 Healthcare Payers

10. Global Artificial Intelligence in Healthcare Market - Geographical Analysis

11. Impact of COVID-19 Pandemic on Global Artificial Intelligence in Healthcare Market
11.1 North America: Impact Assessment of COVID-19 Pandemic
11.2 Europe: Impact Assessment of COVID-19 Pandemic
11.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic
11.4 Middle East and Africa: Impact Assessment of COVID-19 Pandemic
11.5 South and Central America: Impact Assessment of COVID-19 Pandemic

12. Artificial Intelligence (AI) in Healthcare Market - Industry Landscape
12.1 Overview
12.2 Growth Strategies in the Artificial Intelligence in Healthcare Market, 2019-2020
12.3 Inorganic Growth Strategies
12.3.1 Overview
12.4 Organic Growth Strategies
12.4.1 Overview

13. Company Profile
13.1 Key Facts
13.2 Business Description
13.3 Products and Services
13.4 Financial Overview
13.5 SWOT Analysis
13.6 Key Developments

Microsoft

Koninklijke Philips N.V.

Intel Corporation

General Electric Company

Alphabet Inc.

NVIDIA CORPORATION

Nuance Communications, Inc.

Siemens Healthineers AG

Arterys Inc.

Johnson & Johnson Services, Inc.

For more information about this report visit https://www.researchandmarkets.com/r/59o3z6

Media Contact:

Research and Markets
Laura Wood, Senior Manager
press@researchandmarkets.com

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716


View original content: http://www.prnewswire.com/news-releases/global-artificial-intelligence-in-healthcare-markets-report-2021-growing-investment-in-ai-healthcare-start-ups--increasing-cross-industry-partnerships-and-collaborations-301291588.html

SOURCE Research and Markets

Continued here:
Global Artificial Intelligence in Healthcare Markets Report 2021: Growing Investment in AI Healthcare Start-ups & Increasing Cross-Industry...

Artificial intelligence taking over DevOps functions, survey confirms – ZDNet

The pace of software releases has only accelerated, and DevOps is the reason things have sped up. Now, artificial intelligence and machine learning are also starting to play a role in this acceleration of code releases.

That's the word from GitLab's latest survey of 4,300 developers and managers, which finds some enterprises are releasing code ten times faster than in previous surveys. Almost all respondents, 84%, say they're releasing code faster than before, and 57% say code is being released twice as fast, up from 35% a year ago. Close to one in five, 19%, say their code goes out the door ten times faster.

Tellingly, 75% are using AI/ML or bots to test and review their code before release, up from 41% just one year ago. Another 25% say they now have full test automation, up from 13%.

About 21% of survey respondents say the pace of releases has accelerated with the addition of source code management to their DevOps practice (up from 15% last year), the survey's authors add. Another 18% added CI and 13% added CD. Nearly 12% say adding a DevOps platform has sped up the process, while just over 10% have added automated testing.

Developers' roles are shifting toward the operations side as well, the survey shows. Developers are taking on test and ops tasks, especially around cloud, infrastructure and security. At least 38% of developers said they now define or create the infrastructure their app runs on. About 13% monitor and respond to that infrastructure. At least 26% of developers said they instrument the code they've written for production monitoring -- up from just 18% last year.

Fully 43% of our survey respondents have been doing DevOps for between three and five years -- "that's the sweet spot where they've known success and are well-seasoned," the survey's authors point out. In addition, they add, "this was also the year where practitioners skipped incremental improvements and reached for the big guns: SCM, CI/CD, test automation, and a DevOps platform."

Industry leaders concur that DevOps has significantly boosted enterprise software delivery to new levels, but caution that it still tends to be seen as an IT activity, versus a broader enterprise initiative. "Just like any agile framework, DevOps requires buy-in," says Emma Gautrey, manager of development operations at Aptum. "If the development and operational teams are getting along working in harmony that is terrific, but it cannot amount to much if the culture stops at the metaphorical IT basement door. Without the backing of the whole of the business, continuous improvement will be confined to the internal workings of a single group."

DevOps is a commitment to quick development/deployment cycles, "enhanced by, among other things, an enhanced technical toolset -- source code management, CI/CD, orchestration," says Matthew Tiani, executive vice president at iTech AG. But it takes more than toolsets, he adds. Successful DevOps also incorporates "a compatible development methodology such as agile and scrum, and an organization commitment to foster and encourage collaboration between development and operational staff."

The organizational aspects of DevOps tend to be more difficult, Tiani adds. "Wider adoption of DevOps within the IT services space is common because the IT process improvement goal is more intimately tied to the overall organizational goals. Larger, more established companies may find it hard to implement policies and procedures where a complex organizational structure impedes or even discourages collaboration. In order to effectively implement a DevOps program, an organization must be willing to make the financial and human investments necessary for maintaining a quick-release schedule."

What's missing from many current DevOps efforts is "the understanding and shared ownership of committing to DevOps," says Gautrey. "Speaking to the wider community, there is often a sense that the tools are the key, and that once in place a state of enlightenment is achieved. That sentiment is little different from the early days of the internet, where people would create their website once and think 'that's it, I have web presence.'"

That's where the organization as a whole needs to be engaged, and this comes to fruition "with build pipelines that turn red the moment an automated test fails, and behavioral-driven development clearly demonstrating the intentions of the software," says Gautrey. "With DevOps, there is a danger in losing interaction with individuals over the pursuit of tools and processes. Nothing is more tempting than to apply a blanket ruling over situations because it makes the automation processes consistent and therefore easier to manage. Responding to change means more than how quickly you can change 10 servers at once. Customer collaboration is key."

More here:
Artificial intelligence taking over DevOps functions, survey confirms - ZDNet

Global Artificial Intelligence in Healthcare Markets to 2027: Robot Assisted Surgery Segment to Register the Highest Growth Rate -…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Market Forecast to 2027 - COVID-19 Impact and Global Analysis by Component, Application, End User, and Geography" report has been added to ResearchAndMarkets.com's offering.

Robot Assisted Surgery Segment to Grow at Faster CAGR During 2020-2027

Artificial Intelligence (AI) in Healthcare Market is expected to reach US$ 107,797.82 million by 2027 from US$ 3,991.23 million in 2019; it is estimated to grow at a CAGR of 49.8% from 2020 to 2027.

The report highlights trends prevailing in the market, and the factors driving and hindering the market growth. The growth of the artificial intelligence in healthcare market is attributed to the rising application of artificial intelligence in healthcare, growing investment in AI healthcare start-ups, and increasing cross-industry partnerships and collaborations. However, the dearth of skilled AI workforce and imprecise regulatory guidelines for medical software are the major factors hindering the market growth.

Based on application, the artificial intelligence in healthcare market is segmented into robot assisted surgery, virtual assistants, administrative workflow assistants, connected machines, diagnosis, clinical trials, fraud detection, cybersecurity, dosage error reduction, and others. The clinical trials segment held the largest market share in 2019, and the robot assisted surgery segment is estimated to register the highest CAGR during the forecast period. Rising adoption of robotic surgery, driven by better surgical outcomes, offers lucrative growth opportunities for the robot assisted surgery segment.

The artificial intelligence in healthcare market is expected to witness substantial growth post-pandemic. The global healthcare community has observed that, in order to develop and maintain a sustainable healthcare setup, computational technologies such as artificial intelligence are crucial. Moreover, the majority of market players have focused on developing AI-powered models to fight the coronavirus pandemic.

In addition, several research centers and governments have actively participated in building robust AI technologies that help healthcare professionals work efficiently even under resource shortages. These factors will eventually drive market growth.

Microsoft, Koninklijke Philips N.V., Intel Corporation, General Electric Company, Alphabet Inc., NVIDIA CORPORATION, Nuance Communications, Inc., Siemens Healthineers AG, Arterys Inc., and Johnson & Johnson Services, Inc. are among the leading companies operating in the artificial intelligence in healthcare market.

Key Topics Covered:

1. Introduction

1.1 Scope of the Study

1.2 Research Report Guidance

1.3 Market Segmentation

2. Artificial Intelligence in Healthcare Market - Key Takeaways

3. Research Methodology

4. Global Artificial Intelligence in Healthcare - Market Landscape

4.1 Overview

4.2 PEST Analysis

4.3 Expert Opinions

5. Artificial Intelligence in Healthcare Market - Key Market Dynamics

5.1 Market Drivers

5.1.1 Rising Application of Artificial Intelligence (AI) in Healthcare

5.1.2 Growing Investment in AI Healthcare Start ups

5.1.3 Increasing Cross-Industry Partnerships and Collaborations

5.2 Market Restraints

5.2.1 Dearth of Skilled AI Workforce and Imprecise Regulatory Guidelines for Medical Software

5.3 Market Opportunities

5.3.1 Increasing Potential in Emerging Economies

5.4 Future Trends

5.4.1 AI in Epidemic Outbreak Prediction and Response

5.5 Impact Analysis

6. Artificial Intelligence in Healthcare Market - Global Analysis

6.1 Global Artificial Intelligence in Healthcare Market Revenue Forecast And Analysis

6.2 Global Artificial Intelligence in Healthcare Market, By Geography - Forecast And Analysis

6.3 Market Positioning of Key Players

7. Artificial Intelligence in Healthcare Market Analysis - By Component

7.1 Overview

7.2 Artificial Intelligence in Healthcare Market Revenue Share, by Component (2019 and 2027)

7.3 Software Solution

7.4 Hardware

7.5 Services

8. Artificial Intelligence in Healthcare Market Analysis - By Application

8.1 Overview

8.2 Artificial Intelligence in Healthcare Market Revenue Share, by Application (2019 and 2027)

8.3 Robot Assisted Surgery

8.4 Virtual Assistants

8.5 Administrative Workflow Assistants

8.6 Connected Machines

8.7 Diagnosis

8.8 Clinical Trials

8.9 Fraud Detection

8.10 Cybersecurity

8.11 Dosage Error Reduction

9. Artificial Intelligence in Healthcare Market Analysis - By End User

9.1 Overview

9.2 Artificial Intelligence in Healthcare Market, by End-User, 2019 and 2027 (%)

9.3 Hospitals & Healthcare Providers

9.4 Patients

9.5 Pharma and Biotech Companies

9.6 Healthcare Payers

10. Global Artificial Intelligence in Healthcare Market - Geographical Analysis

11. Impact of COVID-19 Pandemic on Global Artificial Intelligence in Healthcare Market

11.1 North America: Impact Assessment of COVID-19 Pandemic

11.2 Europe: Impact Assessment Of COVID-19 Pandemic

11.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic

11.4 Middle East and Africa: Impact Assessment of COVID-19 Pandemic

11.5 South and Central America: Impact Assessment of COVID-19 Pandemic

12. Artificial Intelligence (AI) in Healthcare Market -Industry Landscape

12.1 Overview

12.2 Growth Strategies in the Artificial Intelligence in Healthcare Market, 2019-2020

12.3 Inorganic Growth Strategies

12.3.1 Overview

12.4 Organic Growth Strategies

12.4.1 Overview

13. Company Profile

13.1 Key Facts

13.2 Business Description

13.3 Products and Services

13.4 Financial Overview

13.5 SWOT Analysis

13.6 Key Developments

For more information about this report visit https://www.researchandmarkets.com/r/3obq2z

Read this article:
Global Artificial Intelligence in Healthcare Markets to 2027: Robot Assisted Surgery Segment to Register the Highest Growth Rate -...

Here’s What the Dreams of Google’s Artificial Intelligence Look Like – Analytics Insight

What if computers could dream? In fact, they can. Google's innovative DeepDream software turns artificial intelligence neural networks inside out to help us understand how computers think.

When a bunch of artificial brains at Google began producing surreal images from otherwise ordinary photos, engineers compared what they saw to dreamscapes. The image-generation method was termed Inceptionism, and the code that powered it was called DeepDream.

Wikipedia describes DeepDream as a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, creating a dream-like, hallucinogenic appearance in the deliberately over-processed images.

Color scrolls, spinning shapes, stretched faces, swirling eyeballs, and awkward patterns of shadow and light feature in the computer-generated images. The computers seemed to be hallucinating in an astonishingly human manner. The aim of the project was to see how well a neural network could identify different animals and environments by having the machine explain what it observed.

So what is really going on in these dreaming neural networks, and what does it mean for the future of artificial intelligence?

The results reveal a lot about where artificial intelligence is headed, and why it could be more imaginative, ambiguous, and unpredictable than we'd like.

The Google artificial neural network is modeled after the central nervous system of animals and functions as a kind of computer brain. When engineers feed a picture to the network, the first layer of neurons examines it. That layer then communicates with the next, which builds its own representation of the image. The process continues for 10 to 30 layers, each one isolating and refining key elements until the picture is interpreted. The neural network then reports what entity it has, often with limited success, attempted to identify. This is how image recognition works.
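The layer-by-layer pass described above can be sketched with a toy stack of convolutions. This is an illustrative approximation in plain NumPy, with random filters standing in for trained ones; it is not Google's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_layer(image, kernel):
    """One 'layer' of analysis: a valid 2-D cross-correlation
    followed by a ReLU nonlinearity."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)  # ReLU: keep only positive responses

# A toy "photo" and a short stack of random 3x3 filters standing in
# for the 10-30 trained layers the article describes.
activation = rng.random((32, 32))
for _ in range(5):
    kernel = rng.standard_normal((3, 3))
    # Each pass re-describes the previous layer's output,
    # shrinking the feature map as it goes.
    activation = conv_layer(activation, kernel)
print(activation.shape)  # (22, 22): each valid 3x3 pass trims 2 pixels per axis
```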

After that, the Google team realized they could reverse the procedure. By giving the network complete freedom and asking it to interpret and modify an input picture so as to evoke a specific interpretation, they hoped to learn which features the network had actually learned and which it hadn't.
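That reversal, adjusting the input to excite a chosen feature rather than adjusting the network, is the heart of DeepDream. Below is a heavily simplified sketch that uses a hypothetical linear "feature detector" in place of a deep layer of a trained convnet; the real method backpropagates through a full network such as Inception:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical learned "feature detector". In a real network this
# would be a unit deep inside a trained convnet, not a random vector.
feature = rng.standard_normal(64)
feature /= np.linalg.norm(feature)  # unit length for a clean gradient

def activation(image):
    """How strongly the (toy, linear) feature responds to the image."""
    return float(feature @ image)

# Start from near-noise and repeatedly nudge the image toward whatever
# excites the feature: gradient ascent on the INPUT, detector held fixed.
# For this linear unit, d(activation)/d(image) is just the feature vector.
image = rng.standard_normal(64) * 0.01
for _ in range(100):
    image += 0.1 * feature

# The image now strongly "contains" the feature the detector looks for,
# which is the same mechanism that makes DeepDream hallucinate patterns.
print(activation(image))
```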

What happened next was quite remarkable. The researchers discovered that these neural networks could not only distinguish between different images but also had enough knowledge to produce images, culminating in unexpected computational representations. For example, in response to the team's requests for common objects such as insects and bananas, the network produced highly unusual images.

According to IFL Science, computers can see images in objects in a way that artists can only dream of replicating. They see buildings within clouds, temples in trees, and birds in leaves. Highly detailed elements seem to pop up out of nowhere. One processed image of a cloudy sky shows that Google's artificial neural network is a champion at finding pictures in clouds.

This technique, which creates images where there aren't any, is aptly called inceptionism. There is an inceptionism gallery where you can explore the computer's artwork.

Finally, the designers gave the computer full, free rein over its artwork. The final pieces were beautiful pictures derived from a mechanical mind, which the engineers call dreams. The blank canvas was simply an image of white noise. The computer pulled patterns out of the noise and created dreamscapes: pictures that could only come from an infinite imagination.


Read more here:
Here's What the Dreams of Google's Artificial Intelligence Look Like - Analytics Insight