
Artificial Intelligence (AI) in Clinical Trials Market is Projected to Reach $4.8 billion by 2027- Exclusive Report by MarketsandMarkets -…

Chicago, Oct. 18, 2022 (GLOBE NEWSWIRE) -- According to the new market research report "Artificial Intelligence (AI) in Clinical Trials Market by Offering (Software, Services), Technology (Machine Learning, Deep Learning, Supervised), Application (Cardiovascular, Metabolic, Oncology), End User (Pharma, Biotech, CROs) - Global Forecasts to 2027", the market is projected to reach USD 4.8 billion by 2027 from USD 1.5 billion in 2022, at a CAGR of 25.6% during the forecast period.
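Headline figures like these follow the standard compound annual growth rate (CAGR) relationship, end = start × (1 + CAGR)^years. The sketch below (function names are illustrative, not from the report) shows how the USD 1.5 billion (2022) and USD 4.8 billion (2027) endpoints relate to the stated 25.6% CAGR; small differences are down to rounding in the published numbers:

```python
# CAGR links a starting value, an ending value, and a number of years:
#   end = start * (1 + cagr) ** years

def future_value(start: float, cagr: float, years: int) -> float:
    """Project a value forward at a constant compound annual growth rate."""
    return start * (1 + cagr) ** years

def implied_cagr(start: float, end: float, years: int) -> float:
    """Back out the CAGR implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

# Report figures: USD 1.5B (2022) growing to USD 4.8B (2027), i.e. 5 years.
print(round(future_value(1.5, 0.256, 5), 2))       # ~4.69B at the stated 25.6% CAGR
print(round(implied_cagr(1.5, 4.8, 5) * 100, 1))   # ~26.2% implied by the endpoints
```

Projecting the 2022 base at the stated CAGR lands close to, but not exactly on, the 2027 figure, which is typical when a report rounds both the endpoints and the growth rate independently.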

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=42687548

Scope of the Artificial Intelligence (AI) in Clinical Trials Market Report:

The growth of this market is driven by the growing need to control development costs and reduce the time involved in drug development, as well as the increasing adoption of AI-based platforms to improve the productivity and efficiency of clinical trials. On the other hand, a lack of data sets in the field of clinical trials and the inadequate availability of skilled labor are some of the factors challenging the growth of the market.

Services segment is expected to grow at the highest rate during the forecast period

Based on offering, the AI in clinical trials market is segmented into software and services. In 2021, the services segment accounted for the largest share of the global AI in clinical trials market and is also expected to grow at the highest CAGR during the forecast period. The benefits associated with AI services and the strong demand for AI services among end users are the key factors driving the growth of this market segment.

Machine learning technology segment accounted for the largest share of the global AI in clinical trials market

Based on technology, the Artificial Intelligence (AI) in Clinical Trials Market is segmented into machine learning and other technologies. The machine learning segment accounted for the largest share of the global market in 2021 and is expected to grow at the highest CAGR during the forecast period. The machine learning segment is further divided into deep learning, supervised learning, and other machine learning technologies. The deep learning segment accounted for the largest share of the market in 2021 and is also expected to grow at the highest CAGR during the forecast period.

Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=42687548

The oncology application segment accounted for the largest share of the AI in clinical trials market in 2021

On the basis of application, the Artificial Intelligence (AI) in Clinical Trials Market is segmented into oncology, neurological diseases and conditions, cardiovascular diseases, metabolic diseases, infectious diseases, immunology diseases, and other applications. The oncology segment accounted for the largest share of the market in 2021; the increasing demand for effective cancer drugs and the large number of drug trials in the field of oncology are contributing to the adoption of AI-enabled technologies in this application area. In addition, many players are developing and adopting oncology-focused AI tools for clinical trials, further driving segment growth. The infectious diseases segment is estimated to register the highest CAGR during the forecast period, owing to the increasing number of clinical trials for vaccines and drugs for COVID-19 and other infectious diseases, as well as rising investment in R&D for infectious diseases.

Pharmaceutical & biotechnology companies segment accounted for the largest share of the global AI in clinical trials market

On the basis of end user, the Artificial Intelligence (AI) in Clinical Trials Market is segmented into pharmaceutical & biotechnology companies, CROs, and other end users. The pharmaceutical & biotechnology companies segment accounted for the largest market share in 2021. Factors such as the increasing adoption of AI-enabled technologies to improve the productivity and efficiency of clinical trials, along with growing cross-industry collaborations and partnerships to leverage AI solutions for R&D and the overall development process, are driving the growth of this end-user segment.

Speak to Analyst: https://www.marketsandmarkets.com/speaktoanalystNew.asp?id=42687548

Geographical Growth Scenario:

North America accounted for the largest share of the global AI in clinical trials market in 2021 and is also expected to grow at the highest CAGR during the forecast period. North America, which comprises the US and Canada, forms the largest market for AI in clinical trials. These countries have been early adopters of AI technology in clinical trials and development. The presence of key established players, a well-established pharmaceutical and biotechnology industry, and a high focus on R&D with substantial investment are some of the key factors responsible for the large share and high growth rate of this market.

Key Players:

Prominent players in the Artificial Intelligence in Clinical Trials Market are IBM Corporation, Exscientia, Saama Technologies, Unlearn.AI, Inc., BioSymetrics, Euretos, Trials.Ai, Insilico Medicine, Ardigen, Pharmaseal, Koninklijke Philips N.V., Intel, Numerate, AiCure, LLC, Envisagenics, Nuritas, BioAge Labs, Inc., SymphonyAI, Median Technologies, Innoplexus, Antidote Technologies, Inc., GNS Healthcare, Koneksa Health, Halo Health Systems, and Deep Lens AI. These players adopted organic as well as inorganic growth strategies, such as product upgrades, collaborations, agreements, partnerships, and acquisitions, to expand their offerings, cater to the unmet needs of customers, increase their profitability, and expand their presence in the global market.

Browse Adjacent Markets@ Healthcare IT Market Research Reports & Consulting


Military researchers to brief industry on artificial intelligence (AI), sensors, and autonomy program – Military & Aerospace Electronics

ARLINGTON, Va. - U.S. military researchers will brief industry next month on an upcoming project to develop new kinds of artificial intelligence (AI) and machine autonomy for battle management and sensor fusion.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued a special notice (DARPA-SN-23-06) on Monday for the Artificial Intelligence Reinforcements (AIR) project.

The DARPA AIR initiative seeks to fill gaps in research on developing and deploying tactical autonomy capability in real-world military operations. Industry briefings will be held from 8:30 a.m. to 5 p.m. on Monday, 14 Nov. 2022, at Amentum's Ballston Conference Center, 4121 Wilson Blvd., in Arlington, Va.

AIR will focus on previously avoided dimensions to enable tactical autonomy in integrated sensors, scalability to large engagements, adaptability to changing conditions, and the ability to learn predictive models that incorporate uncertain knowledge of adversary and self, as well as deceptive effects.

Related: Marines ask Sentient Vision for artificial intelligence (AI) and machine autonomy for unmanned reconnaissance

AIR will pair existing, maturing, and emerging algorithmic approaches with expert human feedback to rapidly evolve cooperative autonomous behaviors that solve previously avoided challenges.

AIR will address two technical areas: creating fast and accurate models that capture uncertainty and automatically improve with more data; and developing AI-driven algorithmic approaches to real-time distributed autonomous tactical execution within uncertain, dynamic, and complex operational environments.

The AIR program also will develop ways to design, test, and implement future iterations of AIR software.

Related: Artificial intelligence (AI) to enable manned and unmanned vehicles adapt to unforeseen events like damage

Briefings will be at Amentum's Ballston Conference Center on the second floor of 4121 Wilson Blvd. in Arlington, Va., on Monday, 14 Nov. 2022, from 8:30 a.m. to 5:00 p.m. Check-in begins at 8 a.m.

Briefings will include information that is International Traffic in Arms Regulations (ITAR)-restricted, so attendance is limited to U.S. citizens or U.S. permanent residents representing U.S. companies. Briefings will be classified at the Collateral SECRET level and will require security clearances.

Those interested in attending should register online at https://creative.gryphontechnologies.com/darpa/tto/air/pd/. Those seeking to attend should fax their security clearances and visit requests to Amentum at (571) 428-4358. Registration closes on 4 Nov. 2022.

Related: Researchers ask industry for enabling technologies in artificial intelligence (AI) and machine automation

Those attending may meet individually with the AIR program manager, Lt. Col. Ryan "Hal" Hefron, on Tuesday, 15 Nov. 2022. Email DARPA-SN-23-06@darpa.mil to request an individual session.

One-on-one meetings will be at DARPA at 675 North Randolph St. in Arlington, Va., and will require security clearance. Fax clearance/visit requests to DARPA at (703) 528-3655 or send via encrypted e-mail to VWC@darpa.mil.

Email questions or concerns to Lt. Col. Ryan Hefron at DARPA-SN-23-06@darpa.mil. More information is online at https://sam.gov/opp/1b972abff6de4a2fbf7999af316e52c0/view.


Artificial Intelligence In Manufacturing Market is expected to generate a revenue of USD 52.37 Billion by 2030, Globally, at 47.80% CAGR: Verified…

The industry's automation and adoption of IoT, growing complex data sets, falling hardware costs, and increased computing power are driving the Artificial Intelligence in Manufacturing Market.

JERSEY CITY, N.J., Oct. 18, 2022 /PRNewswire/ -- Verified Market Research recently published a report, "Artificial Intelligence In Manufacturing Market" By Component (Hardware, Software), By Technology (Deep Learning, Machine Learning), By End-User Industry (Healthcare, Manufacturing), and By Geography.

As per the deep research carried out by Verified Market Research, the global Artificial Intelligence In Manufacturing Market size was valued at USD 1.56 Billion in 2022 and is projected to reach USD 52.37 Billion by 2030, growing at a CAGR of 47.80% from 2023 to 2030.

Download PDF Brochure: https://www.verifiedmarketresearch.com/download-sample/?rid=6834

Browse in-depth TOC on "Artificial Intelligence In Manufacturing Market"

202 Pages, 126 Tables, 37 Figures

Global Artificial Intelligence In Manufacturing Market Overview

Artificial intelligence (AI) technology enables machines to perform tasks that were previously performed by humans. It creates machines that can learn, plan, recognise speech, and solve problems. One of the primary goals of artificial intelligence is the development of intelligent machines and smart systems. It is useful in a variety of industries, such as gaming, expert systems, vision systems, intelligent robots, and natural language processing, to name a few. The robotics industry is being transformed by artificial intelligence (AI), which incorporates computer vision and machine learning. AI-powered automation has numerous potential applications in a variety of industries, including material processing, aviation, healthcare, agriculture, and energy. Artificial intelligence (AI) is used to detect and automate equipment issues.

AI technology's expanding applications and simple deployment methods have piqued the government's interest, resulting in increased government investment in AI and related technologies. Artificial intelligence (AI) has been adopted by a variety of industries, including aerospace, healthcare, manufacturing, and automotive, as a result of advancements in deep learning and Artificial Neural Networks (ANN). There is a growing demand for artificial intelligence industrial solutions as more data must be examined and interpreted. The development of more dependable cloud computing infrastructures, as well as advances in dynamic artificial intelligence solutions, have a significant impact on the market's potential for growth. AI is being used to accelerate commercial processes, automate risky jobs, and supplement or replace skilled labour across the board.

Key Developments

Key Players

The major players in the market are Google LLC, Microsoft, Advanced Micro Devices, Arm Limited, Atomwise, Inc., Clarifai, Inc, Enlitic, Inc., International Business Machines Corporation, IBM Watson Health, and Intel Corporation.

Verified Market Research has segmented the Global Artificial Intelligence In Manufacturing Market On the basis of Component, Technology, End-User Industry, and Geography.

Browse Related Reports:

Automotive Artificial Intelligence Market By Technology (Computer Vision, Context Awareness), By Process (Data Mining, Image Recognition), By Application (Semi-autonomous Driving, Human Machine Interface), By Geography, And Forecast

Artificial Intelligence SAAS Market By Organization Size (Large Enterprise, Small & Medium Enterprise), By Geography, And Forecast

Competitive Intelligence Tools Market By Product (Clouds-Based, On-Premise), By Application (Large Companies, Small And Medium-Sized Companies), By Geography, And Forecast

Artificial Intelligence Platforms Market By Product (On-premise, Cloud-based), By Application (Voice Processing, Text Processing, Image Processing) By Geography, And Forecast

Top 10 Automotive Artificial Intelligence Companies gearing towards driverless mobility solutions

Visualize the Artificial Intelligence In Manufacturing Market using Verified Market Intelligence:

Verified Market Intelligence is our BI-enabled platform for narrative storytelling in this market. VMI offers in-depth forecasted trends and accurate insights on over 20,000 emerging and niche markets, helping you make critical revenue-impacting decisions for a brilliant future.

VMI provides a holistic overview and global competitive landscape with respect to region, country, segment, and key players of your market. Present your market report and findings with an inbuilt presentation feature, saving over 70% of your time and resources for investor, sales & marketing, R&D, and product development pitches. VMI enables data delivery in Excel and interactive PDF formats with over 15 key market indicators for your market.

About Us

Verified Market Research is a leading global research and consulting firm serving over 5,000 customers. Verified Market Research provides advanced analytical research solutions while offering information-enriched research studies. We offer insights into strategic and growth analyses, the data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, and use industrial techniques to collect and analyze data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise, and years of collective experience to produce informative and accurate research.

We study 14+ categories, including Semiconductor & Electronics, Chemicals, Advanced Materials, Aerospace & Defense, Energy & Power, Healthcare, Pharmaceuticals, Automotive & Transportation, Information & Communication Technology, Software & Services, Information Security, Mining, Minerals & Metals, Building & Construction, Agriculture, and Medical Devices, across more than 100 countries.

Contact Us

Mr. Edwyne Fernandes
Verified Market Research
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll Free: +1 (800)-782-1768
Email: [emailprotected]
Web: https://www.verifiedmarketresearch.com/
Follow Us: LinkedIn | Twitter

Logo: https://mma.prnewswire.com/media/1315349/Verified_Market_Research_Logo.jpg

SOURCE Verified Market Research


New Book Heralds a New Era in Healthcare with Artificial Intelligence Already Transforming the Patient Experience – PR Newswire

AI is not going to replace physicians, but physicians who use AI will replace those who don't.

MIAMI, Oct. 18, 2022 /PRNewswire/ -- The new book "How AI Can Democratize Healthcare" by Michael Ferro and Robin Farmanfarmaian dives into the cutting edge of technology moving care from the clinic to where the patient is located, including their home, office, or even traveling. Predictive software, AI voice technology, and digital therapeutics are just some of the innovations detailed in this provocative new title that will shape tomorrow's future today.

Whether people are sick or well, AI-based software programs can monitor conditions in a real-world environment and can be a daily presence in someone's life, delivering personalized interventions precisely when needed. The incremental cost to provide an AI software program to each additional person is negligible so it can scale quickly, and has no geographical boundaries, making worldwide adoption almost effortless.

New ways to improve patient outcomes are in high demand and the major healthcare stakeholders are innovating to bring much needed improvements. AI advancements across Vocal Biomarkers, Remote Patient Monitoring, Digital Therapeutics, Voice Recognition, Decision Support Tools, Virtual Reality, and Predictive Care are converging together to create Ambient Healthcare Computing: the ever-present healthcare assistant that monitors, analyzes, and provides the right interventions to the right person at the right time.

Simply put, AI is revolutionizing the Where, When, What, and How people access healthcare. Legacy systems composed of trained healthcare professionals and physical clinics are a limited and expensive resource. With location removed from the equation, shifting care to the point of the patient increases access and affordability, in effect democratizing healthcare. Further, the negligible incremental cost to provide an AI-based software program per person makes AI infinitely scalable and accessible.

About the Authors:

Healthcare tech entrepreneur and author Michael Ferro is the Founder and CEO of Merrick Ventures, a Miami-based PE firm focused on AI and democratizing healthcare. Michael has built multiple companies that he took public or that were acquired for over $1B, including Merge Healthcare and Click Commerce. With the Michael and Jacqueline Ferro Foundation, he has donated millions, including $2M to Northwestern for entrepreneurship and $1M to the Melanoma Research Alliance (MRA). Ferro has won more than 15 awards, including Forbes Tech's 100 Highest Rollers and the Technology Entrepreneur of the Year from Ernst & Young, and was a nominee for an Emmy Award for Best Documentary.

Professional speaker and entrepreneur Robin Farmanfarmaian has given over 180 talks in 15 countries on technology in healthcare. As an entrepreneur, she has worked with over 20 companies in pharma, devices, and AI focused on major diseases, including oncology, neurology, diabetes, and more. A misdiagnosis as a teenager led to 43 hospitalizations, six major surgeries, and multiple organ removals. At age 26, Robin fired her healthcare team and took control of her health, including taking herself off high-dose opioids. She rebuilt her care team, was diagnosed correctly, and went into remission overnight with the right medication. "How AI Can Democratize Healthcare" is a follow-up to Robin's 2015 book, "The Patient as CEO: How Tech Empowers the Healthcare Consumer."

Advance Praise for "How AI Can Democratize Healthcare"

"Artificial Intelligence is going to very rapidly transform medicine, along with ubiquitous cameras and all kinds of sensors. If you want to understand this disruption to our healthcare, Michael and Robin will explain how this will impact you as well as positively impacting billions of lives." - Ray Kurzweil, inventor, author, and futurist

"'How AI Can Democratize Healthcare' by Michael Ferro and Robin Farmanfarmaian is easy to read and chock-full of great examples of startups in healthcare AI. We're in the first inning of AI in healthcare, and this book points to a very exciting future. Congratulations on a great book!" - John E. Kelly III, PhD, IBM EVP (retired)

"There is overwhelming complexity at the intersection of artificial intelligence, medical technology, human factors and regulatory context. There are almost too many new health care products, services and companies to sort through. Thankfully, 'How AI Can Democratize Healthcare' gives us the old-fashioned sort of intelligence enhancement, clear prose, that makes our collective future both understandable and optimistic. A must-read for citizens, investors and policy-makers alike." - Bing Gordon, General Partner & Chief Product Officer, Kleiner Perkins

SOURCE Robin Farmanfarmaian


Oracle joins up with Nvidia to boost its artificial intelligence capabilities – The National

US software company Oracle announced a multiyear partnership with Nvidia, a global leader in artificial intelligence hardware and software that designs and manufactures graphics processing units (GPUs) for various industries, to boost its cloud infrastructure.

Under the partnership, announced in parallel with the opening of the Oracle Cloud World event in Las Vegas, Nevada, Oracle will use tens of thousands of Nvidia's GPUs to accelerate the pace of computing and AI advancements in its cloud infrastructure.

Following the announcement, Oracle's stock was trading slightly up at $67.03 at 5.40pm New York time, while Nvidia was trading up at $119.67 a share.

The Texas-based company intends to bring the full Nvidia computing stack including GPUs, systems and software to Oracle Cloud Infrastructure (OCI).

GPUs can process various tasks simultaneously, making them useful for machine learning, video editing and gaming applications.


OCI is adding tens of thousands more Nvidia GPUs, including the A100 and upcoming H100, to its capacity, Oracle said in a statement.

About a month ago, the US restricted Nvidia from exporting its A100 and H100 chips, designed to speed up machine-learning tasks, to China and Russia.

"Combined with OCI's AI cloud infrastructure, cluster networking and storage, this partnership provides enterprises a broad, easily accessible portfolio of options for AI training and deep learning inference at scale," Oracle said.

"To drive long-term success in today's business environment, organisations need answers and insight faster than ever," the company's chief executive Safra Catz said.

"Our expanded alliance with Nvidia will deliver the best of both companies' expertise to help customers across industries, from health care and manufacturing to telecommunications and financial services, overcome the multitude of challenges they face."

The Oracle and Nvidia partnership comes as more companies integrate AI and machine-learning tools to streamline their operations and as AI models become more complex.

The companies did not disclose the financial details of the deal.


"Accelerated computing and AI are key to tackling rising costs in every aspect of operating businesses," California-based Nvidia's founder and chief executive Jensen Huang said.

"Enterprises are increasingly turning to cloud-first AI strategies that enable fast development and scalable deployment. Our partnership with Oracle will put Nvidia AI within easy reach for thousands of companies."

The global AI market is expected to grow at an annual rate of more than 38 per cent from 2022 to 2030, from $93.5 billion last year, Grand View Research reported.

AI will be the common theme in the top 10 technology trends in the next few years, and these are expected to quicken breakthroughs across key economic sectors and society, Alibaba Damo Academy, the global research arm of Chinese company Alibaba Group, said in a report.

Updated: October 19, 2022, 12:30 PM


The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act – Fasken

Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]

On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]

If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems as well as monetary penalties and new criminal offences on certain unlawful or fraudulent conduct in respect of AI systems.

Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.

Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harms caused by AI in a manner that tries to be balanced with the need to allow technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and controlling their development and deployment. Further, much of the substance and detail of AIDA is left to be elaborated in future regulations, including the key definition of the high-impact AI systems to which most of AIDA's obligations attach.

The table below sets out some of the key similarities and differences between the current drafts of AIDA and the EU AI Act.

High-risk system means:

The EU AI Act does not apply to:

AIDA does not stipulate an outright ban on AI systems presenting an unacceptable level of risk.

It does, however, make it an offence to:

The EU AI Act prohibits certain AI practices and certain types of AI systems, including:

Persons who process anonymized data for use in AI systems must establish measures (in accordance with future regulations) with respect to:

High-risk systems that use data sets for training, validation and testing must be subject to appropriate data governance and management practices that address:

Data sets must:

Transparency. Persons responsible for high-impact systems must publish on a public website a plain-language description of the AI system which explains:

Transparency. AI systems which interact with individuals and pose transparency risks, such as those that incorporate emotion recognition systems or risks of impersonation or deception, are subject to additional transparency obligations.

Regardless of whether or not the system qualifies as high-risk, individuals must be notified that they are:

Persons responsible for AI systems must keep records (in accordance with future regulations) describing:

High-risk AI systems must:

Providers of high-risk AI systems must:

The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.

The Minister of Industry has the following powers:

The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act. Each Member State will designate or establish a national supervisory authority.

The Commission has the authority to:

Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.

Contraventions of AIDA's governance and transparency requirements can result in fines:

Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above or obstructing or providing false or misleading information during an audit or investigation) may be liable to:

While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA only encapsulates technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which as currently drafted would appear to include even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before its final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]

Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.

Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act and which present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.

Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitive systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement, and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulations, except for transparency requirements.

AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.

AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain, from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.

The EU AI Act specifically applies to providers (although this may be interpreted broadly) and users of AI systems, including government institutions but excluding where AI systems are exclusively developed for military purposes. The EU AI Act also expressly applies to providers and users of AI systems insofar as the output produced by those systems is used in the EU.

AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.

The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticisms of this standard for being too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.

While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for such systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.

Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but unlike AIDA's record-keeping requirements, which would apply to all AI systems, its requirements will apply only to high-risk systems.

Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.

Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.

Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences (up to CAD $25 million or 5% of the offender's gross global revenues from the preceding financial year) are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's most severe penalties: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.

In contrast to the EU AI Act, AIDA also introduces new criminal offences for the most serious violations of the act.

Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.

While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.

It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.

Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.

For more information on the potential implications of the new Bill C-27, Digital Charter Implementation Act, 2022, please see our bulletin on this topic, "The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law."

[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA, and the United Kingdom's Medicines and Healthcare Products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.

[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.

[3] This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.

[4] As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.

Original post:
The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act - Fasken

Gradient AI and Duck Creek Technologies Deliver Integrated Workers’ Compensation Underwriting and Claims Management Solutions, Leveraging Artificial…

LAS VEGAS--(BUSINESS WIRE)--Gradient AI, a leading enterprise software provider of artificial intelligence (AI) solutions in the insurance industry, and Duck Creek Technologies, a leading technology solutions provider to the property and casualty (P&C) insurance industry, today announced integrated AI workers' compensation solutions and their first joint customer, Builders Mutual Insurance.

The joint offerings bring Gradient AI's state-of-the-art AI solutions to insurance carriers that leverage Duck Creek's platform for their operations. This combination allows underwriters and claims adjusters to uncover and analyze key drivers of workers' compensation policy and claim risks, better assess them, and minimize their claims exposure, all within the platform they already use.

Builders Mutual Insurance, a leading insurer for the construction industry in the Southeast, has adopted the joint solution to streamline claims, better identify risks and save training time. As adjusters enter their notes into the platform, the AI model learns and becomes even more accurate.

Builders Mutual is also using the solution to address one of the most pressing issues in the insurance industry today: the rising talent shortage. According to the U.S. Bureau of Labor Statistics, 50% of the current insurance workforce will retire during the next 15 years, leaving more than 400,000 open positions unfilled. Replacing these workers will be challenging, leaving a significant talent gap. According to a recent Aon and Jacobson Group study, 53% of P&C insurance companies plan to aggressively hire within the next 12 months.

To address this challenge in its own operations, Builders is using the joint Duck Creek and Gradient AI solution to leverage the knowledge of its most seasoned people and convert it to institutional knowledge. Builders Mutual's agents now have access to an AI model enhanced by the unstructured data from experienced agents' notes over many years. This knowledge base allows agents to leverage the experience of a seasoned adjuster that would otherwise have been lost.

"Our organization continually looks for ways to give adjusters the tools to help them identify risks and best serve injured workers so they can recover quickly and return to work as soon as possible," said Ken Bunn, vice president of Claims, Builders Mutual Insurance. "The integration of Duck Creek and Gradient AI helps with both of those goals, allowing adjusters to work more efficiently and effectively. It also saves us significant time when training new adjusters and helps keep our quality of service consistent as seasoned adjusters retire."

Bunn added, "Previously, new adjusters would have to sit beside an experienced adjuster for years to observe how they were handling claims. Now, thanks to this integrated solution, our newest adjusters can learn quickly and have guardrails in place as they are making decisions."

"These integrated solutions are delivering a truly reimagined experience for workers' compensation underwriting and claims management," said Rohit Bedi, chief revenue officer of Duck Creek. "Builders Mutual's adjusters now gain insights from AI solutions that are fully integrated into our Duck Creek platform, which is already a part of their normal workflow. They are getting these insights where they can have the greatest impact: at the point of decision."

"The integration of Gradient AI's technology and Duck Creek's platform empowers workers' compensation underwriting and claims teams to process and prioritize workplace injuries more effectively and efficiently," said Stan Smith, CEO of Gradient. "Builders Mutual is an innovator leveraging technology to deliver a better customer experience and achieve a better return on risk."

About Duck Creek Technologies

Duck Creek Technologies (NASDAQ: DCT) is the intelligent solutions provider defining the future of the property and casualty (P&C) and general insurance industry. We are the platform upon which modern insurance systems are built, enabling the industry to capitalize on the power of the cloud to run agile, intelligent, and evergreen operations. Authenticity, purpose, and transparency are core to Duck Creek, and we believe insurance should be there for individuals and businesses when, where, and how they need it most. Our market-leading solutions are available on a standalone basis or as a full suite, and all are available via Duck Creek OnDemand. Visit http://www.duckcreek.com to learn more. Follow Duck Creek on our social channels for the latest information: LinkedIn and Twitter.

About Gradient AI

Gradient AI is a leading provider of proven artificial intelligence (AI) solutions for the insurance industry. Its solutions improve loss ratios and profitability by predicting underwriting and claim risks with greater accuracy, as well as reducing quote turnaround times and claim expenses through intelligent automation. Unlike other solutions that use a limited claims and underwriting dataset, Gradient's software-as-a-service (SaaS) platform leverages a vast dataset comprising tens of millions of policies and claims. It also incorporates numerous other features, including economic, health, geographic, and demographic information. Customers include some of the most recognized insurance carriers, MGAs, TPAs, risk pools, PEOs, and large self-insureds across all major lines of insurance. By using Gradient AI's solutions, insurers of all types achieve a better return on risk. To learn more about Gradient, please visit: https://www.gradientai.com/.

Excerpt from:
Gradient AI and Duck Creek Technologies Deliver Integrated Workers' Compensation Underwriting and Claims Management Solutions, Leveraging Artificial...

How AI and VSaaS are Improving Safety in the Construction Sector – Spiceworks News and Insights

Video surveillance plays a vital role in the construction sector, and the rise of cloud video analytics, AI technology, and video surveillance as a service (VSaaS) offerings has the potential to take this to the next level. In this article, Logan Bell, head of product for Cloudview, shares the uses and benefits of AI and VSaaS in the construction sector, and how the two are combined to improve safety.

Health and safety are critical components of any business, but those in the construction sector need to take additional steps to look out for the safety of workers and protect their own construction firm from the potential financial, legal, and reputational consequences associated with failures or shortcomings in this area.

Video surveillance can play a vital role here, and the rise of cloud video analytics, artificial intelligence technology, and video surveillance as a service (VSaaS) offerings has the potential to take this to the next level. Let's look at precisely how AI and VSaaS are combined to improve safety within the construction sector.

First, it is essential to define precisely what is meant by artificial intelligence and video surveillance as a service. The former refers to technology that allows computers to perform complex actions, which have traditionally relied upon human intelligence, and is especially useful for automation and analysis of data.

VSaaS, on the other hand, refers to cloud-based surveillance services offered by third-party service providers. With a cloud-based video management system, users can remotely access the data captured by their surveillance system. Data is also stored in the cloud rather than on-site, allowing more footage to be stored while improving accessibility.

The two technologies can combine to provide cloud video analytics offerings. This will feed data from surveillance cameras through a layer of AI, pattern recognition, machine learning, and similar technologies to extract meaning from it and alert users to any actions, scenes, or situations deemed worthy of attention. As explained in an article from SDM, cloud deployment allows more processing power to be used for analytics purposes.
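The flow described above (camera feeds passed through an analytics layer, with only noteworthy findings surfaced to users) can be sketched in a few lines. This is a schematic illustration only; all function names and scores here are hypothetical stand-ins, not part of any real VSaaS product:

```python
# A schematic sketch (hypothetical names throughout) of how a cloud
# analytics layer sits between camera feeds and users: each frame is run
# through one or more analyzers, and only findings that cross an alert
# threshold are pushed to the user.

def analyze_frame(frame, analyzers):
    """Run every analyzer over a frame; collect (event, score) findings."""
    return [finding for analyzer in analyzers for finding in analyzer(frame)]

def alert_worthy(findings, threshold=0.8):
    """Keep only findings confident enough to notify a site manager."""
    return [(event, score) for event, score in findings if score >= threshold]

# Stand-in analyzers; a real deployment would wrap trained models here.
def motion_analyzer(frame):
    return [("motion", frame.get("motion_score", 0.0))]

def intrusion_analyzer(frame):
    return [("line_crossing", frame.get("crossing_score", 0.0))]

frame = {"motion_score": 0.95, "crossing_score": 0.4}
findings = analyze_frame(frame, [motion_analyzer, intrusion_analyzer])
print(alert_worthy(findings))  # -> [('motion', 0.95)]
```

The design point is the separation of concerns: analyzers can be added or swapped in the cloud without touching the cameras, which is where the extra processing power mentioned above comes in.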

While a remote video surveillance system with cloud analytics capabilities can be helpful across many industries, the technology is especially valuable within construction, where there is such a strong emphasis placed on health and safety. In particular, AI and VSaaS can be used to great effect in the following areas:

One of the biggest ways in which AI and VSaaS are improving safety within the construction sector is by helping to keep construction sites secure. As a post for IIoT World explains, video analytics can be used to detect line crossing, loitering, increases in capacity, objects being taken, and a variety of other unwanted behaviors.

In situations where a construction site needs to be left unattended, a remote video surveillance system running AI analytics can detect intruders, alert site managers to items being taken, and continually monitor the construction site for other activities with no need for rest, allowing for better protection and much faster responses.

Video analytics also has the potential to identify individuals through facial recognition, and this can be used to manage access to the construction site. Regardless of whether it is during working hours or outside of them, keeping the site secure from unwanted intruders can prevent vandalism and keep workers safe.

While it is important to keep construction sites safe from intruders, it is also essential that steps are taken to minimize unwanted behaviors from construction workers too. A cloud-based video management system with AI-powered analytics can be trained to detect the presence of hard hats and flag situations where workers are not wearing them.
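The hard-hat check above boils down to a simple spatial rule once a detector has produced labeled bounding boxes. The sketch below assumes detections already exist as labeled boxes; the detector itself, the label names, and the head-region heuristic are all illustrative assumptions, not any vendor's actual implementation:

```python
# Illustrative sketch only: real systems run a trained object detector
# (e.g. a YOLO-family model) on each frame; here, detections are assumed
# to already exist as labeled bounding boxes (x1, y1, x2, y2).

def boxes_overlap(a, b):
    """Return True if two (x1, y1, x2, y2) boxes intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def head_region(person_box):
    """Approximate the head as the top quarter of the person's box."""
    x1, y1, x2, y2 = person_box
    return (x1, y1, x2, y1 + (y2 - y1) // 4)

def flag_missing_hard_hats(detections):
    """Return person boxes whose head region overlaps no hard-hat box."""
    persons = [d["box"] for d in detections if d["label"] == "person"]
    hats = [d["box"] for d in detections if d["label"] == "hard_hat"]
    return [p for p in persons
            if not any(boxes_overlap(head_region(p), h) for h in hats)]

# One frame: a compliant worker, and one with no hat near their head.
frame = [
    {"label": "person",   "box": (10, 10, 50, 110)},
    {"label": "hard_hat", "box": (15, 5, 45, 30)},
    {"label": "person",   "box": (100, 10, 140, 110)},
]
print(flag_missing_hard_hats(frame))  # -> [(100, 10, 140, 110)]
```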

Beyond this, video analytics can be used to detect risky behaviors during the construction process itself. This can highlight dangerous acts or mistakes that might end up costly in the long term, but it can also help to prevent issues where the actual quality of the construction work may put people in jeopardy.

In the past, surveillance footage has been viewed after the fact. Yet, the rise of VSaaS and video analytics technology means that risky behavior can be detected in real-time, and site managers can take swift or even preventative action.

Modern VSaaS offerings still provide the conventional benefits of a surveillance system, such as the deterrent effect and the ability to provide evidence in the event of theft or vandalism, but these newer systems also provide some additional noteworthy benefits. For instance, actually identifying perpetrators becomes easier because cloud storage expands data limits, meaning footage can be recorded in 4K quality using multiple surveillance cameras.

In the event that an accident happens on a construction site, facial recognition technology can help to provide a better understanding of precisely what happened, who was involved, and what the response was.

As an article from Health & Safety Matters highlights, it is also common for construction workers to attend work while injured or experiencing ill health, and this is a growing problem that can have severe long-term consequences. Analytics can help to detect the presence of injuries or illnesses so that workers can be managed appropriately.

Construction workers face serious risks in their day-to-day lives, and firms employing these workers must take the right steps to keep them as safe as possible. Fortunately, VSaaS, AI, and cloud technology are helping to modernize on-site surveillance, allowing for real-time responses to unwanted behaviors and significant events.

The rest is here:
How AI and VSaaS are Improving Safety in the Construction Sector - Spiceworks News and Insights

How Can Artificial Intelligence Help With Suicidal Ideation? – Theravive

A new study published in the Journal of Psychiatric Research looked at the performance of machine learning models in predicting suicidal ideation, attempts, and deaths.

"My study sought to quantify the ability of existing machine learning models to predict future suicide-related events," study author Karen Kusuma told us. "While there are other research studies examining a similar question, my study is the first to use clinically relevant and statistically appropriate performance measures for the machine learning studies."

The utility of artificial intelligence has been a controversial topic in psychiatry, and medicine overall. Some studies have demonstrated better performance with machine learning methods, while others have not. Kusuma began the study expecting that machine learning models would perform well.

"Suicide is a leading cause of years of life lost across most of Europe, central Asia, southern Latin America, and Australia (Naghavi, 2019; Australian Bureau of Statistics, 2020)," Kusuma told us. "Standard clinical practice dictates that people seeking help for suicide-related issues need to first be administered a suicide risk assessment. However, research has found that suicide risk predictions tend to be inaccurate."

Only five per cent of people ordinarily classified as high risk died by suicide, while around half of those who died by suicide would normally be categorised as low risk (Large, Ryan, Carter, & Kapur, 2017). Unfortunately, there has been no improvement in suicide prediction research in the last fifty years (Franklin et al., 2017).

"Some researchers have claimed that machine learning will become an efficient and effective alternative to current suicide risk assessments (e.g. Fonseka et al., 2019)," Kusuma told us, "so I wanted to examine the potential of machine learning quantitatively, while evaluating the methodology currently used in the literature."

Researchers searched for relevant studies across four research databases and identified 56 relevant studies. From there, 54 models from 35 studies had sufficient data, and were included in the quantitative analyses.

"We found that machine learning models achieved a very good overall performance according to clinical diagnostic standards," Kusuma told us. "The models correctly predicted 66% of the people who would experience a suicide-related event (i.e. ideation, attempt, or death), and correctly predicted 87% of the people who would not experience a suicide-related event."
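The two figures quoted correspond to two standard diagnostic measures: sensitivity (the share of true positives caught) and specificity (the share of true negatives correctly cleared). A minimal sketch of how they are computed from binary outcomes, using made-up toy data rather than the study's actual cohort:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# The labels below are an invented toy example, not the study's data.

def sensitivity_specificity(y_true, y_pred):
    """Compute (sensitivity, specificity) from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort: 3 of 5 at-risk patients flagged, 7 of 8 not-at-risk cleared.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 2), round(spec, 2))  # -> 0.6 0.88
```

Note that with a rare outcome like suicide, even high specificity still produces many false positives in absolute terms, which is part of why risk prediction is so hard in practice.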

However, there was a high prevalence of risk of bias in the research, with many studies processing or analysing the data inappropriately. This isn't a finding specific to machine learning research, but a systemic issue caused largely by a publish-or-perish culture in academia.

"I did expect machine learning models to do well, so I think this review establishes a good benchmark for future research," Kusuma told us. "I do believe that this review shows the potential of machine learning to transform the future of suicide risk prediction. Automated suicide risk screening would be quicker and more consistent than current methods."

This could potentially identify many people at risk of suicide without them having to reach out proactively. However, researchers need to be careful to minimise data leakage, which would skew performance measures. Furthermore, many iterations of development and validation need to take place to ensure that the machine learning models can predict suicide risk in previously unseen populations.
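One common leakage guard in clinical prediction work is splitting data by patient rather than by record, so the same person never appears in both the training and test sets. The sketch below is a generic illustration of that idea (the field names and split fraction are invented), not the study's methodology:

```python
# A minimal sketch of one leakage guard: split by patient ID rather than
# by row, so records from the same person never land in both the training
# and test sets (row-level splits can leak patient-specific information).

import random

def group_split(records, group_key, test_frac=0.3, seed=0):
    """Split records so whole groups land entirely in train or test."""
    groups = sorted({r[group_key] for r in records})
    rng = random.Random(seed)
    rng.shuffle(groups)
    n_test = max(1, int(len(groups) * test_frac))
    test_groups = set(groups[:n_test])
    train = [r for r in records if r[group_key] not in test_groups]
    test = [r for r in records if r[group_key] in test_groups]
    return train, test

# Two visits per patient; no patient should straddle the split.
records = [{"patient": pid, "visit": v} for pid in "ABCDE" for v in (1, 2)]
train, test = group_split(records, "patient")
overlap = {r["patient"] for r in train} & {r["patient"] for r in test}
print(len(overlap))  # -> 0
```

Validating on genuinely unseen groups is also the mechanism behind the "previously unseen populations" point above: a model that only ever saw a population's records during training has not really been tested on it.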

"Prior to deployment, researchers also need to ascertain if artificial intelligence would work in an equitable manner across people from different backgrounds," Kusuma told us. "For example, a study has found their machine learning models performed better in predicting deaths by suicide in White patients, as opposed to Black and American Indian/Alaskan Native patients (Coley et al., 2022)."

"That isn't to say that artificial intelligence is inherently discriminatory," Kusuma explained, "but there is less data available for minorities, which often means lower performance in those populations. It's possible that models need to be developed and validated separately for people of different demographic characteristics."

"Machine learning is an exciting innovation in suicide research," Kusuma told us. "An improvement in suicide prediction abilities would mean that resources could be allocated to those who need them the most."

Categories: Depression , Stress , Suicide | Tags: suicide, depression, machine

Patricia Tomasi is a mom, maternal mental health advocate, journalist, and speaker. She writes regularly for the Huffington Post Canada, focusing primarily on maternal mental health after suffering from severe postpartum anxiety twice. You can find her Huffington Post biography here. Patricia is also a Patient Expert Advisor for the North American-based Maternal Mental Health Research Collective and is the founder of the online peer support group Facebook Postpartum Depression & Anxiety Support Group, with over 1,500 members worldwide. Blog: www.patriciatomasiblog.wordpress.com Email: tomasi.patricia@gmail.com

Read more from the original source:
How Can Artificial Intelligence Help With Suicidal Ideation? - Theravive

Meta Has Developed AI for Real-Time Translation of Hokkien – Gizmodo

Meta's Hokkien translator is the first speech-to-speech translator for the language, but the AI can only translate one sentence at a time. Screenshot: Meta

Meta is chugging along on its Universal Speech Translator, which aims to train an artificial intelligence to translate hundreds of languages in real time. Today, the tech giant claims to have created the first artificial intelligence to translate Hokkien, a language that is primarily spoken rather than written.

Hokkien is a language spoken by approximately 49 million people in countries like China, Taiwan, Singapore, Malaysia, and the Philippines. Typically, to train an AI to understand human speech (and, in Meta's case, to translate it), researchers feed the computer a large dataset of written transcripts. But Meta says that Hokkien is one of nearly 3,500 languages that are primarily spoken, meaning Hokkien does not have a large enough dataset to train the artificial intelligence, since the language does not have a unified writing system.

As such, Meta focused on a speech-to-speech approach, as explained in the company's press release. Without going into too much detail, Meta explained that the input speech was translated into a sequence of acoustic sounds, which was then used to create waveforms of the language. Those waveforms were then coupled with Mandarin, which Meta identifies as a related language.
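The overall shape of that pipeline (waveform in, discrete "units" in the middle, waveform out) can be illustrated with a deliberately crude toy. To be clear, Meta's actual system uses learned acoustic units and a neural vocoder; everything below, from the quantization scheme to the unit mapping, is an invented simplification purely to show the three stages:

```python
# A toy illustration of the unit-based idea only: the waveform is mapped
# to a sequence of symbols, the symbols are mapped to target-language
# symbols, and a waveform is synthesized from the result. Real systems
# learn these steps with neural networks; nothing here is Meta's code.

def waveform_to_units(samples, levels=4):
    """Quantize samples in [-1, 1] into integer 'acoustic units'."""
    return [min(levels - 1, int((s + 1) / 2 * levels)) for s in samples]

def translate_units(units, mapping):
    """Map source-language units to target-language units."""
    return [mapping[u] for u in units]

def units_to_waveform(units, levels=4):
    """Synthesize a (very rough) waveform back from unit indices."""
    return [round(u / (levels - 1) * 2 - 1, 2) for u in units]

source = [-0.9, -0.2, 0.3, 0.8]
units = waveform_to_units(source)                    # [0, 1, 2, 3]
translated = translate_units(units, {0: 3, 1: 2, 2: 1, 3: 0})
print(units_to_waveform(translated))                 # -> [1.0, 0.33, -0.33, -1.0]
```

The key property the toy shares with the real approach is that the middle representation is discrete, which is what lets a spoken-only language be translated without any written transcripts.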

Meta says that the Hokkien translator is still a work in progress, as the artificial intelligence can only translate one sentence at a time, but it is being released as open source so other researchers can build upon its work. The company is also releasing SpeechMatrix, which it describes as "a large collection of speech-to-speech translations developed through our innovative natural language processing toolkit."

Meta's efforts at building tech to understand human language have a bit of a wonky past. The company released BlenderBot 3 earlier this year to show its attempt at creating an artificial intelligence chatbot. A previous investigation by Gizmodo found that the bot's favorite movie was Mean Girls and that it really wanted you to know that racism is bad.

Read more:
Meta Has Developed AI for Real-Time Translation of Hokkien - Gizmodo
