Category Archives: Artificial Intelligence

Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo – Vatican News – English

Congolese national and Claretian priest Fr Joel Nkongolo recently spoke to Fr Paul Samasumo of Vatican News about AI's implications and possible impact on the African Church. Fr Nkongolo is currently based in Nigeria.

Fr Paul Samasumo, Vatican City.

How would you define or describe Artificial Intelligence (AI)?

Artificial Intelligence encompasses a wide range of technologies and techniques that enable machines to mimic human cognitive functions. Machine learning, a subset of AI, allows systems to learn from data without being explicitly programmed. For example, streaming platforms like Netflix use recommendation algorithms to analyze users' viewing history and suggest relevant content. Computer vision technology, another aspect of AI, powers facial recognition systems used in security and authentication applications.
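To make the recommendation example concrete, here is a minimal, purely illustrative sketch of the idea in Python: titles are represented by a few genre features, a taste profile is built from a user's viewing history, and unseen titles are ranked by cosine similarity. The catalogue, features, and scores are invented for illustration and are not how any streaming platform's actual system works.

```python
# Minimal illustration of the recommendation idea described above:
# score unseen titles by their similarity to what a user has already watched.
# The catalogue, features, and user history are made up for illustration.
import numpy as np

# Each title gets simple genre features (drama, comedy, documentary).
catalogue = {
    "Title A": np.array([1.0, 0.0, 0.2]),
    "Title B": np.array([0.1, 1.0, 0.0]),
    "Title C": np.array([0.9, 0.1, 0.4]),
}

watched = ["Title A"]  # the user's viewing history

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Build a taste profile from watched titles, then rank the rest by similarity.
profile = np.mean([catalogue[t] for t in watched], axis=0)
recommendations = sorted(
    ((t, cosine(profile, vec)) for t, vec in catalogue.items() if t not in watched),
    key=lambda pair: pair[1],
    reverse=True,
)
print(recommendations)  # Title C scores higher than Title B for this profile
```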

Should we be worried and afraid of Artificial Intelligence?

While AI offers numerous benefits, such as improved efficiency, productivity, and innovation, it also raises legitimate concerns. One concern is job displacement, as automation could replace certain tasks traditionally performed by humans. For instance, a study by the McKinsey Global Institute suggests that up to 800 million jobs could be automated by 2030. Additionally, there are ethical concerns surrounding AI, such as algorithmic bias, which can perpetuate discrimination and inequality. For example, facial recognition systems have been found to exhibit higher error rates for people with darker skin tones, leading to unfair treatment in areas like law enforcement.

Africa's journey towards embracing AI seems relatively slow. Is this a good or bad thing?

Africa's adoption of AI has been relatively slow compared with other regions, owing to factors such as limited infrastructure, digital literacy, and funding. However, this cautious approach can also be viewed as an opportunity to address underlying challenges and prioritize ethical considerations. For example, Ghana recently established two AI Centres to develop AI capabilities while ensuring ethical AI deployment. By taking a deliberate approach, African countries can tailor AI solutions to address local needs and minimize potential negative impacts.

How do you see Artificial Intelligence affecting or impacting the Church in Africa and elsewhere? Should the Church be worried about Artificial Intelligence?

AI can enhance various aspects of Church operations, such as automating administrative tasks, analyzing congregation demographics for targeted outreach, and providing personalized spiritual guidance through chatbots. However, there are ethical considerations, such as ensuring data privacy and maintaining human connection amid technological advancements. For example, sections of the Church of England utilize AI-powered chatbots to engage with congregants online, offering pastoral support and prayer. While AI can augment the Church's outreach efforts, it's essential to maintain human oversight and uphold ethical standards in its use.

How can the Church influence ethical behaviour and good social media conduct?

The Church can leverage its moral authority to promote ethical behaviour and responsible social media use. For instance, Pope Francis has spoken out against the spread of fake news and social media polarisation, emphasizing the importance of truth and dialogue. Additionally, initiatives like Digital Catholicism involve leveraging online media technologies as tools for evangelization while simultaneously spreading the message of faith in cyberspace itself. So, by modelling ethical behaviour and offering guidance on digital citizenship, the Church can foster a culture of respect, empathy, and truthfulness in online interactions.

How can parents, guardians, teachers, parish priests, or pastors help young people avoid becoming enslaved by these technologies?

Adults play a crucial role in guiding young people's use of technology and promoting healthy digital habits. For example, parents and teachers can educate children about the risks of excessive screen time and the importance of balance in their online and offline activities. They can also set limits on device usage, encourage outdoor play, and foster face-to-face social interactions. Moreover, religious leaders can incorporate teachings on mindfulness, self-discipline, and responsible stewardship of technology into their spiritual guidance, helping young people cultivate a healthy relationship with digital media.

Can individuals and society do anything to protect themselves from potential AI harm or abuse by non-democratic governments?

Individuals and civil society organizations can take proactive measures to safeguard against AI abuse by authoritarian regimes. For example, they can advocate for legislation and regulations that protect digital rights, privacy, and freedom of expression. Tools like virtual private networks (VPNs) and encrypted messaging apps can help individuals circumvent government surveillance and censorship. Moreover, international collaboration and solidarity among democratic nations can amplify efforts to hold oppressive regimes accountable for AI misuse and human rights violations.

What would your advice be to those working in education or schools regarding teaching about AI?

Educators have a vital role in preparing students for the AI-driven future by fostering critical thinking, creativity, and ethical decision-making skills. For example, integrating AI literacy into the curriculum can help students understand how AI works, its societal impacts, and ethical considerations. Projects like Google's AI for Social Good initiative provide educational resources and tools for teaching AI concepts in schools. By empowering students to become responsible AI users and innovators, educators can effectively equip them to navigate the opportunities and challenges of the digital age.

Fr Nkongolo, thank you for your time and help in navigating these issues.

Fr. Paul Samasumo, these examples, comparisons, and statistics illustrate the multifaceted nature of AI and its implications for society, including the Church and education. I hope they provide a comprehensive perspective on these complex issues.

Read the original:
Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo - Vatican News - English

A.I. Has a Measurement Problem – The New York Times

There's a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don't really know how smart they are.

That's because, unlike companies that make cars or drugs or baby formula, A.I. companies aren't required to submit their products for testing before releasing them to the public. There's no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

Instead, we're left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like "improved capabilities" to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

This might sound like a petty gripe. But I've become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

I can't count the number of times I've been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?

See the rest here:
A.I. Has a Measurement Problem - The New York Times

BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era … – PR Newswire

Trailblazing integrated AI platform ushers in an era of firm, affordable, clean energy.

WEST PALM BEACH, Fla., April 17, 2024 /PRNewswire/ -- BrightNight, the next-generation global renewable power producer built to deliver clean and dispatchable solutions, today introduced PowerAlpha, the proprietary software platform that uses cutting-edge Artificial Intelligence, data analytics, and cloud computing to design, optimize, and operate renewable power plants with industry-leading economics.

PowerAlpha was unveiled today at the "AI: Powering the New Energy Era" summit in Washington, D.C. Sponsored by BrightNight and hosted by the Foundation for American Science and Technology, it is the first independent industry summit in North America exclusively dedicated to exploring the influence of Artificial Intelligence on the energy sector. The summit brought together senior public officials, policymakers, energy industry leaders, experts, academia, and investors, including co-sponsors NVIDIA, IBM, C3.ai and Qcells, and representatives from the Department of Energy, U.S. Congress, Intel, Alphabet (Google), EEI, Bank of America, KPMG, EPRI, and other organizations.

BrightNight's PowerAlpha platform accelerates the global clean energy transition and decarbonization by ensuring the generation of reliable power at the lowest cost attainable in various geographies and power grids globally. Its benefits are realized across the design, optimize, and operate stages of a project's lifecycle.

PowerAlpha's unique dispatch hybrid controls interface can also be integrated with utilities' Energy Management Systems for even more operational efficiency and integrated asset management solutions. Also, with the increasing demand for more power to support the growing use of AI applications and corresponding data centers, PowerAlpha is leveraging AI capabilities to drive greater efficiencies, better designs, and higher capacity renewable projects.

Martin Hermann, CEO of BrightNight, said: "Through relentless innovation, the dream of a sustainable clean energy transition, where renewable power is affordable and reliable, is now within reach. The integration of Artificial Intelligence with renewable energy offers solutions to numerous challenges, from demand forecasting and smart grid management to labor shortages and the design of efficient power projects. I am proud that BrightNight is at the forefront of this technological revolution, bringing the clean energy transition closer to reality than ever before."

Kiran Kumaraswamy, CTO of BrightNight, said: "PowerAlpha provides a fully integrated platform that is highly differentiated in the marketplace in its ability to design, optimize, and operate hybrid renewable power projects with cutting edge capabilities enabled by AI and utilizing patent-pending algorithms. Our team is redefining industry standards, optimizing renewable projects globally to integrate new load like data centers and increase utilization of existing infrastructure."

BrightNight has delivered a number of PowerAlpha projects and use cases in the U.S. and globally, where it is helping customers optimize existing transmission infrastructure, integrate long- and short-duration storage solutions, increase capacity, improve dispatchability of renewable assets, and repower existing projects.

BrightNight is also using PowerAlpha to develop projects with its partner in India and the Philippines, ACEN, one of Asia's leading renewable companies. ACEN Group CIO Patrice Clausse said, "PowerAlpha has been an important enabler for our investment decision-making. We feel confident leveraging its capabilities to simulate the performance of numerous hybrid renewable power plant configurations incorporating solar, wind, and energy storage and identify the most efficient configuration. This has enabled us to optimize our plant configuration and was a critical component in our recent tender wins."

For more information about PowerAlpha and BrightNight's 37 GW renewable power portfolio, please see http://www.brightnightpower.com/poweralpha or contact [emailprotected].

About BrightNight

BrightNight is the first global renewable integrated power company designed to provide utility and commercial and industrial customers with clean, dispatchable renewable power solutions. BrightNight works with customers across the U.S. and Asia Pacific to design, develop, and operate safe, reliable, large-scale renewable power projects optimized through its proprietary software platform PowerAlpha to better manage the intermittent nature of renewable energy. Its deep customer engagement process, team of proven power experts, and industry-leading solutions enable customers to overcome challenging energy sustainability standards, rapidly changing grid dynamics, and the transition away from fossil fuel generation. To learn more, please visit: www.brightnightpower.com

SOURCE BrightNight

See original here:
BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era ... - PR Newswire

Artificial Intelligence Feedback on Physician Notes Improves Patient Care – NYU Langone Health

Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients' future needs, a new study finds.

Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors' clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. The informatics team over time trained the AI models to track in dashboards how well doctors' notes achieved the 5 Cs: completeness, conciseness, contingency planning, correctness, and clinical assessment.

Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.

This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients future needs saw improvements of up to 34 percent.

Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.
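The case study's actual pipeline is not reproduced here; as a purely illustrative sketch of the general pattern described above, the snippet below prompts a chat model to return a short written critique of a note against the 5 Cs. The model name, prompt wording, and sample note are assumptions, not NYU Langone's system.

```python
# Sketch of the general pattern described above: ask a chat model to produce a
# written critique of a clinical note against the "5 Cs". The prompt, model name,
# and note text are illustrative assumptions, not NYU Langone's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = ("completeness, conciseness, contingency planning, "
          "correctness, and clinical assessment")

def critique_note(note_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[
            {"role": "system",
             "content": f"You review physician notes against the 5 Cs: {RUBRIC}. "
                        "Return a short narrative of specific issues."},
            {"role": "user", "content": note_text},
        ],
    )
    return response.choices[0].message.content

print(critique_note("Pt seen today. Feels better. Continue meds."))  # invented sample note
```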

The NYU Langone case study also showed that GPT-4 or other large language models could provide a method for assessing the 5 Cs across medical specialties without specialized training in each. Researchers say that the generalizability of GPT-4 for evaluating note quality supports its potential for application at many health systems.

"Our study provides evidence that AI can improve the quality of medical notes, a critical part of caring for patients," said lead study author Jonah Feldman, MD, medical director of clinical transformation and informatics within NYU Langone's Medical Center Information Technology (MCIT) Department of Health Informatics. "This is the first large-scale study to show how a healthcare organization can use a combination of AI models to give note feedback that significantly improves care quality."

Poor note quality in healthcare has been a growing concern since the enactment of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The act gave incentives to healthcare systems to switch from paper to electronic health records (EHR), enabling improved patient safety and coordination between healthcare providers.

A side effect of EHR adoption, however, has been that physician clinical notes are now four times longer on average in the United States than in other countries. Such "note bloat" has been shown to make it harder for collaborating clinicians to understand diagnoses described by their colleagues, say the study authors. Issues with note quality have been shown in the field to lead to missed diagnoses and delayed treatments, and there is no universally accepted methodology for measuring it. Further, evaluation of note quality by human peers is time-consuming and hard to scale up to the organizational level, the researchers say.

The effort captured in the new NYU Langone case study outlines a structured approach for organizational development of AI-based note quality measurement, a related system for process improvement, and a demonstration of AI-fostered clinician behavioral change in combination with other safety programs. The study also details how AI-generated note quality measurement helped to foster adoption of standard workflows, a significant driver for quality improvement.

Each of the four medical specialties that participated in the study achieved the institutional goal, which was that more than 75 percent of inpatient history and physical exams and consult notes were being completed using standardized workflows that drove compliance with quality metrics. This represented an improvement from the previous share of less than 5 percent.

"Our study represents the founding stage of what will undoubtedly be a national trend to leverage cutting-edge tools to ensure clinical documentation of the highest quality, measurably and reproducibly," said study author Paul A. Testa, MD, JD, MPH, chief medical information officer for NYU Langone. "The clinical note can be a foundational tool, if accurate, accessible, and effective, to truly influence clinical outcomes by meaningfully engaging patients while ensuring documentation integrity."

Along with Dr. Feldman and Dr. Testa, the current study's authors from NYU Langone were Katherine Hochman, MD, MBA, Benedict Vincent Guzman, Adam J. Goodman, MD, and Joseph M. Weisstuch, MD.

Greg Williams | Phone: 212-404-3500 | Gregory.Williams@NYULangone.org

Read more:
Artificial Intelligence Feedback on Physician Notes Improves Patient Care - NYU Langone Health

Small Businesses Face Uphill Battle in AI Race, Says AI Index Head – PYMNTS.com

Small and medium-sized businesses will struggle to keep pace with tech giants like OpenAI in developing their own artificial intelligence (AI) models, according to a new report from Stanford University.

In an interview, Nestor Maslej, the editor-in-chief of Stanford's newly released 2024 AI Index Report, highlighted the study's findings on the growing AI divide between large and small companies. While tech behemoths pour billions into AI R&D, smaller firms lack the resources and talent to compete head-on.

"A small or even medium-sized business will not be able to train a frontier foundation model that can compete with the likes of GPT-4, Gemini or Claude," Maslej said. "However, there are some fairly competent open-source models, such as Llama 2 and Mistral, that are freely accessible. A lot can be done with these kinds of open-source models, and they are likely to continue improving over time. In a few years, there may be an open, relatively low-parameter model that works as well as GPT-4 does today."
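For context on what adopting such an open-source model can look like in practice, here is a minimal, hedged sketch using the Hugging Face transformers library. The model identifier is an assumed example; downloading the weights requires suitable hardware and acceptance of the model's licence terms, and this is not a recommendation of any particular model.

```python
# Sketch of adopting a freely available open-source model rather than training one.
# Uses the Hugging Face transformers library; the model identifier below is an
# assumed example, and running it locally requires suitable hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed hub identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarise this customer complaint in one sentence: the parcel arrived late and damaged."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```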

A study from PYMNTS last year highlighted that generative AI technologies such as OpenAI's ChatGPT could significantly enhance productivity, yet they also risk disrupting employment patterns.

A major takeaway from the report is the possible disconnect between AI benchmarks and actual business requirements in the real world.

"To me, it is less about improving the models on these tasks and more about asking whether the benchmarks we have are even well-suited to evaluate the business utility of these systems," Maslej stated. "The current benchmarks may not be well-aligned with the real-world needs of businesses."

The report indicated that while private investment in AI generally declined last year, funding for generative AI experienced a dramatic surge, growing nearly eightfold from 2022 to $25.2 billion. Leading players in the generative AI industry, including OpenAI, Anthropic, Hugging Face and Inflection, reported substantial increases in their fundraising efforts.

Maslej highlighted that while the costs of adopting AI are considerable, they are overshadowed by the expenses associated with training the systems.

"Adoption is less of a cost problem because the real cost lies in training the systems. Most companies do not need to worry about training their own models and can instead adopt existing models, which are available either freely through open source or through relatively cost-accessible APIs," he explained.

The report also calls for standardized benchmarks in responsible AI development. Maslej imagines a future where common benchmarks allow businesses to easily compare and choose AI models that match their ethical standards. "Standardization would make it simpler for businesses to more confidently ascertain how various AI models compare to one another," he stated.

Balancing profit with ethical concerns emerges as a key challenge. The report shows that while many businesses are concerned about issues like privacy and data governance, fewer are taking concrete steps to mitigate these risks. "The more pressing question is whether businesses are actually taking steps to address some of these concerns," Maslej noted.

Measuring AI's impact on worker productivity across different industries remains complex. "It is possible to measure productivity within various industries; however, comparing productivity gains across industries is more challenging," Maslej said.

Looking ahead, the report highlights the need for businesses to navigate an increasingly complex regulatory landscape. On Tuesday, Utah Sen. Mitt Romney and several Senate colleagues unveiled a plan to guard against the potential dangers of AI. These include threats in biological, chemical, cyber and nuclear areas by increasing federal regulation of advanced technological developments.

Maslej emphasized the importance of staying vigilant: "Navigating this issue will be challenging. The regulatory standards for AI are still unclear."

As public awareness of AI grows, Maslej believes that businesses must address concerns about job displacement and data privacy. "As people become more aware of AI, how can businesses proactively address nervousness, especially regarding job displacement and data privacy?" he posed as a crucial question for the industry to consider.

The 2024 AI Index Report is meant to guide businesses and society in navigating the rapid advancements in artificial intelligence. Maslej concluded, "The AI landscape is evolving at an unprecedented pace, presenting both immense opportunities and daunting challenges."

Go here to see the original:
Small Businesses Face Uphill Battle in AI Race, Says AI Index Head - PYMNTS.com

NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses – PYMNTS.com

The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems, releasing new guidance to help businesses protect their AI from hackers.

As AI increasingly integrates into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA's Cybersecurity Information Sheet provides insights into AI's unique security challenges and offers steps companies can take to harden their defenses.

"AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis," NSA Cybersecurity Director Dave Luber said Monday (April 15) in a news release.

The report suggested that organizations using AI systems should put strong security measures in place to protect sensitive data and prevent misuse. Key measures include conducting ongoing compromise assessments, hardening the IT deployment environment, enforcing strict access controls, using robust logging and monitoring, and limiting access to model weights.
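The information sheet stops at recommendations and does not prescribe code; as one small, hedged illustration of two of those measures, the sketch below tightens file permissions on a model-weight file and writes an audit log entry with a checksum on every read. The path and permissions are invented, POSIX-only examples, not NSA guidance.

```python
# Small illustration of two measures named above: limiting access to model weights
# and keeping an audit log of reads. Paths and permissions are assumed examples
# (POSIX-only), not prescriptions from the NSA information sheet.
import hashlib
import logging
import os
import stat

WEIGHTS_PATH = "/srv/models/prod/model.safetensors"  # assumed location

logging.basicConfig(filename="model_access.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def harden_weights(path: str) -> None:
    # Owner read-only; no group or other access.
    os.chmod(path, stat.S_IRUSR)
    logging.info("permissions tightened on %s", path)

def load_weights_audited(path: str) -> bytes:
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    # Record who read the weights and a checksum, so tampering is detectable later.
    logging.info("weights read by uid=%s sha256=%s", os.getuid(), digest)
    return data

harden_weights(WEIGHTS_PATH)
_ = load_weights_audited(WEIGHTS_PATH)
```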

"AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process," Jon Clay, vice president of threat intelligence at the cybersecurity company Trend Micro, told PYMNTS. "AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries."

As reported by PYMNTS, AI is revolutionizing how security teams approach cyber threats by accelerating and streamlining their processes. Through its ability to analyze large datasets and identify complex patterns, AI automates the early stages of incident analysis, enabling security experts to start with a clear understanding of the situation and respond more quickly.

Cybercrime continues to rise with the increasing embrace of a connected global economy. According to an FBI report, the U.S. alone saw cyberattack losses exceed $10.3 billion in 2022.

AI systems are particularly prone to attacks due to their dependency on data for training models, according to Clay.

"Since AI and machine learning depend on providing and training data to build their models, compromising that data is an obvious way for bad actors to poison AI/ML systems," Clay said.

He emphasized the risks of these hacks, explaining that they can lead to stolen confidential data, harmful commands being inserted and biased results. These issues could upset users and even lead to legal problems.

Clay also pointed out the challenges in detecting vulnerabilities in AI systems.

"It can be difficult to identify how they process inputs and make decisions, making vulnerabilities harder to detect," he said.

He noted that hackers are looking for ways to get around AI security to change its results, and this method is being talked about more in secret online forums.

When asked about measures businesses can implement to enhance AI security, Clay emphasized the necessity of a proactive approach.

"It's unrealistic to ban AI outright, but organizations need to be able to manage and regulate it," he said.

Clay recommended adopting zero-trust security models and using AI to enhance safety measures. This method means AI can help analyze emotions and tones in communications and check web pages to stop fraud. He also stressed the importance of strict access rules and multi-factor authentication to protect AI systems from unauthorized access.

"As businesses embrace AI for enhanced efficiency and innovation, they also expose themselves to new vulnerabilities," Malcolm Harkins, chief security and trust officer at the cybersecurity firm HiddenLayer, told PYMNTS.

AI was the most vulnerable technology deployed in production systems because it was vulnerable at multiple levels, Harkins added.

Harkins advised businesses to take proactive measures, such as implementing purpose-built security solutions, regularly assessing AI models' robustness, continuous monitoring and developing comprehensive incident response plans.

"If real-time monitoring and protection were not in place, AI systems would surely be compromised, and the compromise would likely go unnoticed for extended periods, creating the potential for more extensive damage," Harkins said.

See more here:
NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses - PYMNTS.com

Artificial intelligence studio for college students opens in Warner Robins at the VECTR Center – 13WMAZ.com

The AI-Enhanced Robotic Manufacturing training program offered at Central Georgia Technical College is preparing student veterans and active duty service members.

Author: 13wmaz.com

Published: 8:43 AM EDT April 17, 2024

Updated: 8:43 AM EDT April 17, 2024

See original here:
Artificial intelligence studio for college students opens in Warner Robins at the VECTR Center - 13WMAZ.com

Generative Artificial Intelligence Revolution Heats Up in Asia/Pacific, with IDC expecting a 95.4% CAGR in 2027 … – EMSNow

SINGAPORE – IDC's latest Worldwide AI and Generative AI Spending Guide reveals that the Asia/Pacific* region is witnessing an unprecedented surge in Generative AI (GenAI) adoption, including software, services, and hardware for AI-centric** systems, with spending projected to soar to $26 billion by 2027, a compound annual growth rate (CAGR) of 95.4 percent for the period 2022-2027. This surge underscores the region's pivotal role in driving the next wave of AI innovation and technological advancement.
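For readers who want to check the arithmetic, a CAGR relates two endpoints by end = start × (1 + rate)^years. The short snippet below back-solves the 2022 base implied by the article's $26 billion and 95.4 percent figures (roughly $0.9 billion); that base is an implied value shown for illustration, not a figure quoted by IDC.

```python
# Check the arithmetic behind the headline figures: CAGR relates the endpoints by
# end = start * (1 + rate) ** years. Back-solving the implied 2022 base from the
# article's numbers is illustrative; IDC's actual 2022 figure is not quoted here.
end_2027 = 26e9   # $26 billion projected spend in 2027
cagr = 0.954      # 95.4% compound annual growth rate, 2022-2027
years = 5         # 2022 -> 2027

implied_2022_base = end_2027 / (1 + cagr) ** years
print(f"Implied 2022 base: ${implied_2022_base / 1e9:.2f} billion")  # ~ $0.91 billion

# The same relation, rearranged, recovers the growth rate from the two endpoints:
rate = (end_2027 / implied_2022_base) ** (1 / years) - 1
print(f"Recovered CAGR: {rate:.1%}")  # 95.4%
```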

GenAI is a branch of computer science involving unsupervised and semi-supervised algorithms that enable computers to create new content using previously created content, such as text, audio, video, images, and code, in response to short prompts. IDC believes GenAI will be a trigger technology to transition to a new chapter in the move toward automation for both internal and external parties across generic productivity, business functionspecific enhancements, or industry-specific tasks.

"We anticipate that Asia/Pacific will experience a surge in the adoption of Generative AI, with growth rates expected to match those of North America, largely due to enterprises investing heavily in developing data and infrastructure platforms tailored for GenAI applications. We forecast that this investment in GenAI will reach its zenith within the next two years, followed by a period of stabilization. China is projected to maintain its position as the dominant market for GenAI, while Japan and India are set to become the most rapidly expanding markets in the forthcoming years," says Deepika Giri, Head of Research, Big Data & AI, IDC APJ.

Unlocking the vast potential of GenAI, the Asia/Pacific region is poised for a transformative journey across various sectors. With robust digital infrastructure and growing investments in technology, Asia/Pacific emerges as a pivotal player in this dynamic landscape. Strategic investment in hardware, software, and associated services for GenAI is crucial to sustaining and propelling this progress. From software development to customer service, GenAI is revolutionizing industries, ushering in a new era of innovation in Asia/Pacific.

IT spending in GenAI technology progresses through three distinct stages. Initially, during the GenAI Foundation Build phase, attention is directed towards enhancing core infrastructure, investing in IaaS, and bolstering security software. Subsequently, in the Broad Adoption phase, the focus shifts towards the widespread adoption of open-source AI platforms offered as-a-service, playing a fundamental role in digital business control planes. Finally, the Unified AI Services phase sees a surge in spending as organizations rapidly integrate GenAI to gain a competitive edge, diverging from the typical slower growth observed in new technology markets.

"GenAI isn't a fleeting trend. Its capacity to generate entirely new content across various mediums, such as images, videos, code, and marketing materials, promises substantial efficiency gains and paves the way for innovative creative opportunities, granting a competitive advantage," says Vinayaka Venkatesh, Senior Market Analyst, IT Spending Guides, Customer Insights & Analysis, IDC Asia/Pacific. "A significant portion of organizations have either already adopted Generative AI or are in the initial stages of experimenting with models," Vinayaka Venkatesh concludes.

The financial services sector is experiencing rapid growth in Generative AI adoption in Asia, with spending projected to reach $4.3 billion by 2027 at a remarkable CAGR of 96.7%. Within this industry, GenAI is being utilized internally to enhance operational efficiency, automate repetitive tasks, and optimize back-office processes such as fraud detection and the creation of intricate documents. Generative AI-powered solutions provide tailored financial services like personalized planning tools and reports, which dynamically adjust to meet customers' evolving needs. Furthermore, the integration of GenAI yields substantial benefits to profitability by cutting costs, driving revenue generation, and enhancing productivity across various functions such as DevOps, marketing, and legal compliance.

The software and information services industry stands as the second-largest adopter of GenAI, embracing its versatility across sectors such as marketing, data analytics, and software development. Within marketing, GenAI can streamline content creation for websites, blogs, and social media platforms, optimizing marketing strategies and enhancing audience engagement. In data-driven fields like machine learning and analytics, GenAI proves invaluable for generating synthetic data, enriching existing datasets, and improving model performance and resilience. Additionally, in software development, these tools aid developers by automating coding tasks, generating prototypes, and accelerating the software development lifecycle, leading to heightened productivity and efficiency.

As the third-largest adopter of GenAI, governments across the Asia-Pacific region have a substantial opportunity to transform their operations and service delivery. This technology holds the potential to enhance efficiency, transparency, and citizen engagement. Governments are well-placed to spearhead efforts in advancing education and training in GenAI, thereby catalyzing the creation of new job prospects, and stimulating the growth of technology innovation hubs. These hubs will function as focal points for state-of-the-art training, bolstering skill sets, and nurturing the emergence of future AI professionals, including scientists, engineers, technicians, and specialists.

In the rapidly evolving Asia/Pacific retail market, characterized by diverse consumer preferences and advancing digital technologies, retailers are increasingly turning to GenAI to gain a competitive advantage. GenAI enables enhanced personalization, tailoring experiences to individual preferences, while also boosting efficiency by automating tasks like product design and content creation, thereby accelerating time-to-market. Furthermore, retailers leverage GenAI to create dynamic visual content and interactive experiences, fostering heightened customer engagement and loyalty.

IDC's Worldwide AI and Generative AI Spending Guide measures spending for technologies that analyze, organize, access, and provide advisory services based on a range of unstructured information. The Spending Guide quantifies the AI opportunity by providing data for 38 use cases across 27 industries in nine regions and 32 countries. Data is also available for the related hardware, software, and services categories. The AI and Generative AI Spending Guide is produced to provide the latest market developments through an accurate and quality forecast. During the period between updates, IDC's AI and Generative AI analyst teams conduct primary and secondary research to support this data product. Research in the period from August 2023 to February 2024 resulted in multiple additions and enhancements to the data. In this release of the AI and GenAI Spending Guide, we distilled leading forecasts such as IDC's Worldwide Black Book and IDC's Worldwide ICT Spending Guide, as well as AI and generative AI research led by IDC's AI Council of senior researchers globally.

20% of Asia/Pacific organizations are planning to build their own generative AI models. Explore IDC's latest eBook to stay equipped for the GenAI revolution. Download now: bit.ly/genai-build-buy

**Taxonomy Note: The IDC Worldwide AI and Generative AI Spending Guide uses a precise definition of what constitutes an AI Application, in which the application must have an AI component that is crucial to the application; without this AI component, the application will not function. This distinction enables the Spending Guide to focus on those software applications that are strongly AI-centric. In comparison, the IDC Worldwide Semiannual Artificial Intelligence Tracker uses a broad definition of AI Applications that includes applications where the AI component is non-centric, or not fundamental, to the application. This enables the inclusion of vendors that have incorporated AI capabilities into their software, but the applications are not exclusively used for AI functions. In other words, the application will function without the inclusion of the AI component.

Read more from the original source:
Generative Artificial Intelligence Revolution Heats Up in Asia/Pacific, with IDC expecting a 95.4% CAGR in 2027 ... - EMSNow

Artificial Intelligence Amplifies State Tax Audits on High Earners – WebProNews

As fears about artificial intelligence (AI) veer from job displacement to broader societal control, state tax departments are harnessing this potent technology to significantly boost audits of high earners. Robert Frank of CNBC highlights how high-tax, Democrat-controlled states like New York and California are increasingly deploying AI to scrutinize the tax declarations of the wealthy, intensifying efforts to reclaim unreported income.

In the past year, high-tax states have issued a surge in audit letters, with figures marking a 56% increase from the previous year. The targets? Affluent individuals who have relocated across state lines during the pandemic and remote workers whose physical locations do not align with their company's base.

AI's role in these audits is groundbreaking and unnerving for those it targets. By analyzing vast datasets, AI systems identify patterns and anomalies in tax returns more efficiently than human auditors ever could. This capability is instrumental in tracking high earners who might have underreported their incomes or falsely claimed to have moved permanently to tax-haven states.
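None of the states' systems are public, so the following is only a sketch of the general anomaly-detection pattern being described, using scikit-learn's IsolationForest on made-up return features. The features, parameters, and data are invented for illustration and do not reflect any actual audit model.

```python
# Sketch of the general anomaly-detection pattern described above, using
# scikit-learn's IsolationForest on made-up tax-return features (reported income,
# deduction ratio, days spent in-state). Not the actual system used by any state.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "typical" returns: income ($k), deductions as a share of income, days in state.
typical = np.column_stack([
    rng.normal(250, 60, 500),
    rng.normal(0.15, 0.04, 500),
    rng.normal(300, 30, 500),
])

# A few synthetic outliers: very high income, unusually large deductions, few days in state.
outliers = np.array([[2_000, 0.60, 20], [1_500, 0.55, 10]])

returns = np.vstack([typical, outliers])
model = IsolationForest(contamination=0.01, random_state=0).fit(returns)
flags = model.predict(returns)  # -1 marks returns scored as anomalous

print("Flagged rows:", np.where(flags == -1)[0])
```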

Accountants and tax lawyers confirm that the rate of audits has escalated dramatically over the last six months. Tax authorities are challenging the permanence of moves made during the COVID-19 pandemic, insisting that many owe state taxes irrespective of their new residences. Furthermore, states are scrutinizing remote workers who, despite working entirely out-of-state, are employed by companies based in places like New York.

The fiscal implications for states are significant. With California facing a $38 billion deficit and New York bracing for a $10 billion shortfall next year, the financial incentive to pursue wealthy taxpayers is compelling. The infusion of $80 billion into the IRS, earmarked for enforcement, means that high earners are likely to face audits from both state and federal levels.

Questions linger about the efficacy and fairness of AI-driven audits. Critics ask whether these automated systems might overreach or misinterpret complex tax data, potentially leading to wrongful accusations. Yet, proponents argue that AI could revolutionize tax enforcement by uncovering hidden patterns of evasion that would be impossible for human auditors to detect.

As states and the IRS increasingly rely on artificial intelligence to bolster their audits, the landscape of tax enforcement is undergoing a profound transformation. This shift promises greater efficiency but raises important questions about privacy, fairness, and the transparency of AI algorithms in legal and financial contexts. Whether this trend will lead to a more equitable tax system or merely shift the burden more heavily onto certain groups remains to be seen.

Read more here:
Artificial Intelligence Amplifies State Tax Audits on High Earners - WebProNews

Artificial intelligence in liver cancer – new tools for research and patient management – Nature.com

Originally posted here:
Artificial intelligence in liver cancer new tools for research and patient management - Nature.com