Category Archives: Artificial Intelligence
Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo – Vatican News – English
Congolese national and Claretian priest Fr Joel Nkongolo recently spoke to Fr Paul Samasumo of Vatican News about AI's implications and possible impact on the African Church. Fr Nkongolo is currently based in Nigeria.
Fr Paul Samasumo, Vatican City.
How would you define or describe Artificial Intelligence (AI)?
Artificial Intelligence encompasses a wide range of technologies and techniques that enable machines to mimic human cognitive functions. Machine learning, a subset of AI, allows systems to learn from data without being explicitly programmed. For example, streaming platforms like Netflix use recommendation algorithms to analyze users' viewing history and suggest relevant content. Computer vision technology, another aspect of AI, powers facial recognition systems used in security and authentication applications.
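The recommendation idea mentioned above can be sketched in a few lines of Python. This is a minimal, illustrative sketch only: the titles and genre weights are invented, and real streaming services use far richer models than a single genre vector per title.

```python
# Toy content-based recommender: suggest the unseen title whose genre
# vector is most similar (by cosine similarity) to the user's history.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Invented catalog: title -> (drama, comedy, documentary) weights.
catalog = {
    "Space Drama":    (0.9, 0.1, 0.0),
    "Courtroom Saga": (0.8, 0.2, 0.0),
    "Sitcom Nights":  (0.1, 0.9, 0.0),
    "Ocean Life":     (0.0, 0.1, 0.9),
}

def recommend(watched):
    # Average the genre vectors of what the user has already seen ...
    profile = [sum(v) / len(watched) for v in zip(*(catalog[t] for t in watched))]
    # ... then rank the remaining titles by similarity to that profile.
    unseen = [t for t in catalog if t not in watched]
    return max(unseen, key=lambda t: cosine(catalog[t], profile))

print(recommend(["Space Drama"]))  # "Courtroom Saga" -- another drama
```

A viewer of a heavy-drama title is steered toward the next-most-drama-like title; the same similarity machinery, scaled up, underlies the "because you watched" rows on real platforms.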
Should we be worried and afraid of Artificial Intelligence?
While AI offers numerous benefits, such as improved efficiency, productivity, and innovation, it also raises legitimate concerns. One concern is job displacement, as automation could replace certain tasks traditionally performed by humans. For instance, a study by the McKinsey Global Institute suggests that up to 800 million jobs could be automated by 2030. Additionally, there are ethical concerns surrounding AI, such as algorithmic bias, which can perpetuate discrimination and inequality. For example, facial recognition systems have been found to exhibit higher error rates for people with darker skin tones, leading to unfair treatment in areas like law enforcement.
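The error-rate disparity described above can be quantified with a simple per-group audit. The counts below are invented for illustration and are not taken from any real evaluation; the point is only that comparing per-group rates exposes bias that an aggregate accuracy figure would hide.

```python
# Per-group error rates from hypothetical evaluation counts -- the kind
# of disparity audit used to surface algorithmic bias in a classifier.
results = {
    # group: (misidentifications, total trials) -- invented numbers
    "lighter_skin": (3, 1000),
    "darker_skin": (34, 1000),
}
rates = {g: errors / total for g, (errors, total) in results.items()}
for g, r in rates.items():
    print(f"{g}: {r:.1%} error rate")

# A large ratio between group error rates signals a fairness problem
# even when overall accuracy looks high.
disparity = rates["darker_skin"] / rates["lighter_skin"]
print(f"disparity: {disparity:.1f}x")
```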
Africa's journey towards embracing AI seems relatively slow. Is this a good or bad thing?
Africa's adoption of AI has been relatively slow compared to other regions, owing to factors such as limited infrastructure, digital literacy, and funding. However, this cautious approach can also be viewed as an opportunity to address underlying challenges and prioritize ethical considerations. For example, Ghana recently established two AI Centres to develop AI capabilities while ensuring ethical AI deployment. By taking a deliberate approach, African countries can tailor AI solutions to address local needs and minimize potential negative impacts.
How do you see Artificial Intelligence affecting or impacting the Church in Africa and elsewhere? Should the Church be worried about Artificial Intelligence?
AI can enhance various aspects of Church operations, such as automating administrative tasks, analyzing congregation demographics for targeted outreach, and providing personalized spiritual guidance through chatbots. However, there are ethical considerations, such as ensuring data privacy and maintaining human connection amid technological advancements. For example, sections of the Church of England utilize AI-powered chatbots to engage with congregants online, offering pastoral support and prayer. While AI can augment the Church's outreach efforts, it's essential to maintain human oversight and uphold ethical standards in its use.
How can the Church influence ethical behaviour and good social media conduct?
The Church can leverage its moral authority to promote ethical behaviour and responsible social media use. For instance, Pope Francis has spoken out against the spread of fake news and social media polarisation, emphasizing the importance of truth and dialogue. Additionally, initiatives like Digital Catholicism use online media technologies as tools for evangelization, carrying the message of faith into cyberspace itself. By modelling ethical behaviour and offering guidance on digital citizenship, the Church can foster a culture of respect, empathy, and truthfulness in online interactions.
How can parents, guardians, teachers, parish priests, or pastors help young people avoid becoming enslaved by these technologies?
Adults play a crucial role in guiding young people's use of technology and promoting healthy digital habits. For example, parents and teachers can educate children about the risks of excessive screen time and the importance of balance in their online and offline activities. They can also set limits on device usage, encourage outdoor play, and foster face-to-face social interactions. Moreover, religious leaders can incorporate teachings on mindfulness, self-discipline, and responsible stewardship of technology into their spiritual guidance, helping young people cultivate a healthy relationship with digital media.
Can individuals and society do anything to protect themselves from potential AI harm or abuse by non-democratic governments?
Individuals and civil society organizations can take proactive measures to safeguard against AI abuse by authoritarian regimes. For example, they can advocate for legislation and regulations that protect digital rights, privacy, and freedom of expression. Tools like virtual private networks (VPNs) and encrypted messaging apps can help individuals circumvent government surveillance and censorship. Moreover, international collaboration and solidarity among democratic nations can amplify efforts to hold oppressive regimes accountable for AI misuse and human rights violations.
What would your advice be to those working in education or schools regarding teaching about AI?
Educators have a vital role in preparing students for the AI-driven future by fostering critical thinking, creativity, and ethical decision-making skills. For example, integrating AI literacy into the curriculum can help students understand how AI works, its societal impacts, and ethical considerations. Projects like Google's AI for Social Good initiative provide educational resources and tools for teaching AI concepts in schools. By empowering students to become responsible AI users and innovators, educators can equip them to navigate the opportunities and challenges of the digital age.
Fr Nkongolo, thank you for your time and help in navigating these issues.
Fr. Paul Samasumo, these examples, comparisons, and statistics illustrate the multifaceted nature of AI and its implications for society, including the Church and education. I hope they provide a comprehensive perspective on these complex issues.
Read the original:
Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo - Vatican News - English
A.I. Has a Measurement Problem – The New York Times
There's a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don't really know how smart they are.
That's because, unlike companies that make cars or drugs or baby formula, A.I. companies aren't required to submit their products for testing before releasing them to the public. There's no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.
Instead, we're left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like "improved capabilities" to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.
This might sound like a petty gripe. But I've become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.
For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?
I can't count the number of times I've been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?
See the rest here:
A.I. Has a Measurement Problem - The New York Times
BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era … – PR Newswire
Trailblazing integrated AI platform ushers in an era of firm, affordable, clean energy.
WEST PALM BEACH, Fla., April 17, 2024 /PRNewswire/ -- BrightNight, the next-generation global renewable power producer built to deliver clean and dispatchable solutions, today introduced PowerAlpha, the proprietary software platform that uses cutting-edge Artificial Intelligence, data analytics, and cloud computing to design, optimize, and operate renewable power plants with industry-leading economics.
PowerAlpha was unveiled today at the "AI: Powering the New Energy Era" summit in Washington, D.C. Sponsored by BrightNight and hosted by the Foundation for American Science and Technology, it is the first independent industry summit in North America exclusively dedicated to exploring the influence of Artificial Intelligence on the energy sector. The summit brought together senior public officials, policymakers, energy industry leaders, experts, academia, and investors, including co-sponsors NVIDIA, IBM, C3.ai and Qcells, and representatives from the Department of Energy, U.S. Congress, Intel, Alphabet (Google), EEI, Bank of America, KPMG, EPRI, and other organizations.
BrightNight's PowerAlpha platform accelerates the global clean energy transition and decarbonization by ensuring the generation of reliable power at the lowest attainable cost across various geographies and power grids globally. Its benefits are realized across the design, optimization, and operation stages of a project's lifecycle.
PowerAlpha's unique dispatch hybrid controls interface can also be integrated with utilities' Energy Management Systems for even more operational efficiency and integrated asset management solutions. Also, with the increasing demand for more power to support the growing use of AI applications and corresponding data centers, PowerAlpha is leveraging AI capabilities to drive greater efficiencies, better designs, and higher capacity renewable projects.
Martin Hermann, CEO of BrightNight, said: "Through relentless innovation, the dream of a sustainable clean energy transition, where renewable power is affordable and reliable, is now within reach. The integration of Artificial Intelligence with renewable energy offers solutions to numerous challenges, from demand forecasting and smart grid management to labor shortages and the design of efficient power projects. I am proud that BrightNight is at the forefront of this technological revolution, bringing the clean energy transition closer to reality than ever before."
Kiran Kumaraswamy, CTO of BrightNight, said: "PowerAlpha provides a fully integrated platform that is highly differentiated in the marketplace in its ability to design, optimize, and operate hybrid renewable power projects with cutting edge capabilities enabled by AI and utilizing patent-pending algorithms. Our team is redefining industry standards, optimizing renewable projects globally to integrate new load like data centers and increase utilization of existing infrastructure."
BrightNight has delivered a number of PowerAlpha projects and use cases in the U.S. and globally, where it is helping customers optimize existing transmissions infrastructure, integrate long and short duration storage solutions, increase capacity, and improve dispatchability of renewable assets, as well as repower existing projects.
BrightNight is also using PowerAlpha to develop projects with its partner in India and the Philippines, ACEN, one of Asia's leading renewable companies. ACEN Group CIO Patrice Clausse said, "PowerAlpha has been an important enabler for our investment decision-making. We feel confident leveraging its capabilities to simulate the performance of numerous hybrid renewable power plant configurations incorporating solar, wind, and energy storage and identify the most efficient configuration. This has enabled us to optimize our plant configuration and was a critical component in our recent tender wins."
For more information about PowerAlpha and BrightNight's 37 GW renewable power portfolio, please see http://www.brightnightpower.com/poweralpha or contact [emailprotected].
About BrightNight
BrightNight is the first global renewable integrated power company designed to provide utility and commercial and industrial customers with clean, dispatchable renewable power solutions. BrightNight works with customers across the U.S. and Asia Pacific to design, develop, and operate safe, reliable, large-scale renewable power projects optimized through its proprietary software platform PowerAlpha to better manage the intermittent nature of renewable energy. Its deep customer engagement process, team of proven power experts, and industry-leading solutions enable customers to overcome challenging energy sustainability standards, rapidly changing grid dynamics, and the transition away from fossil fuel generation. To learn more, please visit: www.brightnightpower.com
SOURCE BrightNight
See original here:
BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era ... - PR Newswire
Artificial Intelligence Feedback on Physician Notes Improves Patient Care – NYU Langone Health
Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients future needs, a new study finds.
Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors' clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. Over time, the informatics team trained the AI models to track in dashboards how well doctors' notes achieved the 5 Cs: completeness, conciseness, contingency planning, correctness, and clinical assessment.
Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.
This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients future needs saw improvements of up to 34 percent.
Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.
The NYU Langone case study also showed that GPT-4 or other large language models could provide a method for assessing the 5 Cs across medical specialties without specialized training in each. Researchers say that the generalizability of GPT-4 for evaluating note quality supports its potential for application at many health systems.
"Our study provides evidence that AI can improve the quality of medical notes, a critical part of caring for patients," said lead study author Jonah Feldman, MD, medical director of clinical transformation and informatics within NYU Langone's Medical Center Information Technology (MCIT) Department of Health Informatics. "This is the first large-scale study to show how a healthcare organization can use a combination of AI models to give note feedback that significantly improves care quality."
Poor note quality in healthcare has been a growing concern since the enactment of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The act gave incentives to healthcare systems to switch from paper to electronic health records (EHR), enabling improved patient safety and coordination between healthcare providers.
A side effect of EHR adoption, however, has been that physician clinical notes are now four times longer on average in the United States than in other countries. Such "note bloat" has been shown to make it harder for collaborating clinicians to understand diagnoses described by their colleagues, say the study authors. Issues with note quality have been shown in the field to lead to missed diagnoses and delayed treatments, and there is no universally accepted methodology for measuring note quality. Further, evaluation of note quality by human peers is time-consuming and hard to scale up to the organizational level, the researchers say.
The effort captured in the new NYU Langone case study outlines a structured approach for organizational development of AI-based note quality measurement, a related system for process improvement, and a demonstration of AI-fostered clinician behavioral change in combination with other safety programs. The study also details how AI-generated note quality measurement helped to foster adoption of standard workflows, a significant driver for quality improvement.
Each of the four medical specialties that participated in the study achieved the institutional goal: more than 75 percent of inpatient history and physical exams and consult notes were being completed using standardized workflows that drove compliance with quality metrics. This represented an improvement from the previous share of less than 5 percent.
"Our study represents the founding stage of what will undoubtedly be a national trend to leverage cutting-edge tools to ensure clinical documentation of the highest quality, measurably and reproducibly," said study author Paul A. Testa, MD, JD, MPH, chief medical information officer for NYU Langone. "The clinical note can be a foundational tool, if accurate, accessible, and effective, to truly influence clinical outcomes by meaningfully engaging patients while ensuring documentation integrity."
Along with Dr. Feldman and Dr. Testa, the current study's authors from NYU Langone were Katherine Hochman, MD, MBA, Benedict Vincent Guzman, Adam J. Goodman, MD, and Joseph M. Weisstuch, MD.
Greg Williams Phone: 212-404-3500 Gregory.Williams@NYULangone.org
Read more:
Artificial Intelligence Feedback on Physician Notes Improves Patient Care - NYU Langone Health
Small Businesses Face Uphill Battle in AI Race, Says AI Index Head – PYMNTS.com
Small and medium-sized businesses will struggle to keep pace with tech giants like OpenAI in developing their own artificial intelligence (AI) models, according to a new report from Stanford University.
In an interview, Nestor Maslej, the editor-in-chief of Stanford's newly released 2024 AI Index Report, highlighted the study's findings on the growing AI divide between large and small companies. While tech behemoths pour billions into AI R&D, smaller firms lack the resources and talent to compete head-on.
"A small or even medium-sized business will not be able to train a frontier foundation model that can compete with the likes of GPT-4, Gemini or Claude," Maslej said. "However, there are some fairly competent open-source models, such as Llama 2 and Mistral, that are freely accessible. A lot can be done with these kinds of open-source models, and they are likely to continue improving over time. In a few years, there may be an open, relatively low-parameter model that works as well as GPT-4 does today."
A study from PYMNTS last year highlighted that generative AI technologies such as OpenAI's ChatGPT could significantly enhance productivity, yet they also risk disrupting employment patterns.
A major takeaway from the report is the possible disconnect between AI benchmarks and actual business requirements in the real world.
"To me, it is less about improving the models on these tasks and more about asking whether the benchmarks we have are even well-suited to evaluate the business utility of these systems," Maslej stated. The current benchmarks may not be well-aligned with the real-world needs of businesses.
The report indicated that while private investment in AI generally declined last year, funding for generative AI experienced a dramatic surge, growing nearly eightfold from 2022 to $25.2 billion. Leading players in the generative AI industry, including OpenAI, Anthropic, Hugging Face and Inflection, reported substantial increases in their fundraising efforts.
Maslej highlighted that while the costs of adopting AI are considerable, they are overshadowed by the expenses associated with training the systems.
"Adoption is less of a cost problem because the real cost lies in training the systems. Most companies do not need to worry about training their own models and can instead adopt existing models, which are available either freely through open source or through relatively cost-accessible APIs," he explained.
The report also calls for standardized benchmarks in responsible AI development. Maslej imagines a future where common benchmarks allow businesses to easily compare and choose AI models that match their ethical standards. "Standardization would make it simpler for businesses to more confidently ascertain how various AI models compare to one another," he stated.
Balancing profit with ethical concerns emerges as a key challenge. The report shows that while many businesses are concerned about issues like privacy and data governance, fewer are taking concrete steps to mitigate these risks. "The more pressing question is whether businesses are actually taking steps to address some of these concerns," Maslej noted.
Measuring AI's impact on worker productivity across different industries remains complex. "It is possible to measure productivity within various industries; however, comparing productivity gains across industries is more challenging," Maslej said.
Looking ahead, the report highlights the need for businesses to navigate an increasingly complex regulatory landscape. On Tuesday, Utah Sen. Mitt Romney and several Senate colleagues unveiled a plan to guard against the potential dangers of AI. These include threats in biological, chemical, cyber and nuclear areas by increasing federal regulation of advanced technological developments.
Maslej emphasized the importance of staying vigilant: "Navigating this issue will be challenging. The regulatory standards for AI are still unclear."
As public awareness of AI grows, Maslej believes that businesses must address concerns about job displacement and data privacy. "As people become more aware of AI, how can businesses proactively address nervousness, especially regarding job displacement and data privacy?" he posed as a crucial question for the industry to consider.
The 2024 AI Index Report is meant to guide businesses and society in navigating the rapid advancements in artificial intelligence. Maslej concluded, "The AI landscape is evolving at an unprecedented pace, presenting both immense opportunities and daunting challenges."
Go here to see the original:
Small Businesses Face Uphill Battle in AI Race, Says AI Index Head - PYMNTS.com
NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses – PYMNTS.com
The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems, releasing new guidance to help businesses protect their AI from hackers.
As AI increasingly integrates into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA's Cybersecurity Information Sheet provides insights into AI's unique security challenges and offers steps companies can take to harden their defenses.
"AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis," NSA Cybersecurity Director Dave Luber said Monday (April 15) in a news release.
The report suggested that organizations using AI systems should put strong security measures in place to protect sensitive data and prevent misuse. Key measures include conducting ongoing compromise assessments, hardening the IT deployment environment, enforcing strict access controls, using robust logging and monitoring, and limiting access to model weights.
"AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process," Jon Clay, vice president of threat intelligence at the cybersecurity company Trend Micro, told PYMNTS. "AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries."
As reported by PYMNTS, AI is revolutionizing how security teams approach cyber threats by accelerating and streamlining their processes. Through its ability to analyze large datasets and identify complex patterns, AI automates the early stages of incident analysis, enabling security experts to start with a clear understanding of the situation and respond more quickly.
Cybercrime continues to rise with the increasing embrace of a connected global economy. According to an FBI report, the U.S. alone saw cyberattack losses exceed $10.3 billion in 2022.
AI systems are particularly prone to attacks due to their dependency on data for training models, according to Clay.
"Since AI and machine learning depend on providing and training data to build their models, compromising that data is an obvious way for bad actors to poison AI/ML systems," Clay said.
He emphasized the risks of these hacks, explaining that they can lead to stolen confidential data, harmful commands being inserted and biased results. These issues could upset users and even lead to legal problems.
Clay also pointed out the challenges in detecting vulnerabilities in AI systems.
"It can be difficult to identify how they process inputs and make decisions, making vulnerabilities harder to detect," he said.
He noted that hackers are looking for ways to get around AI security to change its results, and this method is being talked about more in secret online forums.
When asked about measures businesses can implement to enhance AI security, Clay emphasized the necessity of a proactive approach.
"It's unrealistic to ban AI outright, but organizations need to be able to manage and regulate it," he said.
Clay recommended adopting zero-trust security models and using AI to enhance safety measures. In this approach, AI itself can help analyze emotions and tones in communications and check web pages to stop fraud. He also stressed the importance of strict access rules and multi-factor authentication to protect AI systems from unauthorized access.
"As businesses embrace AI for enhanced efficiency and innovation, they also expose themselves to new vulnerabilities," Malcolm Harkins, chief security and trust officer at the cybersecurity firm HiddenLayer, told PYMNTS.
AI was the most vulnerable technology deployed in production systems because it was vulnerable at multiple levels, Harkins added.
Harkins advised businesses to take proactive measures, such as implementing purpose-built security solutions, regularly assessing the robustness of AI models, monitoring continuously, and developing comprehensive incident response plans.
"If real-time monitoring and protection were not in place, AI systems would surely be compromised, and the compromise would likely go unnoticed for extended periods, creating the potential for more extensive damage," Harkins said.
See more here:
NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses - PYMNTS.com
Artificial intelligence studio for college students opens in Warner Robins at the VECTR Center – 13WMAZ.com
The AI-Enhanced Robotic Manufacturing training program offered at Central Georgia Technical College is preparing student veterans and active duty service members.
Author: 13wmaz.com
Published: 8:43 AM EDT April 17, 2024
Updated: 8:43 AM EDT April 17, 2024
See original here:
Artificial intelligence studio for college students opens in Warner Robins at the VECTR Center - 13WMAZ.com
Artificial Intelligence Amplifies State Tax Audits on High Earners – WebProNews
As fears about artificial intelligence (AI) veer from job displacement to broader societal control, state tax departments are harnessing this potent technology to significantly boost audits on high earners. Robert Frank of CNBC highlights how high-tax, Democratic-controlled states like New York and California are increasingly deploying AI to scrutinize the tax declarations of the wealthy, intensifying efforts to reclaim unreported income.
In the past year, high-tax states have issued a surge in audit letters, with figures marking a 56% increase from the previous year. The targets? Affluent individuals who relocated across state lines during the pandemic and remote workers whose physical locations do not align with their company's base.
AI's role in these audits is groundbreaking and unnerving for those it targets. By analyzing vast datasets, AI systems identify patterns and anomalies in tax returns more efficiently than human auditors ever could. This capability is instrumental in tracking high earners who might have underreported their incomes or falsely claimed to have moved permanently to tax-haven states.
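The pattern-and-anomaly detection described above can be sketched with a simple z-score screen. The ratios below are invented, and production audit systems combine many features with far more sophisticated models; this only shows the basic idea of flagging returns that sit far from the statistical norm.

```python
# Minimal anomaly screen: flag filers whose deduction-to-income ratio
# sits more than `threshold` standard deviations from the mean.
import statistics

def flag_outliers(ratios, threshold=2.0):
    mean = statistics.mean(ratios)
    sd = statistics.stdev(ratios)
    return [i for i, r in enumerate(ratios) if abs(r - mean) / sd > threshold]

# Invented deduction-to-income ratios for seven filers; index 4 is extreme.
returns = [0.12, 0.15, 0.11, 0.14, 0.60, 0.13, 0.12]
print(flag_outliers(returns))  # [4] -- the outlier is flagged for review
```

Scaled up across millions of returns and dozens of features (residency days, withholding location, filing history), the same statistical logic lets a handful of auditors concentrate on the returns most likely to warrant scrutiny.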
Accountants and tax lawyers confirm that the rate of audits has escalated dramatically over the last six months. Tax authorities are challenging the permanence of moves made during the COVID-19 pandemic, insisting that many owe state taxes irrespective of their new residences. Furthermore, states are scrutinizing remote workers who, despite working entirely out-of-state, are employed by companies based in places like New York.
The fiscal implications for states are significant. With California facing a $38 billion deficit and New York bracing for a $10 billion shortfall next year, the financial incentive to pursue wealthy taxpayers is compelling. The infusion of $80 billion into the IRS, earmarked for enforcement, means that high earners are likely to face audits from both state and federal levels.
Questions linger about the efficacy and fairness of AI-driven audits. Critics ask whether these automated systems might overreach or misinterpret complex tax data, potentially leading to wrongful accusations. Yet, proponents argue that AI could revolutionize tax enforcement by uncovering hidden patterns of evasion that would be impossible for human auditors to detect.
As states and the IRS increasingly rely on artificial intelligence to bolster their audits, the landscape of tax enforcement is undergoing a profound transformation. This shift promises greater efficiency but raises important questions about privacy, fairness, and the transparency of AI algorithms in legal and financial contexts. Whether this trend will lead to a more equitable tax system or merely shift the burden more heavily onto certain groups remains to be seen.
Read more here:
Artificial Intelligence Amplifies State Tax Audits on High Earners - WebProNews