Category Archives: Artificial Intelligence

Machine learning and experiment | symmetry magazine – Symmetry magazine

Every day in August of 2019, physicist Dimitrios Tanoglidis would walk to the Plein Air Café next to the University of Chicago and order a cappuccino. After finding a table, he would spend the next several hours flipping through hundreds of thumbnail images of white smudges recorded by the Dark Energy Camera, a telescope that at the time had observed 300 million astronomical objects.

For each white smudge, Tanoglidis would ask himself a simple yes-or-no question: Is this a galaxy? "I would go through about 1,000 images a day," he says. "About half of them were galaxies, and the other half were not."

After about a month, Tanoglidis, who was a University of Chicago PhD student at the time, had built up a catalogue of 20,000 low-brightness galaxies.

Then Tanoglidis and his team used this dataset to create a tool that, once trained, could evaluate a similar dataset in a matter of moments. "The accuracy of our algorithm was very close to the human eye," he says. "In some cases, it was even better than us and would find things that we had misclassified."

The tool they created was based on machine learning, a type of software that learns as it digests data, says Aleksandra Ciprijanovic, a physicist at the US Department of Energy's Fermi National Accelerator Laboratory who at the time was one of Tanoglidis's research advisors. "It's inspired by how neurons in our brains work," she says, adding that this added brainpower will be essential for analyzing exponentially larger datasets from future astronomical surveys. "Without machine learning, we'd need a small army of PhD students to go through the same type of dataset."

Today, the Dark Energy Survey collaboration has a catalogue of 700 million astronomical objects, and scientists continue to use (and improve) Tanoglidis's tool to analyze images that could show previously undiscovered galaxies.

"In astronomy, we have a huge amount of data," Ciprijanovic says. "No matter how many people and resources we have, we'll never have enough people to go through all the data."

Classification ("this is probably a photo of a galaxy" versus "this is probably not a photo of a galaxy") was one of machine learning's earliest applications in science. Over time, its uses have continued to evolve.

Machine learning, which is a subset of artificial intelligence, is a type of software that can, among other things, help scientists understand the relationships between variables in a dataset.

According to Gordon Watts, a physicist at the University of Washington, scientists traditionally figured out these relationships by plotting the data and looking for the mathematical equations that could describe it. "Math came before the software," Watts says.

This math-only method is relatively straightforward when looking for the relationship between only a few variables: the pressure of a gas as a function of its temperature and volume, or the acceleration of a ball as a function of the force of an athlete's kick and the ball's mass. But finding these relationships with nothing but math becomes nearly impossible as you add more and more variables.

"A lot of the problems we're tackling in science today are very complicated," Ciprijanovic says. "Humans can do a good job with up to three dimensions, but how do you think about a dataset if the problem is 50- or 100-dimensional?"

This is where machine learning comes in.

"Artificial intelligence doesn't care about the dimensionality of the problems," Ciprijanovic says. "It can find patterns and make sense of the data no matter how many different dimensions are added."
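As a toy illustration of the kind of high-dimensional classification Ciprijanovic describes, here is a hedged sketch in Python using scikit-learn. The synthetic 100-dimensional "features" stand in for real survey images; this is not the Dark Energy Survey's actual tool.

```python
# A hedged sketch, not the Dark Energy Survey pipeline: train a small neural
# network to separate "galaxy" from "not a galaxy" using synthetic features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=42)

# Pretend each object is summarized by 100 measurements: a 100-dimensional problem.
n_objects = 5000
X = rng.normal(size=(n_objects, 100))
# Synthetic labels (1 = "galaxy", 0 = "not a galaxy") derived from a hidden rule.
y = (X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=n_objects) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The "inspired by neurons" model: two small hidden layers.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

print(f"Held-out accuracy: {clf.score(X_test, y_test):.3f}")
```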

Some physicists have been using machine-learning tools since the 1950s, but their widespread use in the field is a relatively new phenomenon.

"The idea to use a [type of machine learning called a] neural network was proposed to the CDF experiment at the Tevatron in 1989," says Tommaso Dorigo, a physicist at the Italian National Institute for Nuclear Physics, INFN. "People in the collaboration were both amused and disturbed by this."

Amused because of its novelty; disturbed because it added a layer of opacity into the scientific process.

Machine-learning models are sometimes called "black boxes" because it is hard to tell exactly how they are handling the data put into them; their large number of parameters and complex architectures are difficult to understand. Because scientists want to know exactly how a result is calculated, many physicists have been skeptical of machine learning and reluctant to implement it into their analyses. "In order for a scientific collaboration to sign off on a new method, they first must exhaust all possible doubts," Dorigo says.

Scientists found a reason to work through those doubts after the Large Hadron Collider came online, an event that coincided with the early days of the ongoing boom in machine learning in industry.

Josh Bendavid, a physicist at the Massachusetts Institute of Technology, was an early adopter. "When I joined CMS, machine learning was a thing, but seeing limited use," he says. "But there was a big push to implement machine learning into the search for the Higgs boson."

The Higgs boson is a fundamental particle that helps explain why some particles have mass while others do not. Theorists predicted its existence in the 1960s, but finding it experimentally was a huge challenge. That's because Higgs bosons are both incredibly rare and incredibly short-lived, quickly decaying into other particles such as pairs of photons.

In 2010, when the LHC experiments first started collecting data for physics, machine learning was widely used in industry and academia for classification ("this is a photo of a cat" versus "this is not a photo of a cat"). Physicists were using machine learning in a similar way ("this is a collision with two photons" versus "this is not a collision with two photons").

But according to Bendavid, simply finding photons was not enough. Pairs of photons are produced in roughly one out of every 100 million collisions in the LHC. But Higgs bosons that decay into pairs of photons are produced in only one out of every 500 billion. To find Higgs bosons, scientists needed to find sets of photons that had a combined energy close to the mass of the Higgs. This means they needed more complex algorithms, ones that could not only recognize photons, but also interpret the energy of photons based on how they interacted with the detector. "It's like trying to estimate the weight of a cat in a photograph," Bendavid says.
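The arithmetic behind that search is compact enough to sketch. For two effectively massless photons, the pair's invariant mass follows from their energies and opening angle, m = sqrt(2 E1 E2 (1 - cos theta)), and a Higgs candidate is a pair whose invariant mass lands near 125 GeV. The numbers below are made up for illustration, not real event data:

```python
# Back-of-the-envelope sketch of the diphoton invariant mass described above.
import math

def diphoton_invariant_mass(e1_gev: float, e2_gev: float, opening_angle_rad: float) -> float:
    """Invariant mass (GeV) of two massless photons from energies and opening angle."""
    return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(opening_angle_rad)))

# Illustrative values only (not real event data):
m = diphoton_invariant_mass(62.0, 63.5, math.radians(170.0))
print(f"Diphoton invariant mass: {m:.1f} GeV")  # lands close to the ~125 GeV Higgs mass
```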

That became possible when LHC scientists created high-quality detector simulations, which they could use to train their algorithms to find the photons they were looking for, Bendavid says.

Bendavid and his colleagues simulated millions of photons and looked at how they lost energy as they moved through the detector. According to Bendavid, the algorithms they trained were much more sensitive than traditional techniques.

And the algorithms worked. In 2012, the CMS and ATLAS experiments announced the discovery of the Higgs boson, just two years into studying particle collisions at the LHC.

"We would have needed a factor of two more data to discover the Higgs boson if we had tried to do the analysis without machine learning," Bendavid says.

After the Higgs discovery, the LHC research program saw its own boom in machine learning. "Before 2012, you would have had a hard time publishing something that used neural networks," Dorigo says. After 2012, if you wanted to publish an analysis that didn't use machine learning, you'd face questions and objections.

Today, LHC scientists use machine learning to simulate collisions, evaluate and process raw data, tease signal from background, and even search for anomalies. While these advancements were happening at the LHC, scientists were watching closely from another, related field: neutrino research.

Neutrinos are ghostly particles that rarely interact with ordinary matter. According to Jessie Micallef, a fellow at the National Science Foundation's Institute for Artificial Intelligence and Fundamental Interactions at MIT, early neutrino experiments would detect only a few particles per year. With such small datasets, scientists could easily reconstruct and analyze events with traditional methods.

That is how Micallef worked on a prototype detector as an intern at Lawrence Berkeley National Laboratory in 2015. "I would measure electrons drifting in a little tabletop detector, come back to my computer, and make plots of what we saw," they say. "I did a lot of programming to find the best-fit lines for our data."

But today, their detectors and neutrino beams are much larger and more powerful. "We're talking with people at the LHC about how to deal with pileup," Micallef says.

Neutrino physicists now use machine learning both to find the traces neutrinos leave behind as they pass through the detectors and to extract their properties, such as their energy and flavor. These days, Micallef collects their data, imports it into their computer, and starts the analysis process. But instead of toying with the equations, Micallef says that they let machine learning do a lot of the analysis for them.

"At first, it seemed like a whole new world," they say, but it wasn't a magic bullet. Then there was validating the output. "I would change one thing, and maybe the machine-learning algorithm would do really good in one area but really bad in another."

"My work became thinking about how machine learning works, what its limitations are, and how we can get the most out of it."

Today, Micallef is developing machine-learning tools that will help scientists with some of the unique challenges of working with neutrinos, including using gigantic detectors to study not just high-powered neutrinos blasting through from outside the Milky Way, but also low-energy neutrinos that could come from nearby.

Neutrino detectors are so big that the sizes of the signals they measure can be tiny by comparison. For instance, the IceCube experiment at the South Pole uses about a cubic kilometer of ice peppered with 5,000 sensors. But when a low-energy neutrino hits the ice, only a handful of those sensors light up.

"Maybe a dozen out of 5,000 detectors will see the neutrino," Micallef says. "The pictures we're looking at are mostly empty space, and machine learning can get confused if you teach it that only 12 sensors out of 5,000 matter."

Neutrino physicists and scientists at the LHC are also using machine learning to give a more nuanced interpretation of what they are seeing in their detectors.

"Machine learning is very good at giving a continuous probability," Watts says.

For instance, instead of classifying a particle in a binary way ("this event is a muon neutrino" versus "this event is not a muon neutrino"), machine learning can provide an uncertainty associated with its assessment.

"This could change the overall outcome of our analysis," Micallef says. "If there is a lot of uncertainty, it might make more sense for us to throw that event away or analyze it by hand. It's a much more concrete way of looking at how reliable these methods are and is going to be more and more important in the future."
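A minimal sketch of how such a probability might be used downstream, with illustrative thresholds rather than any experiment's actual cuts:

```python
# Hedged sketch: route events by the classifier's probability instead of a
# hard yes/no label. The thresholds here are illustrative assumptions.
def route_event(p_muon_neutrino: float,
                accept_above: float = 0.9,
                reject_below: float = 0.1) -> str:
    """Decide what to do with an event given P(muon neutrino)."""
    if p_muon_neutrino >= accept_above:
        return "accept as muon neutrino"
    if p_muon_neutrino <= reject_below:
        return "reject"
    return "flag for hand analysis"  # too uncertain for an automatic call

for p in (0.97, 0.55, 0.03):
    print(f"P = {p:.2f} -> {route_event(p)}")
```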

Physicists use machine learning throughout almost all parts of data collection and analysis. But what if machine learning could be used to optimize the experiment itself? "That's the dream," Watts says.

Detectors are designed by experts with years of experience, and every new detector incrementally improves upon what has been done before. But Dorigo says he thinks machine learning could help detector designers innovate. "If you look at calorimeters designed in the 1970s, they look a lot like the calorimeters we have today," Dorigo says. "There is no notion of questioning paradigms."

Experiments such as CMS and ATLAS are made from hundreds of individual detectors that work together to track and measure particles. Each subdetector is enormously complicated, and optimizing each one's design, not as an individual component but as a part of a complex ecosystem, is nearly impossible. "We accept suboptimal results because the human brain is incapable of thinking in 1,000 dimensions," Dorigo says.

But what if physicists could look at the detector holistically? According to Watts, physicists could (in theory) build a machine-learning algorithm that considers physics goals, budget, and real-world limitations to choose the optimal detector design: a symphony of perfectly tailored hardware all working in harmony.

Scientists still have a long way to go. "There's a lot of potential," Watts says. "But we haven't even learned to walk yet. We're only just starting to crawl."

They are making progress. Dorigo is a member of the Southern Wide-field Gamma-ray Observatory, a collaboration that wants to build an array of 6,000 particle detectors in the highlands of South America to study gamma rays from outer space. The collaboration is currently assessing how to arrange and place these 6,000 detectors. "We have an enormous number of possible solutions," Dorigo says. "The question is: how to pick the best one?"

To find out, Dorigo and his colleagues took into account the questions they wanted to answer, the measurements they wanted to take, and the number of detectors they had available to use. This time, though, they also developed a machine-learning tool that did the same, and found that it agreed with them.

They plugged a number of reasonable initial layouts into the program and allowed it to run simulations and gradually tweak the detector placement. "No matter the initial layout, every simulation always converged to the same solution," Dorigo says.
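That convergence behavior can be mimicked in a toy example: start an optimizer from several different initial guesses and watch every run settle on the same answer. The single spacing parameter and the cost function below are invented stand-ins for a real simulation-driven figure of merit, not the collaboration's actual objective:

```python
# Toy layout optimization: different starting layouts, same optimum.
import numpy as np
from scipy.optimize import minimize

def layout_cost(params: np.ndarray) -> float:
    """Invented figure of merit for detector spacing (arbitrary km units).

    Wider spacing misses faint showers; tighter spacing covers less ground.
    The trade-off has a single best value, so every run should find it.
    """
    spacing = params[0]
    if spacing <= 0:
        return np.inf
    sensitivity_loss = 0.16 * spacing  # penalty grows as the array spreads out
    coverage_loss = 1.0 / spacing      # penalty grows as the array bunches up
    return sensitivity_loss + coverage_loss

rng = np.random.default_rng(0)
for trial in range(3):
    start = rng.uniform(0.5, 10.0, size=1)  # a different initial layout each run
    result = minimize(layout_cost, start, method="Nelder-Mead")
    print(f"start spacing {start[0]:5.2f} km -> optimum {result.x[0]:.2f} km")
```

Every run converges to the same spacing (2.5 km for this made-up cost), which is the toy analogue of the behavior Dorigo describes.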

Even though he knows there is still a long way to go, Dorigo says that machine-learning-aided detector design is the future. "We're designing experiments today that will operate 10 years from now," he says. "We have to design our detectors to work with the analysis tools of the future, and so machine learning has to be an ingredient in those decisions."

See original here:
Machine learning and experiment | symmetry magazine - Symmetry magazine

Meta Says It Plans to Spend Billions More on A.I. – The New York Times

Meta projected on Wednesday that revenue for the current quarter would be lower than what Wall Street anticipated and said it would spend billions of dollars more on its artificial intelligence efforts, even as it reported robust revenue and profits for the first three months of the year.

Revenue for the company, which owns Facebook, Instagram, WhatsApp and Messenger, was $36.5 billion in the first quarter, up 27 percent from $28.6 billion a year earlier and slightly above Wall Street estimates of $36.1 billion, according to data compiled by FactSet. Profit was $12.4 billion, more than double the $5.7 billion a year earlier.

But Meta's work on A.I., which requires substantial computing power, comes with a lofty price tag. The Silicon Valley company said it planned to raise its spending forecast for the year to $35 billion to $40 billion, up from a previous estimate of $30 billion to $37 billion. The move was driven by heavy investments in A.I. infrastructure, including data centers, chip designs, and research and development.

Meta also predicted that revenue for the current quarter would be $36.5 billion to $39 billion, lower than analysts' expectations.

The combination of higher spending and lighter-than-expected revenue spooked investors, who sent Meta's shares down more than 16 percent on Wednesday afternoon after they ended regular trading at $493.50.

"Meta's earnings should serve as a stark warning for companies reporting this earnings season," said Thomas Monteiro, a senior analyst at Investing.com. While the company's results were robust, they didn't matter as much as the lowered revenue expectations for the current quarter, he said, adding, "Investors are currently looking at the near future with heavy mistrust."


See the original post here:
Meta Says It Plans to Spend Billions More on A.I. - The New York Times

Elon Musk’s xAI Close to Raising $6 Billion – PYMNTS.com

Elon Musk's artificial intelligence (AI) startup xAI is reportedly close to raising $6 billion from investors.

The funding round would value xAI at $18 billion, Bloomberg reported Friday (April 26).

Silicon Valley venture capital (VC) firm Sequoia Capital has committed to investing in the startup, according to the Financial Times (FT), which reported the same figures as Bloomberg.

Musk has also approached other investors who, like Sequoia Capital, participated in his 2022 acquisition of Twitter, which he later renamed X, the FT reported.

Musk announced the launch of xAI in July 2023 after hinting for months that he wanted to build an alternative to OpenAI's AI-powered chatbot, ChatGPT. He was involved in the creation of OpenAI but left its board in 2018 and has been increasingly critical of the company and cautious about developments around AI in general.

Two days later, during a Twitter Spaces introduction of xAI to the public, Musk said that while he sees the firm in direct competition with larger businesses like OpenAI, Microsoft, Alphabet and Meta, as well as upstarts like Anthropic, his firm is taking a different approach to establishing its foundation model.

"AGI [artificial general intelligence] being brute forced is not succeeding," Musk said, adding that while xAI is "not trying to solve AGI on a laptop, [and] there will be heavy compute," his team will have free rein to explore ideas other than scaling up the foundational model's data parameters.

In November 2023, xAI rolled out its AI model called Grok, saying on its website: "Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don't use it if you hate humor!"

The company added that Grok has a real-time knowledge of the world thanks to the Musk-owned social media platform X; will answer "spicy questions" that are rejected by most of the other AI systems; and upon its launch had capabilities rivaling those of Meta's LLaMA 2 AI model and OpenAI's GPT-3.5.

In March, xAI unveiled its open-source AI model. Musk said at the time: "We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI."

View post:
Elon Musk's xAI Close to Raising $6 Billion - PYMNTS.com

Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo – Vatican News – English

Congolese national and Claretian priest Fr Joel Nkongolo recently spoke to Fr Paul Samasumo of Vatican News about AI's implications or possible impact on the African Church. Fr Nkongolo is currently based in Nigeria.

Fr Paul Samasumo Vatican City.

How would you define or describe Artificial Intelligence (AI)?

Artificial Intelligence encompasses a wide range of technologies and techniques that enable machines to mimic human cognitive functions. Machine learning, a subset of AI, allows systems to learn from data without being explicitly programmed. For example, streaming platforms like Netflix use recommendation algorithms to analyze users' viewing history and suggest relevant content. Computer vision technology, another aspect of AI, powers facial recognition systems used in security and authentication applications.

Should we be worried and afraid of Artificial Intelligence?

While AI offers numerous benefits, such as improved efficiency, productivity, and innovation, it also raises legitimate concerns. One concern is job displacement, as automation could replace certain tasks traditionally performed by humans. For instance, a study by the McKinsey Global Institute suggests that up to 800 million jobs could be automated by 2030. Additionally, there are ethical concerns surrounding AI, such as algorithmic bias, which can perpetuate discrimination and inequality. For example, facial recognition systems have been found to exhibit higher error rates for people with darker skin tones, leading to unfair treatment in areas like law enforcement.

Africa's journey towards embracing AI seems relatively slow. Is this a good or bad thing?

Africa's adoption of AI has been relatively slow compared to other regions, attributed to factors such as limited infrastructure, digital literacy, and funding. However, this cautious approach can also be viewed as an opportunity to address underlying challenges and prioritize ethical considerations. For example, Ghana recently established two AI Centres to develop AI capabilities while ensuring ethical AI deployment. By taking a deliberate approach, African countries can tailor AI solutions to address local needs and minimize potential negative impacts.

How do you see Artificial Intelligence affecting or impacting the Church in Africa and elsewhere? Should the Church be worried about Artificial Intelligence?

AI can enhance various aspects of Church operations, such as automating administrative tasks, analyzing congregation demographics for targeted outreach, and providing personalized spiritual guidance through chatbots. However, there are ethical considerations, such as ensuring data privacy and maintaining human connection amid technological advancements. For example, sections of the Church of England utilize AI-powered chatbots to engage with congregants online, offering pastoral support and prayer. While AI can augment the Church's outreach efforts, it's essential to maintain human oversight and uphold ethical standards in its use.

How can the Church influence ethical behaviour and good social media conduct?

The Church can leverage its moral authority to promote ethical behaviour and responsible social media use. For instance, Pope Francis has spoken out against the spread of fake news and social media polarisation, emphasizing the importance of truth and dialogue. Additionally, initiatives like Digital Catholicism involve leveraging online media technologies as tools for evangelization while simultaneously spreading the message of faith in cyberspace itself. So, by modelling ethical behaviour and offering guidance on digital citizenship, the Church can foster a culture of respect, empathy, and truthfulness in online interactions.

How can parents, guardians, teachers, parish priests, or pastors help young people avoid becoming enslaved by these technologies?

Adults play a crucial role in guiding young people's use of technology and promoting healthy digital habits. For example, parents and teachers can educate children about the risks of excessive screen time and the importance of balance in their online and offline activities. They can also set limits on device usage, encourage outdoor play, and foster face-to-face social interactions. Moreover, religious leaders can incorporate teachings on mindfulness, self-discipline, and responsible stewardship of technology into their spiritual guidance, helping young people cultivate a healthy relationship with digital media.

Can individuals and society do anything to protect themselves from potential AI harm or abuse by non-democratic governments?

Individuals and civil society organizations can take proactive measures to safeguard against AI abuse by authoritarian regimes. For example, they can advocate for legislation and regulations that protect digital rights, privacy, and freedom of expression. Tools like virtual private networks (VPNs) and encrypted messaging apps can help individuals circumvent government surveillance and censorship. Moreover, international collaboration and solidarity among democratic nations can amplify efforts to hold oppressive regimes accountable for AI misuse and human rights violations.

What would your advice be to those working in education or schools regarding teaching about AI?

Educators have a vital role in preparing students for the AI-driven future by fostering critical thinking, creativity, and ethical decision-making skills. For example, integrating AI literacy into the curriculum can help students understand how AI works, its societal impacts, and ethical considerations. Projects like Google's AI for Social Good initiative provide educational resources and tools for teaching AI concepts in schools. By empowering students to become responsible AI users and innovators, educators can effectively equip them to navigate the opportunities and challenges of the digital age.

Fr Nkongolo, thank you for your time and help in navigating these issues.

Fr. Paul Samasumo, these examples, comparisons, and statistics illustrate the multifaceted nature of AI and its implications for society, including the Church and education. I hope they provide a comprehensive perspective on these complex issues.

Read the original:
Artificial Intelligence (AI) and the Church of Africa: An interview with Fr Joel Nkongolo - Vatican News - English

A.I. Has a Measurement Problem – The New York Times

There's a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don't really know how smart they are.

That's because, unlike companies that make cars or drugs or baby formula, A.I. companies aren't required to submit their products for testing before releasing them to the public. There's no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

Instead, we're left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like "improved capabilities" to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

This might sound like a petty gripe. But I've become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

I can't count the number of times I've been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?


See the rest here:
A.I. Has a Measurement Problem - The New York Times

BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era … – PR Newswire

Trailblazing integrated AI platform ushers in an era of firm, affordable, clean energy.

WEST PALM BEACH, Fla., April 17, 2024 /PRNewswire/ -- BrightNight, the next-generation global renewable power producer built to deliver clean and dispatchable solutions, today introduced PowerAlpha, the proprietary software platform that uses cutting-edge Artificial Intelligence, data analytics, and cloud computing to design, optimize, and operate renewable power plants with industry-leading economics.

PowerAlpha was unveiled today at the "AI: Powering the New Energy Era" summit in Washington, D.C. Sponsored by BrightNight and hosted by the Foundation for American Science and Technology, it is the first independent industry summit in North America exclusively dedicated to exploring the influence of Artificial Intelligence on the energy sector. The summit brought together senior public officials, policymakers, energy industry leaders, experts, academia, and investors, including co-sponsors NVIDIA, IBM, C3.ai and Qcells, and representatives from the Department of Energy, U.S. Congress, Intel, Alphabet (Google), EEI, Bank of America, KPMG, EPRI, and other organizations.

BrightNight's PowerAlpha platform accelerates the global clean energy transition and decarbonization by ensuring the generation of reliable power at the lowest cost attainable in various geographies and power grids globally. Its benefits are realized across the design, optimization, and operation stages of a project's lifecycle.

PowerAlpha's unique dispatch hybrid controls interface can also be integrated with utilities' Energy Management Systems for even more operational efficiency and integrated asset management solutions. Also, with the increasing demand for more power to support the growing use of AI applications and corresponding data centers, PowerAlpha is leveraging AI capabilities to drive greater efficiencies, better designs, and higher capacity renewable projects.

Martin Hermann, CEO of BrightNight, said: "Through relentless innovation, the dream of a sustainable clean energy transition, where renewable power is affordable and reliable, is now within reach. The integration of Artificial Intelligence with renewable energy offers solutions to numerous challenges, from demand forecasting and smart grid management to labor shortages and the design of efficient power projects. I am proud that BrightNight is at the forefront of this technological revolution, bringing the clean energy transition closer to reality than ever before."

Kiran Kumaraswamy, CTO of BrightNight, said: "PowerAlpha provides a fully integrated platform that is highly differentiated in the marketplace in its ability to design, optimize, and operate hybrid renewable power projects with cutting edge capabilities enabled by AI and utilizing patent-pending algorithms. Our team is redefining industry standards, optimizing renewable projects globally to integrate new load like data centers and increase utilization of existing infrastructure."

BrightNight has delivered a number of PowerAlpha projects and use cases in the U.S. and globally, where it is helping customers optimize existing transmission infrastructure, integrate long- and short-duration storage solutions, increase capacity, and improve dispatchability of renewable assets, as well as repower existing projects.

BrightNight is also using PowerAlpha to develop projects with its partner in India and the Philippines, ACEN, one of Asia's leading renewable companies. ACEN Group CIO Patrice Clausse said, "PowerAlpha has been an important enabler for our investment decision-making. We feel confident leveraging its capabilities to simulate the performance of numerous hybrid renewable power plant configurations incorporating solar, wind, and energy storage and identify the most efficient configuration. This has enabled us to optimize our plant configuration and was a critical component in our recent tender wins."

For more information about PowerAlpha and BrightNight's 37 GW renewable power portfolio, please see http://www.brightnightpower.com/poweralpha or contact [emailprotected].

About BrightNight

BrightNight is the first global renewable integrated power company designed to provide utility and commercial and industrial customers with clean, dispatchable renewable power solutions. BrightNight works with customers across the U.S. and Asia Pacific to design, develop, and operate safe, reliable, large-scale renewable power projects optimized through its proprietary software platform PowerAlpha to better manage the intermittent nature of renewable energy. Its deep customer engagement process, team of proven power experts, and industry-leading solutions enable customers to overcome challenging energy sustainability standards, rapidly changing grid dynamics, and the transition away from fossil fuel generation. To learn more, please visit: www.brightnightpower.com

SOURCE BrightNight

See original here:
BrightNight Unveils PowerAlpha Artificial Intelligence Software Platform at the "AI: Powering the New Energy Era ... - PR Newswire

Artificial Intelligence Feedback on Physician Notes Improves Patient Care – NYU Langone Health

Artificial intelligence (AI) feedback improved the quality of physician notes written during patient visits, with better documentation improving the ability of care teams to make diagnoses and plan for patients future needs, a new study finds.

Since 2021, NYU Langone Health has been using pattern-recognizing, machine-learning AI systems to grade the quality of doctors' clinical notes. At the same time, NYU Langone created data informatics dashboards that monitor hundreds of measures of safety and the effectiveness of care. The informatics team over time trained the AI models to track in dashboards how well doctors' notes achieved the 5 Cs: completeness, conciseness, contingency planning, correctness, and clinical assessment.

Now, a new case study, published online April 17 in NEJM Catalyst Innovations in Care Delivery, shows how notes improved by AI, in combination with dashboard innovations and other safety initiatives, resulted in an improvement in care quality across four major medical specialties: internal medicine, pediatrics, general surgery, and the intensive care unit.

This includes improvements across the specialties of up to 45 percent in note-based clinical assessments (that is, determining diagnoses) and reasoning (making predictions when diagnoses are unknown). In addition, contingency planning to address patients future needs saw improvements of up to 34 percent.

Last year, NYU Langone added to this long-standing effort a newer form of AI that develops likely options for the next word in any sentence based on how billions of people used language on the internet over time. A result of this next-word prediction is that generative AI chatbots like GPT-4 can read physician notes and make suggestions. In a pilot within the case study, the research team supercharged their machine-learning AI model, which can only give physicians a grade on their notes, by integrating a chatbot that added an accurate written narrative of issues with any note.

The NYU Langone case study also showed that GPT-4 or other large language models could provide a method for assessing the 5 Cs across medical specialties without specialized training in each. Researchers say that the generalizability of GPT-4 for evaluating note quality supports its potential for application at many health systems.
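The study does not publish its integration code, but the general pattern is straightforward to sketch. A hypothetical example, assuming an OpenAI-style chat API; the model name, rubric prompt, and helper function are illustrative assumptions, not NYU Langone's actual implementation:

```python
# Hypothetical sketch: ask a GPT-4-style model for narrative feedback on a
# clinical note against the "5 Cs". Prompt and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = (
    "Review the clinical note below against the 5 Cs: completeness, "
    "conciseness, contingency planning, correctness, and clinical assessment. "
    "For each C, give a one-sentence narrative comment on how to improve."
)

def critique_note(note_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": note_text},
        ],
    )
    return response.choices[0].message.content

# Example with a toy, de-identified note:
print(critique_note("Pt seen for cough. Lungs clear. Plan: f/u prn."))
```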

"Our study provides evidence that AI can improve the quality of medical notes, a critical part of caring for patients," said lead study author Jonah Feldman, MD, medical director of clinical transformation and informatics within NYU Langone's Medical Center Information Technology (MCIT) Department of Health Informatics. "This is the first large-scale study to show how a healthcare organization can use a combination of AI models to give note feedback that significantly improves care quality."

Poor note quality in healthcare has been a growing concern since the enactment of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The act gave incentives to healthcare systems to switch from paper to electronic health records (EHR), enabling improved patient safety and coordination between healthcare providers.

A side effect of EHR adoption, however, has been that physician clinical notes are now four times longer on average in the United States than in other countries. Such "note bloat" has been shown to make it harder for collaborating clinicians to understand diagnoses described by their colleagues, say the study authors. Issues with note quality have been shown in the field to lead to missed diagnoses and delayed treatments, and there is no universally accepted methodology for measuring it. Further, evaluation of note quality by human peers is time-consuming and hard to scale up to the organizational level, the researchers say.

The effort captured in the new NYU Langone case study outlines a structured approach for organizational development of AI-based note quality measurement, a related system for process improvement, and a demonstration of AI-fostered clinician behavioral change in combination with other safety programs. The study also details how AI-generated note quality measurement helped to foster adoption of standard workflows, a significant driver for quality improvement.

Each of the four medical specialties that participated in the study achieved the institutional goal: more than 75 percent of inpatient history and physical exams and consult notes were being completed using standardized workflows that drove compliance with quality metrics. This represented an improvement from the previous share of less than 5 percent.

"Our study represents the founding stage of what will undoubtedly be a national trend to leverage cutting-edge tools to ensure clinical documentation of the highest quality, measurably and reproducibly," said study author Paul A. Testa, MD, JD, MPH, chief medical information officer for NYU Langone. "The clinical note can be a foundational tool, if accurate, accessible, and effective, to truly influence clinical outcomes by meaningfully engaging patients while ensuring documentation integrity."

Along with Dr. Feldman and Dr. Testa, the current studys authors from NYU Langone were Katherine Hochman, MD, MBA, Benedict Vincent Guzman, Adam J. Goodman, MD, and Joseph M. Weisstuch, MD.

Greg Williams, Phone: 212-404-3500, Gregory.Williams@NYULangone.org

Read more:
Artificial Intelligence Feedback on Physician Notes Improves Patient Care - NYU Langone Health

Small Businesses Face Uphill Battle in AI Race, Says AI Index Head – PYMNTS.com

Small and medium-sized businesses will struggle to keep pace with tech giants like OpenAI in developing their own artificial intelligence (AI) models, according to a new report from Stanford University.

In an interview, Nestor Maslej, the editor-in-chief of Stanford's newly released 2024 AI Index Report, highlighted the study's findings on the growing AI divide between large and small companies. While tech behemoths pour billions into AI R&D, smaller firms lack the resources and talent to compete head-on.

"A small or even medium-sized business will not be able to train a frontier foundation model that can compete with the likes of GPT-4, Gemini or Claude," Maslej said. "However, there are some fairly competent open-source models, such as Llama 2 and Mistral, that are freely accessible. A lot can be done with these kinds of open-source models, and they are likely to continue improving over time. In a few years, there may be an open, relatively low-parameter model that works as well as GPT-4 does today."

A study from PYMNTS last year highlighted that generative AI technologies such as OpenAI's ChatGPT could significantly enhance productivity, yet they also risk disrupting employment patterns.

A major takeaway from the report is the possible disconnect between AI benchmarks and actual business requirements in the real world.

"To me, it is less about improving the models on these tasks and more about asking whether the benchmarks we have are even well-suited to evaluate the business utility of these systems," Maslej stated. The current benchmarks may not be well-aligned with the real-world needs of businesses.

The report indicated that while private investment in AI generally declined last year, funding for generative AI experienced a dramatic surge, growing nearly eightfold from 2022 to $25.2 billion. Leading players in the generative AI industry, including OpenAI, Anthropic, Hugging Face and Inflection, reported substantial increases in their fundraising efforts.

Maslej highlighted that while the costs of adopting AI are considerable, they are overshadowed by the expenses associated with training the systems.

"Adoption is less of a cost problem because the real cost lies in training the systems. Most companies do not need to worry about training their own models and can instead adopt existing models, which are available either freely through open source or through relatively cost-accessible APIs," he explained.
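A minimal sketch of that adopt-rather-than-train approach, using the Hugging Face transformers library; the model name here is just one example of an open-weight model, and any similar model the hardware can hold would work the same way:

```python
# Hedged sketch: load an existing open-weight model instead of training one.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # one example open-weight model
)

prompt = "[INST] Draft a two-sentence product description for a coffee grinder. [/INST]"
output = generator(prompt, max_new_tokens=80, do_sample=False)
print(output[0]["generated_text"])
```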

The report also calls for standardized benchmarks in responsible AI development. Maslej imagines a future where common benchmarks allow businesses to easily compare and choose AI models that match their ethical standards. "Standardization would make it simpler for businesses to more confidently ascertain how various AI models compare to one another," he stated.

Balancing profit with ethical concerns emerges as a key challenge. The report shows that while many businesses are concerned about issues like privacy and data governance, fewer are taking concrete steps to mitigate these risks. "The more pressing question is whether businesses are actually taking steps to address some of these concerns," Maslej noted.

Measuring AI's impact on worker productivity across different industries remains complex. "It is possible to measure productivity within various industries; however, comparing productivity gains across industries is more challenging," Maslej said.

Looking ahead, the report highlights the need for businesses to navigate an increasingly complex regulatory landscape. On Tuesday, Utah Sen. Mitt Romney and several Senate colleagues unveiled a plan to guard against the potential dangers of AI. These include threats in biological, chemical, cyber and nuclear areas by increasing federal regulation of advanced technological developments.

Maslej emphasized the importance of staying vigilant: "Navigating this issue will be challenging. The regulatory standards for AI are still unclear."

As public awareness of AI grows, Maslej believes that businesses must address concerns about job displacement and data privacy. "As people become more aware of AI, how can businesses proactively address nervousness, especially regarding job displacement and data privacy?" he posed as a crucial question for the industry to consider.

The 2024 AI Index Report is meant to guide businesses and society in navigating the rapid advancements in artificial intelligence. Maslej concluded, "The AI landscape is evolving at an unprecedented pace, presenting both immense opportunities and daunting challenges."

Go here to see the original:
Small Businesses Face Uphill Battle in AI Race, Says AI Index Head - PYMNTS.com

NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses – PYMNTS.com

The National Security Agency (NSA) is sounding the alarm on the cybersecurity risks posed by artificial intelligence (AI) systems, releasing new guidance to help businesses protect their AI from hackers.

As AI increasingly integrates into business operations, experts warn that these systems are particularly vulnerable to cyberattacks. The NSA's Cybersecurity Information Sheet provides insights into AI's unique security challenges and offers steps companies can take to harden their defenses.

"AI brings unprecedented opportunity but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis," NSA Cybersecurity Director Dave Luber said Monday (April 15) in a news release.

The report suggested that organizations using AI systems should put strong security measures in place to protect sensitive data and prevent misuse. Key measures include conducting ongoing compromise assessments, hardening the IT deployment environment, enforcing strict access controls, using robust logging and monitoring, and limiting access to model weights.
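Two of those measures, limiting access to model weights and verifying them before use, can be sketched in a few lines of Python. The file path and expected digest below are placeholders, not values from the NSA guidance:

```python
# Hedged sketch of weight-file hardening: restrict permissions, then verify
# integrity before loading. Path and digest are placeholders.
import hashlib
import os
import stat

WEIGHTS_PATH = "/srv/models/classifier.safetensors"       # placeholder path
EXPECTED_SHA256 = "<known-good digest recorded at deployment time>"

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Limit access: owner read-only, nothing for group or other users.
os.chmod(WEIGHTS_PATH, stat.S_IRUSR)

# 2. Verify integrity before loading: refuse tampered weights.
if sha256_of(WEIGHTS_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Model weights failed integrity check; do not load.")
```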

"AI is vulnerable to hackers due to its complexity and the vast amounts of data it can process," Jon Clay, vice president of threat intelligence at the cybersecurity company Trend Micro, told PYMNTS. "AI is software, and as such, vulnerabilities are likely to exist which can be exploited by adversaries."

As reported by PYMNTS, AI is revolutionizing how security teams approach cyber threats by accelerating and streamlining their processes. Through its ability to analyze large datasets and identify complex patterns, AI automates the early stages of incident analysis, enabling security experts to start with a clear understanding of the situation and respond more quickly.

Cybercrime continues to rise with the increasing embrace of a connected global economy. According to an FBI report, the U.S. alone saw cyberattack losses exceed $10.3 billion in 2022.

AI systems are particularly prone to attacks due to their dependency on data for training models, according to Clay.

"Since AI and machine learning depend on providing and training data to build their models, compromising that data is an obvious way for bad actors to poison AI/ML systems," Clay said.

He emphasized the risks of these hacks, explaining that they can lead to stolen confidential data, harmful commands being inserted and biased results. These issues could upset users and even lead to legal problems.

Clay also pointed out the challenges in detecting vulnerabilities in AI systems.

"It can be difficult to identify how they process inputs and make decisions, making vulnerabilities harder to detect," he said.

He noted that hackers are looking for ways to get around AI security to change its results, and this method is being talked about more in secret online forums.

When asked about measures businesses can implement to enhance AI security, Clay emphasized the necessity of a proactive approach.

"It's unrealistic to ban AI outright, but organizations need to be able to manage and regulate it," he said.

Clay recommended adopting zero-trust security models and using AI to enhance safety measures. This method means AI can help analyze emotions and tones in communications and check web pages to stop fraud. He also stressed the importance of strict access rules and multi-factor authentication to protect AI systems from unauthorized access.

"As businesses embrace AI for enhanced efficiency and innovation, they also expose themselves to new vulnerabilities," Malcolm Harkins, chief security and trust officer at the cybersecurity firm HiddenLayer, told PYMNTS.

AI was "the most vulnerable technology deployed in production systems" because it was vulnerable at multiple levels, Harkins added.

Harkins advised businesses to take proactive measures, such as implementing purpose-built security solutions, regularly assessing the robustness of AI models, monitoring continuously, and developing comprehensive incident response plans.

"If real-time monitoring and protection were not in place, AI systems would surely be compromised, and the compromise would likely go unnoticed for extended periods, creating the potential for more extensive damage," Harkins said.

See more here:
NSA Warns of AI Cybersecurity Risks, Urges Businesses to Bolster Defenses - PYMNTS.com

Artificial intelligence studio for college students opens in Warner Robins at the VECTR Center – 13WMAZ.com

The AI-Enhanced Robotic Manufacturing training program offered at Central Georgia Technical College is preparing student veterans and active-duty service members.

Author: 13wmaz.com

Published: 8:43 AM EDT April 17, 2024

Updated: 8:43 AM EDT April 17, 2024

See original here:
Artificial intelligence studio for college students opens in Warner Robins at the VECTR Center - 13WMAZ.com