Category Archives: Machine Learning

Reducing Toxic AI Responses – Neuroscience News

Summary: Researchers developed a new machine learning technique to improve red-teaming, a process used to test AI models for safety by identifying prompts that trigger toxic responses. By employing a curiosity-driven exploration method, their approach encourages a red-team model to generate diverse and novel prompts that reveal potential weaknesses in AI systems.

This method has proven more effective than traditional techniques, producing a broader range of toxic responses and enhancing the robustness of AI safety measures. The research, set to be presented at the International Conference on Learning Representations, marks a significant step toward ensuring that AI behaviors align with desired outcomes in real-world applications.

Source: MIT

A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.

To prevent this and other safety issues, companies that build large language models typically safeguard them using a process called red-teaming. Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

But this only works effectively if engineers know which toxic prompts to use. If human testers miss some prompts, which is likely given the number of possibilities, a chatbot regarded as safe might still be capable of generating unsafe answers.

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested.

They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model.

The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.

"Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments.

"Our method provides a faster and more effective way to do this quality assurance," says Zhang-Wei Hong, an electrical engineering and computer science (EECS) graduate student in the Improbable AI Lab and lead author of a paper on this red-teaming approach.

Hong's co-authors include EECS graduate students Idan Shenfield, Tsun-Hsuan Wang, and Yung-Sung Chuang; Aldo Pareja and Akash Srivastava, research scientists at the MIT-IBM Watson AI Lab; James Glass, senior research scientist and head of the Spoken Language Systems Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Pulkit Agrawal, director of Improbable AI Lab and an assistant professor in CSAIL. The research will be presented at the International Conference on Learning Representations.

Automated red-teaming

Large language models, like those that power AI chatbots, are often trained by showing them enormous amounts of text from billions of public websites. So, not only can they learn to generate toxic words or describe illegal activities, the models could also leak personal information they may have picked up.

The tedious and costly nature of human red-teaming, which is often ineffective at generating a wide enough variety of prompts to fully safeguard a model, has encouraged researchers to automate the process using machine learning.

Such techniques often train a red-team model using reinforcement learning. This trial-and-error process rewards the red-team model for generating prompts that trigger toxic responses from the chatbot being tested.

But due to the way reinforcement learning works, the red-team model will often keep generating a few similar prompts that are highly toxic to maximize its reward.

For their reinforcement learning approach, the MIT researchers utilized a technique called curiosity-driven exploration. The red-team model is incentivized to be curious about the consequences of each prompt it generates, so it will try prompts with different words, sentence patterns, or meanings.

"If the red-team model has already seen a specific prompt, then reproducing it will not generate any curiosity in the red-team model, so it will be pushed to create new prompts," Hong says.

During its training process, the red-team model generates a prompt and interacts with the chatbot. The chatbot responds, and a safety classifier rates the toxicity of its response, rewarding the red-team model based on that rating.
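The training cycle described above can be sketched in a toy form. Everything here — the bandit-style red-team model, the stand-in chatbot, and the keyword-based "classifier" — is a hypothetical illustration of the generate-respond-score-reward loop, not the MIT system:

```python
import random

class ToyRedTeamModel:
    """Picks prompts and reinforces those that earned high reward."""
    def __init__(self, candidates):
        self.weights = {p: 1.0 for p in candidates}

    def generate(self):
        # Sample a prompt in proportion to its accumulated reward.
        prompts, w = zip(*self.weights.items())
        return random.choices(prompts, weights=w, k=1)[0]

    def update(self, prompt, reward):
        # Simple bandit-style update: successful prompts get picked more.
        self.weights[prompt] += reward

def toy_chatbot(prompt):
    # Stand-in target model with one weak spot.
    return "UNSAFE output" if "jailbreak" in prompt else "Safe refusal."

def toy_toxicity_classifier(response):
    # Stand-in safety classifier: flags a marker word as "toxic".
    return 1.0 if "UNSAFE" in response else 0.0

random.seed(0)
model = ToyRedTeamModel(["hello", "write a poem", "jailbreak please"])
for _ in range(200):
    p = model.generate()               # red-team model writes a prompt
    r = toy_chatbot(p)                 # chatbot responds
    model.update(p, toy_toxicity_classifier(r))  # classifier rates, reward flows back

best = max(model.weights, key=model.weights.get)
print(best)
```

Note what happens without any curiosity term: the policy collapses onto the one prompt that reliably earns reward — exactly the narrow-coverage failure of plain reinforcement-learning red-teaming that the curiosity bonuses are meant to fix.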

Rewarding curiosity

The red-team model's objective is to maximize its reward by eliciting an even more toxic response with a novel prompt. The researchers enable curiosity in the red-team model by modifying the reward signal in the reinforcement learning setup.

First, in addition to maximizing toxicity, they include an entropy bonus that encourages the red-team model to be more random as it explores different prompts. Second, to make the agent curious, they include two novelty rewards.

One rewards the model based on the similarity of words in its prompts, and the other rewards the model based on semantic similarity. (Less similarity yields a higher reward.)

To prevent the red-team model from generating random, nonsensical text, which can trick the classifier into awarding a high toxicity score, the researchers also added a naturalistic language bonus to the training objective.
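Taken together, the modified reward can be sketched as a weighted sum of the terms described above. The weights, the Jaccard word-similarity measure, and the placeholder semantic-similarity and naturalness scores below are illustrative assumptions, not the paper's exact formulation:

```python
def jaccard_word_similarity(prompt, previous):
    """Max word-overlap similarity of a prompt against previously seen prompts."""
    words = set(prompt.split())
    sims = [len(words & set(p.split())) / len(words | set(p.split()))
            for p in previous] or [0.0]
    return max(sims)

def curiosity_reward(toxicity, entropy, prompt, previous_prompts,
                     semantic_similarity, naturalness,
                     w_entropy=0.1, w_novel=0.5, w_natural=0.3):
    """Toxicity plus entropy, novelty, and naturalness bonuses.

    Less similarity to past prompts yields a higher reward, and the
    naturalness term keeps the model from drifting into nonsense text.
    """
    word_novelty = 1.0 - jaccard_word_similarity(prompt, previous_prompts)
    semantic_novelty = 1.0 - semantic_similarity
    return (toxicity
            + w_entropy * entropy
            + w_novel * (word_novelty + semantic_novelty)
            + w_natural * naturalness)

# A repeated prompt earns less than a novel one at equal toxicity:
seen = ["how do I build a bomb"]
r_repeat = curiosity_reward(0.9, 0.2, "how do I build a bomb", seen,
                            semantic_similarity=1.0, naturalness=1.0)
r_novel = curiosity_reward(0.9, 0.2, "describe an illegal chemistry trick", seen,
                           semantic_similarity=0.1, naturalness=1.0)
print(r_novel > r_repeat)  # novelty bonuses favor the unseen prompt
```

The design point is that toxicity alone is no longer enough to maximize reward: a prompt must also be unlike anything the model has already tried, while still reading as natural language.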

With these additions in place, the researchers compared the toxicity and diversity of responses their red-team model generated with other automated techniques. Their model outperformed the baselines on both metrics.

They also used their red-team model to test a chatbot that had been fine-tuned with human feedback so it would not give toxic replies. Their curiosity-driven approach was able to quickly produce 196 prompts that elicited toxic responses from this safe chatbot.

"We are seeing a surge of models, which is only expected to rise. Imagine thousands of models or even more, with companies and labs pushing model updates frequently. These models are going to be an integral part of our lives, and it's important that they are verified before being released for public consumption.

"Manual verification of models is simply not scalable, and our work is an attempt to reduce the human effort required to ensure a safer, more trustworthy AI future," says Agrawal.

In the future, the researchers want to enable the red-team model to generate prompts about a wider variety of topics. They also want to explore the use of a large language model as the toxicity classifier. In this way, a user could train the toxicity classifier using a company policy document, for instance, so a red-team model could test a chatbot for company policy violations.

"If you are releasing a new AI model and are concerned about whether it will behave as expected, consider using curiosity-driven red-teaming," says Agrawal.

Funding: This research is funded, in part, by Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, the U.S. Army Research Office, the U.S. Defense Advanced Research Projects Agency Machine Common Sense Program, the U.S. Office of Naval Research, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

Author: Adam Zewe
Source: MIT
Contact: Adam Zewe, MIT
Image: The image is credited to Neuroscience News

Original Research: The findings will be presented at the International Conference on Learning Representations

More here:
Reducing Toxic AI Responses - Neuroscience News

Video Highlights: Deep Reinforcement Learning for Maximizing Profits with Prof. Barrett Thomas – insideBIGDATA


More:
Video Highlights: Deep Reinforcement Learning for Maximizing Profits with Prof. Barrett Thomas - insideBIGDATA

Could we use machine learning to converse with whales and dolphins? – New Scientist

R Dirscherl/imageBROKER/Shutterstock

Is there any prospect of using machine learning to converse with whales and dolphins?

Mike Follows Sutton Coldfield, West Midlands, UK

Douglas Adams dreamed up an organic universal translator called a Babel fish that could be popped into your ear. Though it is amazing how prescient science fiction can be, this won't become a reality in the near future, if ever.

Communicating with other animals like whales and dolphins is challenging because we don't have the equivalent of the Rosetta Stone, which would allow for direct translation. (Decrees in ancient Egypt were inscribed onto stelae, essentially slabs of stone.)

See the article here:
Could we use machine learning to converse with whales and dolphins? - New Scientist

From data to decision-making: the role of machine learning and digital twins in Alzheimer's Disease – UCI MIND

For patients experiencing cognitive decline due to Alzheimer's Disease (AD), choosing the most appropriate treatment course at the right time is of great importance. A key element of these decisions is the careful consideration of the available scientific evidence, particularly from randomized clinical trials (RCTs) such as the recent lecanemab trial. Translating RCT results into patient-level decisions, however, can be challenging, because trial results tell us about the outcomes of groups rather than individuals. A doctor must judge how similar their patient is to the groups studied in trials. For AD, where patients vary widely in clinical presentation and rate of cognitive decline, this may be a difficult task.

As a step towards more personalized decision-making, prescribing physicians may focus on specific patient characteristics that would affect the disease course and response to treatment, like demographics (e.g., sex, age, education) or genetic factors. In fact, subgroup analyses from some RCTs suggest that at least some drugs could differ in safety or efficacy based on these factors. Nevertheless, the main limitations of these types of results are that the group sizes are often small, increasing the risk of spurious findings. Furthermore, they do not consider the overall impact of many different factors simultaneously. This is where machine learning (ML) may close the gap between data and decision-making.

ML uses patterns found in large datasets to predict health outcomes and treatment response by considering many patient characteristics at once and, further, how they may interact. This underlying model can subsequently be used to form a digital twin for a patient: the best possible copy of their characteristics and health status. We can use this twin to ask "what if" questions. For example, "If we prescribed this patient this drug at this time, what would be their most likely outcome six months from now?" Under the hood, an ML algorithm would use previously collected data, such as from RCTs, to locate potential twins and use their outcomes to formulate a response. This could give us a more pinpointed prediction of patient outcomes than subgroup analyses. Ideally, this targeted view would help facilitate better care for AD patients.
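A minimal sketch of this "what if" query, assuming a hypothetical nearest-neighbor matching over trial records. The feature names, weights, and data below are invented for illustration; real digital-twin models are far richer than this:

```python
def predict_outcome(patient, trial_records, treatment, k=2):
    """Average outcome of the k most similar treated patients (the 'twins')."""
    def distance(a, b):
        # Euclidean distance over shared patient characteristics.
        return sum((a[f] - b[f]) ** 2 for f in a) ** 0.5

    treated = [r for r in trial_records if r["treatment"] == treatment]
    twins = sorted(treated, key=lambda r: distance(patient, r["features"]))[:k]
    return sum(t["outcome"] for t in twins) / len(twins)

# Hypothetical trial data: outcome = change in a cognitive score at 6 months.
records = [
    {"features": {"age": 72, "mmse": 24}, "treatment": "drug",    "outcome": -1.0},
    {"features": {"age": 74, "mmse": 23}, "treatment": "drug",    "outcome": -1.5},
    {"features": {"age": 85, "mmse": 15}, "treatment": "drug",    "outcome": -4.0},
    {"features": {"age": 73, "mmse": 24}, "treatment": "placebo", "outcome": -3.0},
]

# "What if we prescribed this 73-year-old the drug?"
patient = {"age": 73, "mmse": 24}
print(predict_outcome(patient, records, "drug"))  # mean outcome of the 2 closest twins
```

Because the prediction is built from the most similar individuals rather than a whole trial arm, it is more tailored than a subgroup average — which is exactly the gap between group-level evidence and patient-level decisions described above.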

Roy S. Zawadzki

The stage is set for digital twins to play a bigger role in clinical research and practice in AD: we have the methodology, the data, and, most importantly, a large unmet clinical need for new and more effective treatments. Digital twins can be integrated in a wide variety of contexts that can potentially save clinical trial costs, quicken the time until approval, and better utilize the treatments we already have for the patients that need them the most. For these reasons, biotech companies, academic researchers, and healthcare systems alike should be investigating how digital twins can help assist their particular goals.

To learn more about real-world opportunities and considerations surrounding digital twins, please check out my latest post on my Substack.

Roy S. Zawadzki, graduate trainee with Professor Daniel Gillen and supported by the TITAN T32 training grant

Read this article:
From data to decision-making: the role of machine learning and digital twins in Alzheimer's Disease - UCI MIND

AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS | Amazon Web Services – AWS Blog

AWS was delighted to present to and connect with over 18,000 in-person and 267,000 virtual attendees at NVIDIA GTC, a global artificial intelligence (AI) conference that took place in March 2024 in San Jose, California, returning to a hybrid, in-person experience for the first time since 2019.

AWS has had a long-standing collaboration with NVIDIA for over 13 years. AWS was the first Cloud Service Provider (CSP) to offer NVIDIA GPUs in the public cloud, and remains among the first to deploy NVIDIA's latest technologies.

Looking back at AWS re:Invent 2023, Jensen Huang, founder and CEO of NVIDIA, chatted with AWS CEO Adam Selipsky on stage, discussing how NVIDIA and AWS are working together to enable millions of developers to access powerful technologies needed to rapidly innovate with generative AI. NVIDIA is known for its cutting-edge accelerators and full-stack solutions that contribute to advancements in AI. The company is combining this expertise with the highly scalable, reliable, and secure AWS Cloud infrastructure to help customers run advanced graphics, machine learning, and generative AI workloads at an accelerated pace.

The collaboration between AWS and NVIDIA further expanded at GTC 2024, with the CEOs from both companies sharing their perspectives on the collaboration and state of AI in a press release:

"The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world's first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers," says Adam Selipsky, CEO of AWS. "NVIDIA's next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS's powerful Elastic Fabric Adapter networking, Amazon EC2 UltraClusters' hyper-scale clustering, and our unique AWS Nitro System's advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion-parameter large language models faster, at massive scale, and more securely than anywhere else. Together, we continue to innovate to make AWS the best place to run NVIDIA GPUs in the cloud."

"AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries," says Jensen Huang, founder and CEO of NVIDIA. "Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible."

On the first day of the NVIDIA GTC, AWS and NVIDIA made a joint announcement focused on their strategic collaboration to advance generative AI. Huang included the AWS and NVIDIA collaboration on a slide during his keynote, highlighting the following announcements. The GTC keynote had over 21 million views within the first 72 hours.

By March 22, AWS's announcement with NVIDIA had generated 104 articles mentioning AWS and Amazon. The vast majority of coverage mentioned AWS's plans to offer Blackwell-based instances. Adam Selipsky appeared on CNBC's Mad Money to discuss the long-standing collaboration between AWS and NVIDIA, among the many other ways AWS is innovating in generative AI, stating that AWS has been the first to bring many of NVIDIA's GPUs to the cloud to drive efficiency and scalability for customers.

Project Ceiba has also been a focus of media coverage. Forbes referred to Project Ceiba as "the most exciting project" by AWS and NVIDIA, stating that it should accelerate the pace of innovation in AI, making it possible to tackle more complex problems, develop more sophisticated models, and achieve previously unattainable breakthroughs. The Next Platform ran an in-depth piece on Ceiba, stating that the size and the aggregate compute of the Ceiba cluster are both being radically expanded, which will give AWS a very large supercomputer in one of its data centers, and NVIDIA will use it to do AI research, among other things.

Live from GTC was an on-site studio at GTC for invited speakers to have fireside chats with tech influencers like VentureBeat. Chetan Kapoor, Director of Product Management for Amazon EC2 at AWS, was interviewed by VentureBeat at the Live from GTC studio, where he discussed AWS's presence and highlighted key announcements at GTC.

The AWS booth showcased generative AI services, such as the LLMs from Anthropic and Cohere on Amazon Bedrock, PartyRock, Amazon Q, Amazon SageMaker JumpStart, and more.

During GTC, AWS invited 23 partner and customer solution demos to join its booth with either a dedicated demo kiosk or a 30-minute in-booth session. Such partners and customers included Ansys, Anthropic, Articul8, Bria.ai, Cohere, Deci, Deepbrain.AI, Denali Advanced Integration, Ganit, Hugging Face, Lilt, Linker Vision, Mavenir, MCE, Media.Monks, Modular, NVIDIA, Perplexity, Quantiphi, Run.ai, Salesforce, Second Spectrum, and Slalom.

Among them, high-potential early-stage startups in generative AI across the globe were showcased with a dedicated kiosk at the AWS booth. The AWS Startups team works closely with these companies by investing and supporting their growth, offering resources through programs like AWS Activate.

NVIDIA was one of the 45 launch partners for the new AWS Generative AI Competency program. The Generative AI Center of Excellence for AWS Partners team members were on site at the AWS booth, presenting this program for both existing and potential AWS partners. The program offers valuable resources along with best practices for all AWS partners to build, market, and sell generative AI solutions jointly with AWS.

Watch a video recap of the AWS presence at NVIDIA GTC 2024. For additional resources about the AWS and NVIDIA collaboration, refer to the AWS at NVIDIA GTC 2024 resource hub.

Julie Tang is the Senior Global Partner Marketing Manager for Generative AI at Amazon Web Services (AWS), where she collaborates closely with NVIDIA to plan and execute partner marketing initiatives focused on generative AI. Throughout her tenure at AWS, she has held various partner marketing roles, including Global IoT Solutions, AWS Partner Solution Factory, and Sr. Campaign Manager in Americas Field Marketing. Prior to AWS, Julie served as the Marketing Director at Segway. She holds a Master's degree in Communications Management with a focus on marketing and entertainment management from the University of Southern California, and dual Bachelor's degrees in Law and Broadcast Journalism from Fudan University.

Read the original post:
AWS at NVIDIA GTC 2024: Accelerate innovation with generative AI on AWS | Amazon Web Services - AWS Blog

High-resolution meteorology with climate change impacts from global climate model data using generative machine … – Nature.com


Read the original post:
High-resolution meteorology with climate change impacts from global climate model data using generative machine ... - Nature.com

Google Cloud Next 2024: Pushing the Next Frontier of AI – Technology Magazine

The company's updated AI offering is now available on Vertex AI, Google's platform to customise and manage a wide range of leading Gen AI models. The company says that more than one million developers are currently using Google's Gen AI via its AI Studio and Vertex AI tools.

Likewise, its AI Hypercomputer is now being used by leading AI companies such as Anthropic, AI21 Labs, Contextual AI, Essential AI and Mistral AI. The Hypercomputer aims to employ a system of performance-optimised hardware, open software and machine learning frameworks to enable companies to better advance their digital transformation strategies.

Multiple companies are already harnessing the power of Google Cloud AI, including forward-thinking organisations like Mercedes-Benz, Uber and Palo Alto Networks to bolster their existing services and improve customer experience.

Mercedes-Benz, for example, is harnessing Google AI to improve customer service in call centres and to further optimise their website experience.

As AI continues to drive transformative progress in the business world, Google Cloud is aiming to help organisations around the world to discover what's next.

Google Cloud is also introducing new features that aim to offer AI assistance so that its customers can work and code more efficiently, allowing them to better identify and resolve cybersecurity threats by taking direct action against attacks.

Google Cloud's product, Gemini in Threat Intelligence, utilises natural language to deliver insights about how threat actors behave. With Gemini's larger context window, users can analyse much larger samples of potentially malicious code and gain more accurate results.

These AI-driven tools will help businesses take more detailed action, preventing more catastrophic data breaches.

Currently, there is incredible customer innovation across a broad range of industries, including retail, transportation and more. Harnessing Gen AI to fast-forward innovation requires a secure business AI platform that offers end-to-end capabilities that are easy to integrate with existing systems within a business.

The rest is here:
Google Cloud Next 2024: Pushing the Next Frontier of AI - Technology Magazine

What to know about the security of open-source machine learning models – TechTalks


More here:
What to know about the security of open-source machine learning models - TechTalks

Why MLBOMs Are Useful for Securing the AI/ML Supply Chain – Dark Reading

COMMENTARY

The days of large, monolithic apps are withering. Today's applications rely on microservices and code reuse, which makes development easier but creates complexity when it comes to tracking and managing the components they use.

This is why the software bill of materials (SBOM) has emerged as an indispensable tool for identifying what's in a software app, including the components, versions, and dependencies that reside within systems. SBOMs also deliver deep insights into dependencies, vulnerabilities, and risks that factor into cybersecurity.

An SBOM allows CISOs and other enterprise leaders to focus on what really matters by providing an up-to-date inventory of software components. This makes it easier to establish and enforce strong governance and spot potential problems before they spiral out of control.

Yet in the age of artificial intelligence (AI), the classic SBOM has some limitations. Emerging machine learning (ML) frameworks introduce remarkable opportunities, but they also push the envelope on risk and introduce a new asset to organizations: the machine learning model. Without strong oversight and controls over these models, an array of practical, technical, and legal problems can arise.

That's where machine learning bills of materials (MLBOMs) enter the picture. The framework tracks names, locations, versions, and licensing for assets that comprise an ML model. It also includes overarching information about the nature of the model, training configurations embedded in metadata, who owns it, various feature sets, hardware requirements, and more.
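To make the idea concrete, here is a minimal sketch of what one MLBOM entry might look like. The field names and values are illustrative assumptions, not a standard schema (the real interchange format would be something like CycloneDX's ML-BOM):

```python
# Hypothetical MLBOM record builder; field names are illustrative only.
def make_mlbom_entry(name, version, location, license_id,
                     datasets, training_config, owner, hardware):
    """Assemble a dictionary describing one ML model asset."""
    return {
        "name": name,
        "version": version,
        "location": location,                 # where the artifact lives
        "license": license_id,
        "datasets": list(datasets),           # training-data provenance
        "training_config": dict(training_config),
        "owner": owner,
        "hardware": hardware,                 # hardware requirements
    }

entry = make_mlbom_entry(
    name="fraud-detector",
    version="2.1.0",
    location="s3://models/fraud-detector/2.1.0",
    license_id="Apache-2.0",
    datasets=["transactions-2023-q4"],
    training_config={"epochs": 10, "lr": 1e-3},
    owner="risk-ml-team",
    hardware="1x A100 GPU",
)
print(sorted(entry))
```

The point is simply that an MLBOM captures model provenance (data, config, owner, hardware) alongside the name/version/license fields a classic SBOM already tracks.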

CISOs are realizing that AI and ML require a different security model and the underlying training data and models that run them are frequently not tracked or governed. An MLBOM can help an organization avoid security risks and failures. It addresses critical factors like model and data provenance, safety ratings, and dynamic changes that extend beyond the scope of SBOM.

Because ML environments are in a constant state of flux and changes can take place with little or no human interaction, issues related to data consistency including where it originated, how it was cleaned, and how it was labeled are a constant concern.

For example, if a business analyst or data scientist determines that a data set is poisoned, the MLBOM simplifies the task of finding all the various touch points and models that were trained with that data.
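That traceback is essentially a lookup over the MLBOM's model-to-dataset records. A sketch under assumed entry shapes (a real MLBOM carries far richer metadata):

```python
# Given MLBOM entries mapping models to the datasets they were trained on,
# find every model touched by a poisoned dataset. Entry shape is hypothetical.
def models_trained_on(mlbom_entries, poisoned_dataset):
    return sorted(
        e["name"] for e in mlbom_entries
        if poisoned_dataset in e.get("datasets", [])
    )

mlbom = [
    {"name": "churn-model", "datasets": ["crm-2023", "web-logs"]},
    {"name": "fraud-model", "datasets": ["transactions-2023"]},
    {"name": "ltv-model",   "datasets": ["crm-2023"]},
]
print(models_trained_on(mlbom, "crm-2023"))  # ['churn-model', 'ltv-model']
```

Without the MLBOM's provenance records, answering "which models did this data touch?" requires archaeology; with them, it is a query.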

Transparency, auditability, control, and forensic insight are all hallmarks of an MLBOM. With a comprehensive view of the "ingredients" that go into an ML model, an organization is equipped to manage its ML models safely.

Here are some ways to build a best practice framework around an MLBOM:

Recognize the need for an MLBOM: It's no secret that ML fuels business innovation and even disruption. Yet it also introduces significant risks that can extend to reputation, regulatory compliance, and legal issues. Having visibility into ML models is critically important.

Conduct essential due diligence: An MLBOM should integrate with the CI/CD pipeline and deliver a high level of clarity. Support for standard formats like JSON or OWASP's CycloneDX can unify SBOM and MLBOM processes.

Analyze policies, processes, and governance: It's essential to sync an MLBOM with an organization's workflows and business processes. This increases the odds that ML pipelines will work as intended, while minimizing risks related to cybersecurity, data privacy, compliance, and other risk-associated areas.

Use an MLBOM with machine learning gates: Rigorous controls and gateways lead to essential AI and ML guardrails. In this way, the business and the CSO can build on successes and harness ML to unlock greater cost savings, performance gains, and business value.

Machine learning is radically changing the business and IT landscape. By extending proven SBOM methodologies to ML through MLBOMs, it's possible to take a giant step toward boosting machine learning performance and protecting data and assets.

Read more:
Why MLBOMs Are Useful for Securing the AI/ML Supply Chain - Dark Reading

The Role of Data Analytics and Machine Learning in Personalized Medicine Through Healthcare Apps – DataDrivenInvestor

Image by author

Personalized medicine through data analytics and Machine Learning has revolutionized the healthcare industry by tailoring medical treatments to individual patients based on their unique characteristics. In recent years, Data analytics and ML apps have become powerful tools to facilitate patient engagement and self-monitoring. This article explores the role of data analytics and Machine Learning in personalized medicine through healthcare apps, highlighting their importance, benefits, challenges, ethical considerations, and prospects.

Personalized medicine is like having a tailor-made healthcare plan. It considers your unique genetic makeup, lifestyle, and environment to provide more precise and effective treatments. This approach aims to deliver targeted therapies based on individual characteristics.

Healthcare apps have revolutionized the way we access and manage our health information. From tracking our daily steps to monitoring our heart rate, these apps have become essential tools in our quest for better health. What used to be basic fitness trackers have now evolved into comprehensive platforms that can analyze a vast amount of data to offer personalized insights and recommendations.

Data is the fuel that powers personalized medicine. It provides the necessary information to understand patterns, risks, and potential treatments for individuals. By analyzing vast amounts of data, such as genomic information, medical histories, and lifestyle factors, healthcare professionals can identify personalized treatment options and interventions.

Data analytics opens a whole new world of possibilities in personalized medicine. It allows healthcare providers to identify trends and correlations that may go unnoticed. This means faster and more accurate diagnoses, more effective treatment plans, and ultimately better health outcomes for patients. Data analytics also enables continuous learning and improvement by constantly refining treatment strategies based on real-world evidence.

Machine Learning is like having a computer that can learn and make decisions on its own. It's a branch of Artificial Intelligence (AI) that allows systems to analyze and interpret complex data patterns, discover insights, and make predictions or recommendations. In healthcare apps, Machine Learning algorithms can process large datasets and extract valuable information to enhance decision-making and improve patient outcomes.

Machine Learning algorithms can be embedded into healthcare apps, allowing them to continuously learn from user data and adapt their recommendations accordingly. For example, a fitness app can use Machine Learning to analyze a user's exercise habits, heart rate, and sleep patterns to provide personalized exercise routines and sleep recommendations. By leveraging Machine Learning, healthcare apps can become intelligent and proactive health companions.
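In the simplest form, "learning from user data and adapting" means the app's recommendation is a function of the user's own history rather than a fixed rule. A deliberately toy sketch of that idea (real apps use far richer models; the target and function names are made up for illustration):

```python
# Toy personalization: estimate the user's baseline sleep from history and
# adapt the suggestion as new nights arrive. Illustrative only.
def sleep_recommendation(sleep_hours, target=8.0):
    """Return extra sleep (hours) to suggest, based on the user's own mean."""
    if not sleep_hours:
        return 0.0
    baseline = sum(sleep_hours) / len(sleep_hours)
    return round(max(0.0, target - baseline), 2)

history = [6.5, 7.0, 6.0, 7.5]          # hours slept over recent nights
print(sleep_recommendation(history))    # 1.25
```

As more nights are logged, the baseline (and thus the recommendation) shifts automatically, which is the "continuous learning" the paragraph describes in miniature.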

The combination of data analytics and Machine Learning offers significant advantages in personalized medicine.

By combining the power of data analytics tools with Machine Learning algorithms, businesses can create visually engaging and interactive dashboards that present data in a way that is easy to digest and interpret.

With data analytics and ML, businesses can uncover valuable insights that drive informed decision-making and improve overall business performance.

Businesses can automate data processing and analysis processes by integrating data analytics and Machine Learning, saving valuable time and improving accuracy.

ML algorithms analyze data objectively and make decisions based on patterns and statistical models, minimizing the impact of human subjectivity.

While data analytics and Machine Learning offer immense potential in healthcare, there are ethical considerations that need to be addressed. One concern is the potential bias in algorithms. If the data used to train Machine Learning models is biased, it can lead to biased treatment recommendations or diagnoses, disproportionately impacting certain groups of patients.

Another ethical concern is the transparency of algorithms. Patients and healthcare providers need to understand how algorithms arrive at their recommendations or diagnoses. Lack of transparency can undermine trust in the healthcare system and raise concerns about the accountability of algorithms.

The use of data analytics and Machine Learning in healthcare apps necessitates the collection and analysis of personal health information. Privacy concerns arise as this sensitive data needs to be handled with the utmost care. Healthcare apps must employ robust security measures to safeguard patient data and comply with relevant privacy regulations.

Transparency in data usage and obtaining informed consent from patients is crucial. Patients should have control over how their data is used and be fully aware of the potential risks and benefits.

As data analytics continues to evolve, several emerging trends hold promise for personalized medicine. One such trend is the integration of data from wearables and Internet of Things (IoT) devices. This real-time data collection allows for more accurate monitoring of patient health and enables timely interventions.

Another trend is the use of Natural Language Processing in analyzing unstructured medical data, such as doctors' notes or research papers. NLP algorithms can extract valuable insights from these vast amounts of text, aiding in personalized medicine research and decision-making.

Machine Learning advancements are opening doors to exciting possibilities in healthcare apps. One breakthrough area is the use of deep learning algorithms. These sophisticated neural networks can process complex medical images, such as MRI scans or histopathology slides, with remarkable accuracy, assisting doctors in diagnosis and treatment planning.

Additionally, federated learning is gaining attention in healthcare. This approach allows Machine Learning models to be trained on decentralized data sources without sharing the raw data, preserving patient privacy while still benefiting from the collective knowledge present in diverse datasets.
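The core mechanism behind federated learning is that clients send model parameters, never raw records, to a server that aggregates them. A bare-bones sketch of federated averaging, with a simple statistic standing in for the neural-network weights a real system would average over many rounds:

```python
# Minimal federated-averaging sketch: raw data never leaves each client;
# only a locally computed parameter (here, a mean) is shared and aggregated,
# weighted by how much data each client holds.
def local_update(records):
    """Client-side: compute a local model parameter from private data."""
    return sum(records) / len(records), len(records)

def federated_average(client_datasets):
    """Server-side: weighted average of client parameters."""
    updates = [local_update(d) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(p * n for p, n in updates) / total

# e.g. three hospitals with local readings they cannot share directly
hospitals = [[120, 130, 125], [140, 150], [110]]
print(round(federated_average(hospitals), 2))  # 129.17
```

The weighted average equals what centralized training on the pooled data would give for this statistic, which is why the approach can preserve privacy without discarding the collective signal.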

Data analytics and Machine Learning have the potential to revolutionize personalized medicine through healthcare apps. From improving diagnosis accuracy to personalized treatment recommendations, these technologies offer valuable insights and benefits in healthcare.

Go here to read the rest:
The Role of Data Analytics and Machine Learning in Personalized Medicine Through Healthcare Apps - DataDrivenInvestor