Category Archives: Artificial Intelligence

Riding the whirlwind: BMJ’s policy on artificial intelligence in … – The BMJ

BMJ will consider content created with artificial intelligence only if the use is clearly described and reasonable

Artificial intelligence (AI) can rival human knowledge, accuracy, speed, and choices when carrying out tasks. The latest generative AI tools are trained on large quantities of data and use machine learning techniques such as logical reasoning, knowledge representation, planning, and natural language processing. They can produce text, code, and other media such as graphics, images, audio, or video. Large language models (LLMs), which are a form of AI, are able to search, extract, generate, summarise, translate, and rewrite text or code rapidly. They can answer complex questions (called prompts) at search engine speeds that the human mind cannot match.

AI is transforming our world, and we are not yet fully able to comprehend or harness its power. It is a whirlwind sweeping up all before it. Availability of LLMs such as ChatGPT, and growing awareness of their capabilities, is challenging many industries, including academic publishing. The potential benefits for content creation are clear, such as the opportunity to overcome language barriers. However, there is also potential for harm: text produced by LLMs may be inaccurate, and references can be unreliable. Questions remain about the degree to which AI can be accountable and responsible for content, the originality and quality of content that is produced, and the potential for bias, misconduct, and misinformation.

BMJ Group's policy on the use of AI in producing and disseminating content recognises the potential for both benefit and harm and aims primarily for transparency. The policy allows editors to judge the suitability of authors' use of AI within an overarching governance framework (https://authors.bmj.com/policies/ai-use). BMJ journals will consider content prepared using AI as long as use of the technology is declared and described in detail so that editors, reviewers, and readers can assess suitability and reasonableness. Where use of AI is not declared, we reserve the right to decline to publish submitted content or retract content.

With greater experience and understanding of AI, BMJ may specify circumstances in which particular uses are or are not appropriate. We appreciate that nothing stands still for long with AI; editing tasks enabled by AI embedded in word processing programmes or their extensions to improve language, grammar, and translation will become commonplace and are more likely to be acceptable than use of AI to complete tasks linked to authorship criteria.1 These tasks include contributing to the conception and design of the proposed content; acquisition, analysis, or interpretation of data; and drafting or critically reviewing the work.

BMJ's policy requires authors to declare all use of AI in the contributorship statement. AI cannot be an author as defined by BMJ, the International Committee of Medical Journal Editors (ICMJE), or the Committee on Publication Ethics (COPE) criteria, because it cannot be accountable for submitted work.1 The guarantor or lead author remains responsible and accountable for content, whether or not AI was used.

BMJ's policy mirrors that of organisations such as the World Association of Medical Editors (WAME),2 COPE,3 and other publishers. All content will be held to the same standard, whether produced by external authors or by editors and staff linked to BMJ. Our policy on the use of AI for drafting peer review comments and any other advisory material is similar. All use must be declared, and editors will judge the appropriateness of that use. Importantly, reviewers may not enter unpublished manuscripts or information about them into publicly available AI tools.

It is imperative for journals and publishers to work with AI, learn from and evaluate new initiatives in a meaningful but pragmatic way, and devise or endorse policies for the use of AI in the publication process. The UK's Science, Technology and Medicine Integrity Hub (a membership organisation for the publishing industry which aims to advance trust in research)4 outlined three main areas that could be improved by AI: supporting specific services, such as screening for substandard content, improving language, or translating or summarising content for diverse audiences; searching for and categorising content to enhance content tagging or labelling and the production of metadata; and improving user experience and dissemination through curating or recommending content.

BMJ will carefully assess the effect of AI on its broader business and will publicly report use where appropriate. New ideas for trialling AI within BMJs publishing workflows will be assessed on an individual basis, and we will consider factors such as efficiency, transparency and accountability, quality and integrity, privacy and security, fairness, and sustainability.

AI presents publishers with serious and potentially existential challenges, but the opportunities are also revolutionary. Journals and publishers must maximise these opportunities while limiting harms. We will continue to review our policy given the rapid and unpredictable evolution of AI technologies. AI is a whirlwind capable of destroying everything in its path. It can't be tamed, but our best hope is to learn how to ride the whirlwind and direct the storm.

With thanks to Theo Bloom and the other editorial staff and editors at BMJ who contributed to the development of the policy.

Read the original:
Riding the whirlwind: BMJ's policy on artificial intelligence in ... - The BMJ

AG Nessel Urges Congress to Study Artificial Intelligence and Its … – Michigan Courts

LANSING – As part of a bipartisan 54-state and territory coalition, Michigan Attorney General Dana Nessel joined a letter urging Congress to study how artificial intelligence (AI) can be and is being used to exploit children through child sexual abuse material (CSAM) and to propose legislation to protect children from those abuses.

"Artificial Intelligence poses a serious threat to our children, and abusers are already taking advantage," Nessel said. "Our laws and regulations must catch up to the technology being used by those who prey on our children. I stand with my colleagues in asking Congress to prioritize examining the dangers posed by AI-generated child sexual abuse material."

The dangers of AI as it relates to CSAM consist of three main categories: a real child who has not been physically abused, but whose likeness is being digitally altered in a depiction of abuse; a real child who has been physically abused and whose likeness is being digitally recreated in other depictions of abuse; and a child who does not exist, but is being digitally created in a depiction of abuse that feeds the market for CSAM.

The letter states that AI can rapidly and easily create 'deepfakes' by studying real photographs of abused children to generate new images showing those children in sexual positions. This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.

Attorney General Nessel and the rest of the coalition ask Congress to form a commission specifically to study how AI can be used to exploit children and to act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM.

The letter continues: "We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act."

Besides Michigan, the letter, which was co-led by South Carolina, Mississippi, North Carolina, and Oregon in a bipartisan effort, was joined by Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, the District of Columbia, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Northern Mariana Islands, Ohio, Oklahoma, Pennsylvania, Puerto Rico, Rhode Island, South Dakota, Tennessee, Texas, Utah, Vermont, Virgin Islands, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.

You can read the full letter here.

###

Read more here:
AG Nessel Urges Congress to Study Artificial Intelligence and Its ... - Michigan Courts

WATCH | How will Artificial Intelligence shape the automotive … – News24

Just how impressive is the latest artificial intelligence technology when applied to cars and mobility? The team from Deutsche Welle brings you the top four AI innovations from the recent Internationale Automobil-Ausstellung (International Motor Show Germany), better known as the IAA 2023.

Some fascinating new technology has been showcased at the recent auto show in Germany, ranging from Volkswagen's self-learning vehicles to Vera the AI assistant, and Chinese auto manufacturers with a camera that can see your health!


Go here to see the original:
WATCH | How will Artificial Intelligence shape the automotive ... - News24

Artificial intelligence in nursing education 1: strengths and … – Nursing Times

Artificial intelligence is expanding rapidly. This article looks at the strengths and weaknesses of ChatGPT and other generative AI tools in nursing education

Artificial intelligence (AI) refers to the application of algorithms and computational models that enable machines to exhibit cognitive abilities including learning, reasoning, pattern recognition and language processing that are similar to those of humans. By analysing vast amounts of data (text, images, audio and video), sophisticated digital tools, such as ChatGPT, have surpassed previous forms of AI and are now being used by students and educators in universities worldwide. Nurse educators could use these tools to support student learning, engagement and assessment. However, there are some drawbacks of which nurse educators and students should be aware, so they understand how to use AI tools appropriately in professional practice. This, the first of two articles on AI in nursing education, discusses the strengths and weaknesses of generative AI and gives recommendations for its use.

Citation: O'Connor S et al (2023) Artificial intelligence in nursing education 1: strengths and weaknesses. Nursing Times [online]; 119: 10.

Authors: Siobhan O'Connor is senior lecturer, Emilia Leonowicz is nursing student, both at University of Manchester; Bethany Allen is digital nurse implementer, The Christie NHS Foundation Trust; Dominique Denis-Lalonde is nursing instructor, University of Calgary, Canada.

Artificial intelligence (AI) comprises advanced computational techniques, including algorithms, that are designed to process and analyse various forms of data, such as written text or audio and visual information like images or videos. These algorithms rapidly evaluate vast quantities of digital data to generate mathematical models that predict the likelihood of particular outcomes. Such predictive models serve as the foundation for more advanced digital tools, including chatbots that simulate human-like conversation and cognition.

AI tools have the potential to improve decision making, facilitate learning and enhance communication (Russell and Norvig, 2021). However, it is important to note that these AI systems are not sentient or conscious; they lack understanding or emotional response to the inputs they receive or the outputs they generate, as their primary function is to serve as sophisticated predictive instruments.

AI technology has existed for some time in many everyday contexts, such as recommendations for content on social media platforms, finding information and resources via internet search engines, email spam filtering, grammar checks in document-writing software, and personal virtual assistants like Siri (iPhone) or Cortana (Microsoft), among others. The latest evolution of AI is a significant leap from these previous versions and warrants additional scrutiny and discussion.

The Joint Information Systems Committee (JISC), the UK's digital, data and technology agency that focuses on tertiary education, published a report on AI in education in 2022. JISC (2022) explains how AI could help improve different aspects of education for teaching staff and learners. As an example, AI could be used to create more adaptive digital learning platforms by analysing data from students who access educational material online. Data on whether students choose to read an article, watch a video or post on a discussion forum could predict what kind of support and educational resources they need and like. This type of learning analytics could be used to improve the design of a digital learning platform and curricula on different topics to suit each individual student.
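To make the kind of learning-analytics prediction JISC describes more concrete, here is a minimal sketch in Python. The engagement features, the toy data and the choice of a simple logistic regression are assumptions made for illustration; they are not taken from the JISC report, and a real platform would use its own engagement logs and a properly validated model.

```python
# Minimal sketch of a learning-analytics predictor of the kind JISC describes.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical engagement features per student:
# [articles_read, videos_watched, forum_posts]
X = np.array([
    [12, 5, 8],
    [2, 1, 0],
    [7, 9, 3],
    [1, 0, 1],
    [15, 12, 10],
    [3, 2, 0],
])
# Hypothetical label: 1 = student later needed extra support, 0 = did not
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Flag a new student with low engagement for early follow-up
new_student = np.array([[2, 0, 1]])
print("Probability of needing extra support:",
      model.predict_proba(new_student)[0, 1])
```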

JISC also set up a National Centre for AI to support teachers and students to use AI effectively, in line with the government's AI strategy (Office for Artificial Intelligence, 2021). The centre holds a range of publications and interactive demonstrations on different applications of AI, such as chatbots, augmented or virtual reality, automated image classification and speech analysis.

JISC also has an interactive map of UK institutions that are piloting AI in education in practical ways. In addition, there is a blog to follow, and many events that focus on AI in education, which are free to attend. Recordings of these events are also available on the JISC website (JISC, 2023).

A cutting-edge type of AI is generative AI, which uses algorithms and mathematical models to create text, images, video or a mixture of media when prompted to do so by a human user. One promising application of generative AI is a chatbot or virtual conversational agent that is powered by large language models.

Chatbots can generate a sequence of words that a typical human interaction is likely to create, and they can perform this function surprisingly accurately as they have been trained using a large dataset of text. Chatbots have been trialled in university education for a range of teaching and learning purposes.

Despite the benefits of these chatbots, they are not yet widely used in universities as they have several limitations. Some problems include: the accuracy of responses they provide; the privacy of inputted data; and negative opinions of the technology among teachers and students, who prefer face-to-face interactions and fear the potential implications of AI (Choi et al, 2023; Wollny et al, 2021).

A chatbot called ChatGPT (version 3.5) was launched in November 2022 by a commercial company called OpenAI. GPT stands for generative pre-trained transformer, which is powered by a family of large language models. ChatGPT went viral in early 2023, with millions of users around the world (Dwivedi et al, 2023). The dataset for ChatGPT 3.5 came from websites such as Wikipedia and Reddit, as well as online books, research articles and a range of other sources. This caused concern about how much trust to place in the chatbot's responses, as some of these data sources may contain inaccuracies or gender, racial and other biases (van Dis et al, 2023).

Understandably, educators and students at schools and universities have been conflicted about the use of generative AI tools. Some institutions have tried to ban the use of ChatGPT on campus, fearing students would use it to write and submit essays that plagiarise other people's work (Yang, 2023).

In an attempt to identify AI use, detection tools, such as GPTZero, have been created, as well as tools by educational technology companies, such as Turnitin and Cadmus (Cassidy, 2023). These could be integrated into learning management systems, like Blackboard, Canvas or Moodle, to detect AI writing and deter academic misconduct. However, detection tools may not be able to keep up with the pace of change as generative AI becomes ever more sophisticated. Relying on software to spot the use of AI in students written work or other assessments may be fruitless, and trying to determine where the human ends and the AI begins may be pointless and futile (Eaton, 2023).

In March 2023, a more advanced model, GPT-4, was released. It is currently available as a paid subscription service and has a waiting list for software developers who want to use it to build new digital applications and services. Other technology companies have promptly released similar AI tools, such as Bing AI from Microsoft and Bard from Google. Other types of generative AI tools, such as image, audio and video generators, have also emerged.

These types of AI tools could be used in many ways in education. The UK's Department for Education (DfE) published a statement on the use of generative AI in education, setting out several key messages.

DfE (2023) also highlighted that generative AI tools can produce unreliable information or content. For example, an AI tool may make up titles and authors of seemingly real papers that are entirely fictitious; as such, critical judgement is needed to check the accuracy and quality of any AI-generated content, whether it is written text, audio, images or videos. Humans must remain accountable for the safe and appropriate use of AI-generated content and they are responsible for how AI tools are developed (Eaton, 2023).
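One practical way to apply that critical judgement is to check whether an AI-generated reference actually resolves to a real publication. The sketch below queries Crossref's public REST API; the example citation string is adapted from this article's own reference list, and a missing match is only a warning sign, not proof of fabrication, just as a match still needs human review.

```python
# Minimal sketch of sanity-checking a (possibly AI-generated) reference
# against Crossref's public REST API.
import requests

citation = "O'Connor S. Teaching artificial intelligence to nursing and midwifery students. 2022"

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": citation, "rows": 3},
    timeout=10,
)
resp.raise_for_status()

# Print the closest bibliographic matches so a human can compare them
for item in resp.json()["message"]["items"]:
    title = (item.get("title") or ["<no title>"])[0]
    doi = item.get("DOI", "<no DOI>")
    print(f"{title} -> https://doi.org/{doi}")
```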

The use of AI in nursing education is just starting. A recent review by O'Connor (2022) found that AI was being used to predict student attrition from nursing courses, academic failure rates, and graduation and completion rates.

Nurse educators and students in many countries may have already started using ChatGPT and other generative AI tools for teaching, learning and assessment. However, they may be hesitant or slow to engage with these new tools, especially if they have a limited understanding of how they work and the problems they may cause. Developing guidelines on how to use these AI tools could support nurse educators, clinical mentors and nursing students in university, hospital and community settings (Koo, 2023; O'Connor and ChatGPT, 2023).

Nurses should leverage the strengths and weaknesses of generative AI tools (outlined in Box 1) to create new learning opportunities for students, while being aware of, and limiting, any threats they pose to teaching and assessment (O'Connor, 2022).

Box 1. Strengths and weaknesses of generative AI tools

Strengths

Weaknesses

AI = artificial intelligence. Sources: O'Connor and Booth (2022), Russell and Norvig (2021)

As generative AI tools can process large amounts of data quickly, they could be used in nursing education to support students in a number of ways. For instance, AI audio or voice generators, which create speech from text, could be used to make podcasts, videos, professional presentations or any media that requires a voiceover more quickly than people can produce. This could enrich online educational resources because a diverse range of AI voices are available to choose from in multiple languages. Some tools also allow you to edit and refine the pitch, speed, emphasis and interjections in the voiceover. This could make digital resources easier for students to listen to and understand, particularly those who have learning disabilities or are studying in a foreign language.
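As a rough illustration of the voiceover idea, the sketch below uses the open-source pyttsx3 library as a stand-in rather than any specific commercial AI voice generator; the lesson text and output file name are made up, and the available voices and audio formats vary by operating system and its installed speech engine.

```python
# Minimal offline sketch of turning lesson text into a narration file with
# pyttsx3 (an assumption; commercial AI voice tools offer far richer voices).
import pyttsx3

lesson_text = (
    "Hand hygiene is the single most effective way to prevent "
    "healthcare-associated infections."
)

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slow the speech slightly for clarity

# Render the narration to an audio file for a podcast, video or presentation;
# the actual file format depends on the system's speech engine.
engine.save_to_file(lesson_text, "hand_hygiene_voiceover.wav")
engine.runAndWait()
```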

A chatbot could, via interactive conversations on their smartphone, encourage students to attend class, speak to a member of faculty or access university services, such as the library or student support (Chang et al, 2022). One designed specifically for nursing students could also be beneficial during a clinical placement, directing them to educational resources, such as books and videos, while they train in hospital and community settings. This may be particularly useful to support learning in those clinical areas in which nurses are very busy or understaffed, or where educational resources are limited or inaccessible.

As generative AI can adjust its responses over time, a chatbot could provide tailored advice and information to a nursing student that aligns with their individual needs and programme outcomes.

Another way nurse educators could support students would be to highlight a key weakness of generative AI: its tendency to confabulate, that is, to fill in knowledge gaps with plausible, but fabricated, information. Nursing students should be taught about this weakness so they can learn to develop the skills necessary to find, appraise, cite and reference the work of others, and critique the outputs of generative AI tools (Eaton, 2023).

Simple exercises comparing the outputs of a chatbot with scientific studies and good-quality news articles from human authors on a range of topics could help students appreciate this flaw. As an example, a chatbot could be asked to explain up-to-date social, cultural or political issues affecting patients and healthcare in different regions and countries. The AI-generated output could be cross-checked by students to determine its accuracy. They could also discuss the impact the AI output could have on nurses, patients and society if it were applied more broadly and assumed to be completely factual and unbiased.

Nurse educators could also use AI-generated text, image, audio or video material to help students explore health literacy. As group work in a computer laboratory, students could use a generative AI tool to create diverse customisable patient education about a health problem and how it might be managed through, for example, diet, exercise, medication and lifestyle changes. Students could be asked to design and refine text prompts to ensure the content that is generated is appropriate, accurate and easy for patients to understand.

Chatbots can also be used to create interactive, personalised learning material and simulations for students. Box 2 illustrates how generative AI has been used in simulation education. Given this example, it is easy to imagine combining realistic text-to-speech synthesis (which we have today) and high-fidelity simulation laboratory manikins. This could support learning by providing engaging and interactive simulations that are less scripted or predetermined than traditional case study simulations.

Box 2. Use of generative AI in simulation education

Context: A two-hour laboratory session with first-year nursing students.

Objective: To create opportunities for students to trial relational communication skills to which they have previously been exposed in lectures.

Simulation: Nursing students were put into small groups and a chatbot was used as a simulated patient in a community health setting. Using relational communication techniques, each group interacted with the chatbot in a scenario it had randomly generated. The patient responded based on what the students typed, with no predetermined storyline. The chatbot allowed several conversational turns, then provided students with a grade and constructive feedback.

Prompt used (GPT-4): Let's simulate relational practice skills used by professional registered nurses:

Results: Students enjoyed the novelty of this activity and the opportunity to deliberately try different question styles in a safe and low-risk context. They thoughtfully and collaboratively put together responses to develop a therapeutic relationship with the patient and their chatbot-assigned grade improved with each scenario tried.

Considerations: Although not a replacement for in-person interaction, this activity provided space for trial and error before students engaged with real patients in clinical contexts. It is important for nursing students to be supervised during an activity like this, as the chatbot occasionally became fixated on minor issues, such as its inability to detect students' eye contact and other body language. When this occurred, the chatbot needed to be restarted in a new chat or context window to function correctly. It is also critical that students be instructed not to input any personally identifiable data into the chatbot, as this information may not remain confidential.

AI = Artificial Intelligence
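For readers curious how a simulated-patient exercise like the one in Box 2 might be wired up, here is a hedged sketch using the OpenAI Python client. The system prompt below is a placeholder written for this illustration (the authors' full prompt is not reproduced in the article), and the "gpt-4" model name is assumed to be available to the account running the code.

```python
# Hedged sketch of a simulated-patient chat loop in the spirit of Box 2.
# The system prompt and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "Role-play a community health patient for first-year nursing "
            "students practising relational communication. Stay in character, "
            "respond to what the students type, and after several turns give "
            "them a grade and constructive feedback."
        ),
    }
]

print("Type your message to the patient (or 'quit' to stop).")
while True:
    # As the article notes, students must not enter personally identifiable data.
    student_input = input("Students: ")
    if student_input.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": student_input})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Patient:", answer)
```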

Nurse educators could leverage another weakness of generative AI to create innovative lesson plans and curricula that teach nursing students about important topics. Bias that is present in health and other data is an important concept for students to understand, as it can perpetuate existing health inequalities. AI tools work solely on digital data, which may contain age, gender, race and other biases if certain groups of people are over- or under-represented in text, image, audio or video datasets (O'Connor and Booth, 2022). For example, an AI tool was trained to detect skin cancer based on a dataset of images that were mainly from fair-skinned people. This might mean that those with darker skin tones (such as Asian, Black and Hispanic people) may not get an accurate diagnosis using this AI tool (Goyal et al, 2020). A case study like this could be used to teach nursing students about bias and the limitations of AI, thereby improving their digital and health literacy.
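A simple audit of dataset composition can make this kind of bias concrete before any model is trained. The sketch below assumes a hypothetical metadata file with made-up column names ("fitzpatrick_type", "label"); it is not drawn from the Goyal et al (2020) study.

```python
# Minimal sketch of auditing how skin-tone groups are represented in an
# image dataset's metadata. File and column names are hypothetical.
import pandas as pd

metadata = pd.read_csv("skin_lesion_metadata.csv")

# How many images exist per Fitzpatrick skin type?
counts = metadata["fitzpatrick_type"].value_counts().sort_index()
print(counts)

# Share of malignant cases within each skin-type group
share_malignant = (
    metadata.groupby("fitzpatrick_type")["label"]
    .apply(lambda s: (s == "malignant").mean())
)
print(share_malignant)

# A group with very few (or no) images is a warning sign that the model's
# accuracy for patients with that skin tone cannot be trusted.
```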

Finally, nursing students will need to be vigilant with their use of AI tools to avoid accusations of plagiarism or other academic misconduct (O'Connor and ChatGPT, 2023). They should be supported by nursing faculty and nurses in practice to disclose and discuss their use of generative AI as it relates to professional accountability. This could help reduce the risks of inappropriate use of AI tools and ensure nursing students adhere to professional codes of conduct.

The field of AI is evolving quickly, with new generative AI tools and applications appearing frequently. There is some concern about whether the nursing profession can, or should, engage with these digital tools while they are in the early stages of development. However, the reality is that students have access to AI tools and attempts to ban them could well do more harm than good. Furthermore, as patients and health professionals will likely start using these tools, nurses cannot ignore this technological development. What is needed during this critical transition is up-to-date education about these new digital tools as they are here to stay and will, undoubtedly, improve over time.

A curious, cautious and collaborative approach to learning about AI tools should be pursued by educators and their students, with a focus on enhancing critical thinking and digital literacy skills while upholding academic integrity. Wisely integrating AI tools into nursing education could help to prepare nursing students for a career in which nurses, patients and other professionals use AI tools every day to improve patient health outcomes.

Cassidy C (2023) College student claims app can detect essays written by chatbot ChatGPT. theguardian.com, 11 January (accessed 6 September 2023).

Chang CY et al (2022) Promoting students' learning achievement and self-efficacy: a mobile chatbot approach for nursing training. British Journal of Educational Technology; 53: 1, 171-188.

Choi EPH et al (2023) Chatting or cheating? The impacts of ChatGPT and other artificial intelligence language models on nurse education. Nurse Education Today; 125: 105796.

Department for Education (2023) Generative Artificial Intelligence in Education. DfE

Dwivedi YK et al (2023) So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management; 71: 102642.

Eaton SE (2023) 6 tenets of postplagiarism: writing in the age of artificial intelligence. drsaraheaton.wordpress.com, 25 February (accessed 6 September 2023).

Goyal M et al (2020) Artificial intelligence-based image classification methods for diagnosis of skin cancer: challenges and opportunities. Computers in Biology and Medicine; 127: 104065.

Joint Information Systems Committee (2023) National Centre for AI: accelerating the adoption of artificial intelligence across the tertiary education sector. beta.jisc.ac.uk (accessed 6 September 2023).

Joint Information Systems Committee (2022) AI in Tertiary Education: A Summary of the Current State of Play. JISC (accessed 17 April 2023).

Koo M (2023) Harnessing the potential of chatbots in education: the need for guidelines to their ethical use. Nurse Education in Practice; 68, 103590.

O'Connor S (2022) Teaching artificial intelligence to nursing and midwifery students. Nurse Education in Practice; 64: 103451.

O'Connor S et al (2022) Artificial intelligence in nursing and midwifery: a systematic review. Journal of Clinical Nursing; 32: 13-14, 3130-3137.

O'Connor S, Booth RG (2022) Algorithmic bias in health care: opportunities for nurses to improve equality in the age of artificial intelligence. Nursing Outlook; 70: 6, 780-782.

O'Connor S, ChatGPT (2023) Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Education in Practice; 66: 103537.

Office for Artificial Intelligence (2021) National AI Strategy. HM Government.

Okonkwo CW, Ade-Ibijola A (2021) Chatbots applications in education: a systematic review. Computers and Education: Artificial Intelligence; 2: 100033.

Russell S, Norvig P (2021) Artificial Intelligence: A Modern Approach. Pearson.

van Dis EAM et al (2023) ChatGPT: five priorities for research. Nature; 614: 7947, 224-226.

Wollny S et al (2021) Are we there yet? A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence; 4, 654924.

Yang M (2023) New York City schools ban AI chatbot that writes essays and answers prompts. theguardian.com; 6 January (accessed 16 April 2023).


Follow this link:
Artificial intelligence in nursing education 1: strengths and ... - Nursing Times

Artificial Intelligence: A step change in climate modeling predictions for climate adaptation – Phys.org


by CMCC Foundation - Euro-Mediterranean Center on Climate Change


As of today, climate models face the challenge of providing the high-resolution predictions, with quantified uncertainties, needed by a growing number of adaptation planners, from local decision-makers to the private sector, who require detailed assessments of the climate risks they may face locally.

This calls for a step change in the accuracy and usability of climate predictions that, according to the authors of the paper "Harnessing AI and computing to advance climate modeling and prediction," can be brought by Artificial Intelligence.

The Comment was published in Nature Climate Change by a group of international climate scientists, including CMCC Scientific Director Giulio Boccaletti and CMCC President Antonio Navarra.

One proposed approach for a step change in climate modeling is to focus on global models with 1-km horizontal resolution. However, the authors explain, although kilometer-scale models have been referred to as "digital twins" of Earth, they still have limitations and biases similar to current models. Moreover, given the high computational costs, they impose limitations on the size of simulation ensembles, which are needed both to calibrate the unavoidable empirical models of unresolved processes and to quantify uncertainties.

Overall, kilometer-scale models do not offer the step change in accuracy that would justify accepting the limitations that they impose.

Rather than prioritizing kilometer-scale resolution, the authors propose a balanced approach focused on generating large ensembles of simulations at moderately high resolution (10–50 km, up from around 100 km, which is standard today) that capitalizes on advances in computing and AI to learn from data.

By moderately increasing global resolution while extensively harnessing observational and simulated data, this approach is more likely to achieve the objective of climate modeling for risk assessment, which involves minimizing model errors and quantifying uncertainties, and it enables wider adoption.

1,000 simulations at 10-km resolution cost the same as 1 simulation at 1-km resolution. "Although we should push the resolution frontier as computer performance increases, climate modeling in the next decade needs to focus on resolutions in the 10–50 km range," write the authors. "Importantly, climate models must be developed so that they can be used and improved on through rapid iteration in a globally inclusive and distributed research program that does not concentrate resources in the few monolithic centers that would be needed if the focus is on kilometer-scale global modeling."
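The rough arithmetic behind that cost comparison can be sketched as follows, assuming the common rule of thumb that cost grows with the cube of the horizontal refinement factor (two horizontal dimensions plus a proportionally shorter time step, with vertical resolution held fixed); the paper's own cost model may differ in detail.

```python
# Rough arithmetic behind the 1,000-to-1 cost comparison in the text,
# assuming cost scales with the cube of the horizontal refinement factor.
coarse_km = 10
fine_km = 1
refinement = coarse_km / fine_km      # 10x finer grid spacing

cost_ratio = refinement ** 3          # ~1000x more expensive per simulation
print(f"One {fine_km}-km run costs about {cost_ratio:.0f}x a {coarse_km}-km run")
print(f"So ~{cost_ratio:.0f} simulations at {coarse_km} km fit in the same budget")
```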

More information: Tapio Schneider et al, Harnessing AI and computing to advance climate modelling and prediction, Nature Climate Change (2023). DOI: 10.1038/s41558-023-01769-3

Journal information: Nature Climate Change

Provided by CMCC Foundation - Euro-Mediterranean Center on Climate Change

Originally posted here:
Artificial Intelligence: A step change in climate modeling predictions for climate adaptation - Phys.org

Guide to Artificial Intelligence ETFs – Zacks Investment Research

Robots and artificial intelligence (AI) are increasingly gaining precedence in our daily life. The pandemic-driven stay-at-home trend made these more important as we have become more dependent on technology. The growing accessibility and falling costs are also making the space more demanding and lucrative.

The global artificial intelligence (AI) market size was valued at $454.12 billion in 2022 and is expected to hit around $2,575.16 billion by 2032, growing at a CAGR of 19% from 2023 to 2032, per Precedence Research. The recent success of ChatGPT also made the space even more intriguing. ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022.
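As a quick sanity check, the implied compound annual growth rate (CAGR) can be recomputed from the two quoted market values; the figures below are simply the ones cited from Precedence Research.

```python
# Recompute the implied CAGR from the quoted market-size figures.
start_value = 454.12      # $ billion, 2022
end_value = 2575.16       # $ billion, 2032 (projected)
years = 10

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~18.9%, consistent with the ~19% quoted
```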

It is constructed on top of OpenAI's GPT-3 family of large language models and has been modified further using both supervised and reinforcement learning techniques. OpenAI has now been working on a more powerful version of the ChatGPT system called GPT-4, which is set to be released in 2023.

Artificial intelligence can transform the productivity and GDP potential of the global economy, per a PwC article. PwC's research reveals that 45% of total economic gains by 2030 will come from product enhancements, boosting consumer demand.

This will be possible because AI will bring about product variety, with increased personalization and affordability. The maximum economic benefit from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost), per PwC.

As AI continues to evolve and reshape our world, Nvidia (NVDA - Free Report) stands at the forefront, ready to harness the potential of a $600 billion market. With sustainability in mind and a track record of innovation, Nvidia's vision for accelerated computing promises a brighter future powered by AI-driven technology. Nvidia exec Manuvir Das recently presented some interesting numbers on the market for AI technology, as quoted on Yahoo Finance.

Das noted that the $600 billion total addressable market comprises three major segments:

Chips and Systems ($300 Billion): The foundation of AI, hardware like GPUs and specialized AI chips will play a crucial role in powering AI applications across various industries.

Generative AI Software ($150 Billion): Software that generates content, such as ChatGPT, is gaining traction and transforming creative processes, content generation, and data analysis.

Omniverse Enterprise Software ($150 Billion): Enterprise solutions that leverage AI to enhance productivity, collaboration, and innovation within organizations.

Manuvir Das pointed out that the industry is still in its early stages when it comes to accelerated computing. He drew a parallel between traditional CPU-based computing and the transformative potential of accelerated computing.

As computing becomes increasingly integral to business operations, the demand for data centers, energy, and processing power escalates. This growth pattern, Das argued, is unsustainable without a fundamental shift towards accelerated computing.

No wonder, big tech companies are tapping the space with full vigor. Microsoft (MSFT - Free Report) is investing billions into OpenAI, the creator of ChatGPT, and launched its new AI-powered Bing search and Edge browser. CEO Satya Nadella told CNBC that AI is the biggest thing to have happened to the company since he took over.

Alphabet (GOOGL - Free Report), which has invested heavily in AI and machine learning over the past few years, rushed to roll out its chatbot competitor, Bard. However, Bard failed to see initial success as it gave inaccurate information. Meta Platforms (META - Free Report) released a new AI tool, LLaMA. Baidu (BIDU - Free Report) launched the ChatGPT-style Ernie Bot.

Amazon (AMZN - Free Report) is also not far behind. In a nutshell, the AI war among tech behemoths is heating up as generative technologies capture investors' attention. Beyond these big tech companies, there are many small-scale AI companies that can be tapped in one go with the ETF approach.

Against this backdrop, below, we highlight a few artificial intelligence ETFs that are great bets now.

AI Powered Equity ETF (AIEQ - Free Report)

The AI Powered Equity ETF is actively managed and seeks capital appreciation by investing primarily in equity securities listed on a U.S. exchange based on the results of a proprietary, quantitative model. The fund charges 75 bps in fees.

ROBO Global Robotics and Automation Index ETF (ROBO - Free Report)

The underlying ROBO Global Robotics and Automation Index measures the performance of companies which derive a portion of revenues and profits from robotics-related or automation-related products or services. The fund charges 95 bps in fees.

Global X Robotics & Artificial Intelligence ETF (BOTZ - Free Report)

The underlying Indxx Global Robotics & Artificial Intelligence Thematic Index invests in companies that potentially stand to benefit from increased adoption and utilization of robotics and artificial intelligence, including those involved with industrial robotics and automation, non-industrial robots, and autonomous vehicles. The fund charges 69 bps in fees.

iShares Robotics And Artificial Intelligence Multisector ETF (IRBO - Free Report)

The underlying NYSE FactSet Global Robotics and Artificial Intelligence Index is composed of equity securities of companies primarily listed in one of 43 developed or emerging market countries that are the most involved in, or exposed to, one of the 22 robotics and artificial intelligence-related FactSet Revere Business Industry Classification Systems. The fund charges 47 bps in fees.

First Trust Nasdaq Artificial Intelligence and Robotics ETF (ROBT - Free Report)

The underlying Nasdaq CTA Artificial Intelligence and Robotics Index is designed to track the performance of companies engaged in Artificial intelligence, robotics and automation. The fund charges 65 bps in fees.
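To put the quoted expense ratios in perspective, the short sketch below converts basis points into annual dollar fees on a hypothetical $10,000 investment; the fee figures are the ones quoted in the article, the investment amount is made up, and fund fees can change over time.

```python
# Convert the quoted basis-point expense ratios into annual dollar fees.
# 1 basis point (bps) = 0.01% per year.
funds_bps = {"AIEQ": 75, "ROBO": 95, "BOTZ": 69, "IRBO": 47, "ROBT": 65}
investment = 10_000  # hypothetical dollars

for ticker, bps in funds_bps.items():
    annual_fee = investment * bps / 10_000
    print(f"{ticker}: {bps} bps -> ${annual_fee:.2f} per year")
```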


Read this article:
Guide to Artificial Intelligence ETFs - Zacks Investment Research

Artificial intelligence experts from around the world converge in Faial … – Fall River Herald News

HORTA - About 150 experts from around the world are in Faial, Azores, debating the present and future of artificial intelligence.

"I am very proud to see the island where I was born become an epicenter of interdisciplinary debate on the future of artificial intelligence, an area to which I have dedicated my research," said Dr. Nuno Moniz, a professor at the University of Notre Dame, who is coordinating the event.

Holder of a PhD in Computer Science from the University of Porto, Moniz joined the University of Notre Dame in August 2022 as an Associate Research Professor at the Lucy Family Institute for Data & Society. In March 2023, he was named the Associate Director of the Data, Inference, Analysis, and Learning (DIAL) Lab.

"The main objective of the event is to promote discussion among our colleagues: science is made up of encounters and disagreements, exposure to other ideas and points of view," Dr. Moniz told O Jornal. "That's what we're betting on with this event, in the hope that it will serve as a starting point for new collaborations and research programs."

Organized by the Portuguese Association for Artificial Intelligence (APPIA), the event is taking place from Sept. 5 to 8, featuring 17 panel discussions on a wide variety of topics, ranging from ethics and responsibility in the development of artificial intelligence to its application in the arts and creativity.

In addition to Dr. Moniz, the U.S. delegation includes several representatives from Carnegie Mellon University, and Prof. Nitesh Chawla from the University of Notre Dame is one of the keynote speakers.

"The potential of artificial intelligence is immense, and it is precisely its application, in the most diverse areas, that could bring important innovations, with the potential to help solve society's urgent problems, such as the sustainability of our oceans," said Dr. Moniz.

This is the second time the Azores has hosted the Portuguese conference on artificial intelligence. Ten years ago, Angra do Heroísmo, Terceira, served as the stage for a similar event.

Some Lusa material used in this report

Go here to read the rest:
Artificial intelligence experts from around the world converge in Faial ... - Fall River Herald News

Artificial Intelligence and Robotics in Aerospace and Defense Market Quantitative and Qualitative Analysi – Benzinga

"The Best Report Benzinga Has Ever Produced"

Massive returns are possible within this market! For a limited time, get access to the Benzinga Insider Report, usually $47/month, for just $0.99! Discover extremely undervalued stock picks before they skyrocket! Time is running out! Act fast and secure your future wealth at this unbelievable discount! Claim Your $0.99 Offer NOW!

Advertorial

Recent research on the "Artificial Intelligence and Robotics in Aerospace and Defense Market" offers a thorough analysis of market growth prospects, as well as current segmentation trends by type [Hardware, Software, Service] and application [Military, Commercial Aviation, Space] on a worldwide scale. The SWOT analysis, CAGR status, and revenue estimates of stakeholders are the main topics of the report. The research [107 pages] also provides a thorough analysis of market segmentations, new industrial developments, and expansion plans across major geographical regions.

The report illustrates the market's dynamic nature by highlighting driving growth factors and the most recent technological advancements. It provides a comprehensive view of the industry landscape by integrating strategic evaluation of leading competitors, historic and current market performance, and fresh investment prospects. The report's credibility is further increased by its review of the scope of supply and demand relationships, trade figures, and manufacturing cost structures.



Market Analysis and Insights: Global Artificial Intelligence and Robotics in Aerospace and Defense Market

Artificial Intelligence and Robotics in Aerospace and Defense Market report elaborates the market size, market characteristics, and market growth of the Artificial Intelligence and Robotics in Aerospace and Defense industry, and breaks down according to the type, application, and consumption area of Artificial Intelligence and Robotics in Aerospace and Defense. The report also conducted a PESTEL analysis of the industry to study the main influencing factors and entry barriers of the industry.

Major Players in Artificial Intelligence and Robotics in Aerospace and Defense market are:

Get a Sample Copy of the report: https://www.absolutereports.com/enquiry/request-sample/17125928

Artificial Intelligence and Robotics in Aerospace and Defense Market by Types:

Artificial Intelligence and Robotics in Aerospace and Defense Market by Applications:

Artificial Intelligence and Robotics in Aerospace and Defense Market Key Points:

To Understand How Covid-19 Impact Is Covered in This Report - https://www.absolutereports.com/enquiry/request-covid19/17125928

Geographically, the detailed analysis of consumption, revenue, market share and growth rate, historical data and forecast:

Outline

Chapter 1 mainly defines the market scope and introduces the macro overview of the industry, with an executive summary of different market segments ((by type, application, region, etc.), including the definition, market size, and trend of each market segment.

Chapter 2 provides a qualitative analysis of the current status and future trends of the market. Industry Entry Barriers, market drivers, market challenges, emerging markets, consumer preference analysis, together with the impact of the COVID-19 outbreak will all be thoroughly explained.

Chapter 3 analyzes the current competitive situation of the market by providing data regarding the players, including their sales volume and revenue with corresponding market shares, price and gross margin. In addition, information about market concentration ratio, mergers, acquisitions, and expansion plans will also be covered.

Chapter 4 focuses on the regional market, presenting detailed data (i.e., sales volume, revenue, price, gross margin) of the most representative regions and countries in the world.

Chapter 5 provides the analysis of various market segments according to product types, covering sales volume, revenue along with market share and growth rate, plus the price analysis of each type.

Chapter 6 shows the breakdown data of different applications, including the consumption and revenue with market share and growth rate, with the aim of helping the readers to take a close-up look at the downstream market.

Chapter 7 provides a combination of quantitative and qualitative analyses of the market size and development trends in the next five years. The forecast information of the whole, as well as the breakdown market, offers the readers a chance to look into the future of the industry.

Chapter 8 is the analysis of the whole market industrial chain, covering key raw materials suppliers and price analysis, manufacturing cost structure analysis, alternative product analysis, also providing information on major distributors, downstream buyers, and the impact of COVID-19 pandemic.

Chapter 9 shares a list of the key players in the market, together with their basic information, product profiles, market performance (i.e., sales volume, price, revenue, gross margin), recent development, SWOT analysis, etc.

Chapter 10 is the conclusion of the report which helps the readers to sum up the main findings and points.

Chapter 11 introduces the market research methods and data sources.

Major Questions Addressed in the Report:

Inquire or Share Your Questions If Any before the Purchasing This Report - https://www.absolutereports.com/enquiry/pre-order-enquiry/17125928

Detailed TOC of Global Artificial Intelligence and Robotics in Aerospace and Defense Industry Research Report

1 Artificial Intelligence and Robotics in Aerospace and Defense Market - Research Scope

1.1 Study Goals


1.2 Market Definition and Scope

1.3 Key Market Segments

1.4 Study and Forecasting Years

2 Artificial Intelligence and Robotics in Aerospace and Defense Market - Research Methodology

2.1 Methodology

2.2 Research Data Source

2.2.1 Secondary Data

2.2.2 Primary Data

2.2.3 Market Size Estimation

2.2.4 Legal Disclaimer

3 Artificial Intelligence and Robotics in Aerospace and Defense Market Forces

3.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Market Size

3.2 Top Impacting Factors (PESTEL Analysis)

3.2.1 Political Factors

3.2.2 Economic Factors

3.2.3 Social Factors

3.2.4 Technological Factors

3.2.5 Environmental Factors

3.2.6 Legal Factors

3.3 Industry Trend Analysis

3.4 Industry Trends Under COVID-19

3.4.1 Risk Assessment on COVID-19

3.4.2 Assessment of the Overall Impact of COVID-19 on the Industry

3.4.3 Pre COVID-19 and Post COVID-19 Market Scenario

3.5 Industry Risk Assessment


4 Artificial Intelligence and Robotics in Aerospace and Defense Market - By Geography

4.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Market Value and Market Share by Regions

4.1.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Value ($) by Region (2015-2020)

4.1.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Value Market Share by Regions (2015-2020)

4.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Market Production and Market Share by Major Countries

4.2.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Production by Major Countries (2015-2020)

4.2.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Production Market Share by Major Countries (2015-2020)

4.3 Global Artificial Intelligence and Robotics in Aerospace and Defense Market Consumption and Market Share by Regions

4.3.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Consumption by Regions (2015-2020)

4.3.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Consumption Market Share by Regions (2015-2020)

5 Artificial Intelligence and Robotics in Aerospace and Defense Market - By Trade Statistics

5.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Export and Import

5.2 United States Artificial Intelligence and Robotics in Aerospace and Defense Export and Import (2015-2020)

5.3 Europe Artificial Intelligence and Robotics in Aerospace and Defense Export and Import (2015-2020)

5.4 China Artificial Intelligence and Robotics in Aerospace and Defense Export and Import (2015-2020)

5.5 Japan Artificial Intelligence and Robotics in Aerospace and Defense Export and Import (2015-2020)

5.6 India Artificial Intelligence and Robotics in Aerospace and Defense Export and Import (2015-2020)

6 Artificial Intelligence and Robotics in Aerospace and Defense Market - By Type

6.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Production and Market Share by Types (2015-2020)

6.1.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Production by Types (2015-2020)

6.1.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Production Market Share by Types (2015-2020)

6.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Value and Market Share by Types (2015-2020)

6.2.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Value by Types (2015-2020)

6.2.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Value Market Share by Types (2015-2020)

7 Artificial Intelligence and Robotics in Aerospace and Defense Market - By Application

7.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Consumption and Market Share by Applications (2015-2020)

7.1.1 Global Artificial Intelligence and Robotics in Aerospace and Defense Consumption by Applications (2015-2020)

7.1.2 Global Artificial Intelligence and Robotics in Aerospace and Defense Consumption Market Share by Applications (2015-2020)

8 North America Artificial Intelligence and Robotics in Aerospace and Defense Market

8.1 North America Artificial Intelligence and Robotics in Aerospace and Defense Market Size

8.2 United States Artificial Intelligence and Robotics in Aerospace and Defense Market Size

8.3 Canada Artificial Intelligence and Robotics in Aerospace and Defense Market Size

8.4 Mexico Artificial Intelligence and Robotics in Aerospace and Defense Market Size

8.5 The Influence of COVID-19 on North America Market

9 Europe Artificial Intelligence and Robotics in Aerospace and Defense Market Analysis

9.1 Europe Artificial Intelligence and Robotics in Aerospace and Defense Market Size

9.2 Germany Artificial Intelligence and Robotics in Aerospace and Defense Market Size

See the rest here:
Artificial Intelligence and Robotics in Aerospace and Defense Market Quantitative and Qualitative Analysi - Benzinga

China leads the world in artificial intelligence; India tries catch-up – ETTelecom

Microsoft co-founder Bill Gates had called artificial intelligence (AI) only the second revolutionary tech advancement in his lifetime, the first being graphical user interface (GUI), the foundation upon which Windows was built.

In a blog post in March, he called AI development similar to other tech inventions such as microprocessors, mobile phones and internet.

In fact, this war is not only among companies but among countries too. The table below shows the dominance of China in terms of general AI-related patent applications compared to its closest peers.


Generative AI: Order of impact across supersectors

Financials and fintech: Improved customer experience, fraud detection and prevention, business risk management and decision making.

Healthcare: Drug discovery and design, recruitment, optimisation of sales calls.

Industrial tech and mobility: Consumer facing and interactive applications, autonomous driving research.

Natural resources and climate tech: Help with higher resource- and asset-efficiencies

Consumer: Mass customisation and personalisation, product authenticity, facial recognition.

Real estate: Chatbots, smart buildings, generative AI adoption will increase demand for data centres.


Follow this link:
China leads the world in artificial intelligence; India tries catch-up - ETTelecom

Researchers are developing artificial intelligence that will detect … – Sciencenorway

Artificial intelligence (AI) can be useful in healthcare.

AI can help with interpreting images and free up time for the radiologists.

A new study from Sweden showed, for example, that AI-supported mammography led to 20 per cent more cancer cases being detected, according to NRK (link in Norwegian).

The EU research project AI-Mind focuses on artificial intelligence and health.

The goal is to be able to identify who in the group with mildcognitive impairment is at high risk of developing dementia. They could be identifiedseveral years before a diagnosis is made today.

The research is led by Ira Haraldsen at Oslo UniversityHospital.

People with mild cognitive impairment have begun to experience that their memory is failing and have some problems with reasoning and attention. But that does not necessarily mean they have, or will develop,dementia.

The background for our project is a worldwide clinicalneed. Currently, we are not able to predict your risk of developing dementia ifyou are affected by mild cognitive impairment, Ira Haraldsen said during a recent event at Arendalsuka, an annual political festival in Norway.

She believes that the dementia diagnosis comes too late. Itcomes after clear symptoms have appeared.

By then, you can alleviate symptoms but you cant affect thecourse of the disease. What we want is to shift the diagnosisinto another time window, she said.

The research group plans to create a tool based on artificial intelligence for screening, or mass examination, of the population, Haraldsen explains in an interview with sciencenorway.no. Screening involves examining healthy people to detect disease, or precursors to disease, before symptoms appear.

"The dream is population-based screening of, for example, all 55-year-olds," she said.

If it turns out that you are at high risk, you will be followed up, and all risk factors contributing to dementia should be corrected, according to Haraldsen.

Ira Haraldsen is a psychiatrist and researcher at Oslo University Hospital. (Photo: Tone Herregården)

The study will include 1,000 participants from Norway, Finland, Italy, and Spain.

Participants from Norway and Italy have already been recruited. There are still some missing from Spain and Finland.

The participants are between 60 and 80 years old and have mild cognitive impairment.

"What is interesting is that among people with mild cognitive impairment, 50 per cent develop dementia and 50 per cent do not. Doctors today don't know which group you belong to," Haraldsen said.

Researchers in AI-Mind aim to separate these two groups. Who is on the way to developing dementia, and who can be reassured?

Karin Persson is a postdoctoral fellow at the Norwegian National Centre for Ageing and Health and researches dementia.

She is not part of the project and writes in an email to sciencenorway.no that AI-Mind is one of several large projects now trying to find effective ways to diagnose cognitive impairment and dementia early on.

"Common to these new projects is the use of artificial intelligence and a focus on developing methods that can predict which people with early symptoms will develop dementia," she said.

"The difference between the various projects is the variables they input into the models: whether it's cognitive tests, EEG, MRI images, genetic data, biomarkers from spinal fluid, blood, or other imaging diagnostics," Persson explained.

"I think artificial intelligence is here to stay. I believe this is the way forward for effective diagnostics in this field," she writes.

Participants in AI-Mind will take part in four studies over two years.

An electroencephalography (EEG) examination will be conducted. In this examination, a cap with electrodes measures electrical activity in the brain.

Blood samples are taken, and participants take a test of their ability to think and remember.

Over the two years, researchers will see who gets worse and who stays the same or improves.

Two algorithms will be trained to predict this. One is trained on EEG examinations. It analyses how different areas of the brain communicate with each other.

"It has been known for a long time that this changes when dementia develops," Haraldsen explains.

An example of EEG. (Photo: Svitlana Hulko / Shutterstock / NTB)

Haraldsen compares what happens in the brain to a football team.

"When you're very good at football, the ball is constantly passed from one player to another, back and forth. Then suddenly, Haaland storms towards the goal and manages to score," she said at the event. "That's how the brain is also constructed. It works all the time, whether we are asleep or awake. All areas are chaotically in contact all the time. Then a task comes along, and we do it."

You can see the difference between a football team that collaborates well and one that doesn't. In the latter, maybe only two players pass to each other and exclude the others.

"This is something that happens in the early stages of dementia and mild cognitive impairment. Some areas communicate too frequently with each other, and others are given lower priority," Haraldsen said.
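To make this concrete, below is a minimal Python sketch of one common way such channel-to-channel communication can be quantified from EEG: spectral coherence between electrode pairs, averaged over the alpha band. The sampling rate, channel count, band limits, and synthetic data are illustrative assumptions; this is not the AI-Mind pipeline.

# Illustrative sketch, not the AI-Mind method: quantify how strongly EEG channels
# couple with each other using spectral coherence in the alpha band (8-12 Hz).
import numpy as np
from scipy.signal import coherence

fs = 250                                 # sampling rate in Hz (assumed)
n_channels, n_samples = 4, fs * 60       # four channels, one minute of synthetic data
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_channels, n_samples))   # stand-in for a real recording

connectivity = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        freqs, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs * 2)
        alpha = (freqs >= 8) & (freqs <= 12)
        connectivity[i, j] = connectivity[j, i] = cxy[alpha].mean()

print(np.round(connectivity, 2))         # symmetric matrix of pairwise coupling

A matrix like this, computed per participant, is the kind of "who talks to whom" summary that a classifier could then learn from.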

Researchers are testing two types of artificial intelligence, which Haraldsen describes as classical machine learning and deep learning.

They are asked to divide the participants into two groups based on the EEG examinations: those who will deteriorate and those who will not.

The classical machine learning algorithm is asked to look for characteristics that researchers already know are indications of early dementia.

"The deep learning model has more freedom and finds its own patterns. Many experts believe that this is the future of artificial intelligence in healthcare," Haraldsen explained. "But it is more challenging to understand what the machine is doing and how it arrives at the answer."
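As a rough illustration of that contrast, and assuming made-up data, feature choices, and model settings rather than anything from the project, the sketch below trains a classical model on a handful of expert-chosen features and a small neural network on the full feature set.

# Hedged illustration of "classical machine learning vs deep learning" on EEG-derived
# features; all data, feature choices, and model settings here are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_participants, n_features = 200, 64     # e.g. flattened 8x8 connectivity matrices
X = rng.standard_normal((n_participants, n_features))
y = rng.integers(0, 2, n_participants)   # 1 = deteriorates, 0 = stays stable (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classical route: only a few features that experts consider informative
expert_features = [0, 5, 12]             # arbitrary columns standing in for known markers
classical = LogisticRegression().fit(X_train[:, expert_features], y_train)

# Deep learning route: a small neural network sees everything and finds its own patterns
deep = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
deep.fit(X_train, y_train)

print("classical accuracy:", classical.score(X_test[:, expert_features], y_test))
print("deep accuracy:", deep.score(X_test, y_test))

On real data, the trade-off Haraldsen describes would show up here as the interpretability of the logistic model's few coefficients versus the opacity of the network's learned weights.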

Researchers are comparing whether humans and machines come to the same result.

Eventually, a new artificial intelligence will use the EEG analysis, along with the results from the blood tests and the mental test, to say who is at high risk of developing dementia.

It is known that changes in blood tests can be measured several years before a patient receives a dementia diagnosis. Blood tests that can reveal the beginning of the disease have already been successfully tested.
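A minimal sketch of that final step, combining an EEG-based score, a blood biomarker, and a cognitive test score into one risk estimate, might look as follows; the model choice, feature names, and data are assumptions, not the project's actual design.

# Hedged sketch: fuse an EEG connectivity score, a blood biomarker, and a cognitive
# test score into a single dementia-risk probability. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n = 300
features = np.column_stack([
    rng.standard_normal(n),   # EEG-based score (e.g. output of the first model, assumed)
    rng.standard_normal(n),   # blood biomarker level (assumed)
    rng.standard_normal(n),   # cognitive test score (assumed)
])
labels = rng.integers(0, 2, n)  # synthetic: 1 = developed dementia during follow-up

risk_model = GradientBoostingClassifier(random_state=0).fit(features, labels)

# For a new participant, the predicted probability can be read as an estimated risk
new_participant = np.array([[0.8, -1.2, 0.3]])
print("estimated risk:", risk_model.predict_proba(new_participant)[0, 1])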

The researchers believe that the AI they are developing can uncover early-stage dementia two to three years before a diagnosis is usually made. Later, it may be possible to push the time window another couple of years earlier, according to Haraldsen.

The artificial intelligence is planned to be ready for use in 2026. But Haraldsen points out that part two of the study will be necessary for the algorithm to be approved for the market.

The results from the artificial intelligence must be compared with the most reliable way to diagnose dementia, which is to take a spinal fluid sample and do imaging diagnostics with MRI or PET, Haraldsen explains.
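To illustrate what such a comparison against reference diagnostics could look like in code, the sketch below, using entirely made-up arrays in place of spinal fluid, MRI, or PET findings, reports sensitivity and specificity at an assumed decision threshold.

# Illustrative only: compare the model's predictions with a gold-standard diagnosis
# and report sensitivity and specificity. Data and threshold are assumptions.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
gold_standard = rng.integers(0, 2, 100)                 # 1 = dementia confirmed by reference tests
predicted_risk = rng.random(100)                        # model output between 0 and 1
predicted_label = (predicted_risk >= 0.5).astype(int)   # assumed decision threshold

tn, fp, fn, tp = confusion_matrix(gold_standard, predicted_label).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))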

Today there is no treatment that can cure dementia or Alzheimer's disease. Is there then any advantage to being diagnosed earlier than today?

Karin Persson at the National Centre for Ageing and Health answers:

There is much ongoing research on the development of disease-modifying drugs, that is, medicines that do not only affect the symptoms of dementia but can actually stop the disease from progressing.

This especially applies to Alzheimer's disease, the most common cause of dementia worldwide.

"If these types of medications are to be effective, it will be crucial to start treatment early in the disease process, before the brain is too badly damaged," she says.

There is a reason why there is a focus on early diagnostics now.

"At the same time, there may be ethical challenges with giving an early diagnosis, especially in cases where the disease cannot be treated. People who work with this are concerned about these ethical challenges," Persson says.

"The diagnosis must come at the right time. However, there are treatments relevant to dementia other than medication, even if we currently have no cure that can stop it," she says.

Patients who notice changes in their memory and thinking, i.e., their cognitive function, often want information about the cause, Persson continues.

"But it is essential that we strike the right balance and that ethical considerations are included in guidelines for diagnosis," she says.

There is a lot happening in the field when it comes to early diagnosis and medication.

"Three new drugs have been approved in the USA (link in Norwegian). They are based on removing amyloid plaques, deposits of a protein that accumulates in the brain in Alzheimer's disease," Persson explains.

They are being assessed by the European Medicines Agency. However, the effects of the medications are relatively small, and the side effects are potentially serious, the researcher explains.

"So far, the follow-up time in the studies has been relatively short, and it will be interesting to see how the patients fare over a longer period," she says.

"Overall, these are not medications that will be given to all people with Alzheimer's disease. The patient's disease stage, the risk of side effects, the expected effect, and the price will be important factors," Persson explains.

Regarding early diagnosis, an important development is methods for looking at markers in the blood. There have been good results here, which can make early diagnosis easier.

"Again, you have to have a clear idea of who will be tested when these methods become clinically available. Currently, they are used in research in Norway, with ethical principles in mind," Persson says.

Conflict of interest: Ira Haraldsen is chairman and co-founder of the company BrainSymph, a spin-off of the AI-Mind project.

Translated by Alette Bjordal Gjellesvik.

Read the Norwegian version of this article on forskning.no

Read the original post:
Researchers are developing artificial intelligence that will detect ... - Sciencenorway