Category Archives: Artificial Intelligence

What are the four main types of artificial intelligence? Find out how future AI programs can change the world – Fox News

Over the last few years, the rapid development of artificial intelligence has taken the world by storm as many experts believe machine learning technology will fundamentally alter the way of life for all humans.

The general idea of artificial intelligence is that it represents the ability to mimic human consciousness and can therefore complete tasks that only humans could previously do. Artificial intelligence has various uses, such as making optimal decisions in a chess match, driving a family of four across the United States, or writing a 3,000-word essay for a college student.

Read below to understand the concepts and abilities of the four categories of artificial intelligence.


The most basic form of artificial intelligence is reactive machines, which react to an input with a simplistic output programmed into the machine. In this form of AI, the program does not actually learn a new concept or have the ability to make predictions based on a dataset. During this first stage of AI, reactive machines do not store inputs and, therefore, cannot use past decisions to inform current ones.

The simplest type of artificial intelligence is seen in reactive machines, which were used in the late 1990s to defeat the world's best chess players. (REUTERS/Dado Ruvic/Illustration)

Reactive machines best exemplify the earliest form of artificial intelligence. By making the most optimal decisions based on their opponents' moves, reactive machines were capable of beating the world's best chess players in the late 1990s. The world was shocked when IBM's chess computer, Deep Blue, defeated chess grandmaster Garry Kasparov during their rematch in 1997.

Reactive machines have the ability to generate thousands of different possibilities in the present based on input; however, the AI ignores all other forms of data in the present moment, and no actual learning occurs. Regardless, this programming led the way to machine-learning computing and introduced the unique power of artificial intelligence to the public for the first time.
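The decision procedure behind such systems can be sketched in a few lines of code. Below is a minimal, illustrative minimax search; the helper callbacks (legal_moves, apply_move, evaluate) are hypothetical placeholders, and Deep Blue's real search and evaluation were vastly more sophisticated.

```python
# Minimal sketch of a reactive game-playing machine: exhaustive game-tree
# search from the current position only. Nothing is stored between moves,
# so past games cannot inform the current decision.

def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Score a position by searching every continuation to a fixed depth."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # static score of the present position
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)

def best_move(state, depth, legal_moves, apply_move, evaluate):
    # React to the input position: pick the move whose subtree scores best.
    return max(legal_moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     legal_moves, apply_move, evaluate))
```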

Limited memory further expanded the complexity and abilities of machine learning computing. This form of artificial intelligence understands the concept of storing previous data and using it to make accurate predictions for the future. Through a series of trial and error efforts, limited memory allows the program to perfect tasks typically completed by humans, such as driving a car.

AI COULD GO 'TERMINATOR,' GAIN UPPER HAND OVER HUMANS IN DARWINIAN RULES OF EVOLUTION, REPORT WARNS

Limited memory AI is trained by scientists to memorize a data set before an environment is built in which it has the ability to correct mistakes and have approved behaviors reinforced. The AI then perfects its ability to complete the task during the training phase by receiving feedback from either human or environmental stimuli. That feedback is then reviewed and used to make better decisions in the future.
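As an illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch, assuming a toy environment object with reset, actions, and step methods invented for this example; real self-driving systems use far richer models and data.

```python
import random

def train(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn action values from reward feedback, episode by episode."""
    q = {}  # (state, action) -> learned value: the "limited memory"
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < epsilon:   # occasionally try something new
                action = random.choice(actions)
            else:                           # otherwise exploit past feedback
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            next_state, reward, done = env.step(action)
            best_next = max((q.get((next_state, a), 0.0)
                             for a in env.actions(next_state)), default=0.0)
            # Reinforce approved behaviour: feedback nudges the stored value.
            q[(state, action)] = (1 - alpha) * q.get((state, action), 0.0) \
                                 + alpha * (reward + gamma * best_next)
            state = next_state
    return q
```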

Elon Musk is the founder and CEO of Tesla, a leading self-driving vehicles company. (AP Photo/Susan Walsh, File)

A perfect example of limited memory artificial intelligence is self-driving cars. The model examines the speed and direction of other cars in the present moment to make the best decisions on the road. The training phase of self-driving cars also considers traffic lights, road structures, lane markings, and how human drivers act on the road. Companies like Tesla are leading the way in producing and marketing AI-controlled self-driving vehicles at scale.

Theory of mind AI systems are still being researched and developed by computer scientists and may represent the future of machine learning. The general concept of the theory of mind is that an AI system will be able to react in real time to the emotions and mental characteristics of the human entity it encounters. Scientists hope that AI can complete these tasks by understanding the emotions, beliefs, thinking, and needs of individual humans.

This future AI system will need the ability to look past the data and understand that humans often make decisions based not purely on sound logic or fact, but on their mental state and overall emotions. Machine-learning systems will therefore need to adjust their decisions and behavior according to the mental state of the humans they interact with.

GOOGLE SCRAMBLES FOR NEW SEARCH ENGINE AS AI CREEPS IN: REPORT

The development of self-aware artificial intelligence is not possible with today's technology but would represent a massive achievement for machine learning science. (Cyberguy.com)

While this is not possible at the moment, if theory of mind AI ever becomes a reality, it would be one of the greatest developments in artificial intelligence computing in decades.

The final stage of the development of artificial intelligence is when the machine has the ability to become self-aware and form its own identity. This form of AI is not at all possible today but has been used in science fiction media for decades to scare and intrigue the public. In order for self-aware AI to become possible, scientists will need to find a way to replicate consciousness into a machine.


The ability to map human consciousness is a goal far beyond simply plugging inputs into an AI program or using a dataset to predict future outcomes. It represents the pinnacle of machine learning technology and may fundamentally shift how humans interact with themselves and the world.


Your Firm and Your AI (Artificial Intelligence) – CPAPracticeAdvisor.com

It must feel good to have another tax season in the record books. While you worked heads-down, the Artificial Intelligence (AI) world advanced rapidly. Generative AI tools expanded notably, and various competitors released their offerings. My colleague Brian Tankersley and I have recorded five podcasts on AI topics: ChatGPT4, Microsoft AI, DALL-E & AI Competitors, AI Truthiness & Hallucinations, and Large Language Model (LLM) considerations.

Please ensure you have checked out these AI podcasts and our podcast discussions of various other products at The Technology Lab. We believe the significant CPA firm publishers will extend the tools you routinely use with AI capabilities in 2023 and beyond. Further, your technology stack should include AI for all the right reasons in all the right places.

In the last few months, I've been reflecting on how your firm could use AI to improve business development, client experience, staff retention & recruitment, and other operations in your CPA firm. We suspect that a single client portal will be critical to creating a focused LLM, even though Microsoft or Google would like to own the AI insights for your firm.

If you have not signed up for a ChatGPT account from OpenAI AND asked permission for early entry into Microsoft Bing, you should stop reading this article and do those two things now. I suspect you will find the paid version of ChatGPT4 worth the $20/month fee since the tool will readily save you time. Beyond these two tools, there are additional AI products that could be useful to your practice, but I don't want you to attempt to do too many things at once.

What Are the AI Trends That Can Affect Your Practice and Tech Stack?

As artificial intelligence (AI) continues to advance, CPA firms must stay informed about the latest trends and developments. Several AI trends can significantly impact a practice and tech stack, which we have discussed in previous columns.

First, the sky is the limit for applying AI in your firm. The tools can do all that you imagine and more, with some limitations. That may sound hyperbolic, but consider that the following could all be done with AI.

Intelligent automation is a trend that combines automation and AI to help streamline repetitive tasks. This technology can help reduce the staff's workload and increase efficiency by allowing them to focus on more complex tasks. Another trend is predictive analytics, which uses AI-powered algorithms to analyze data, identify trends, and provide insights that can inform business decisions.

Natural language processing (NLP) is another AI trend that can help automate document review and analysis processes, allowing firms to manage large volumes of data more efficiently. Machine learning is another trend that can help firms automate processes and make more accurate predictions by analyzing data and detecting patterns. We have seen machine learning in various Client Accounting Services (CAS) tools.

Finally, blockchain technology is becoming more widely used in financial transactions, and AI can help automate and streamline these processes. While these trends can offer many benefits to CPA firms, it is essential to carefully evaluate and test any new technology before implementing it in the firm to ensure that it meets the needs of the practice and its clients. By staying current on these trends and incorporating them into their tech stack, CPA firms can improve efficiency, reduce costs, and provide better service to their clients.

In addition to these trends, CPA firms must pay close attention to cybersecurity. As AI technology continues to evolve in this field, it is essential to implement robust security measures to protect sensitive data and ensure client confidentiality.

What Are Valid Concerns About the Technology?

First, any work product or correspondence from your firm represents you and the partners. You have liability for incorrect recommendations. Further, we expect a level of professional embarrassment from improperly reviewed work.

Additionally, intellectual property violations are certainly possible with a tool of this type. Imagine, if you will, research done by a junior team member that is not carefully reviewed. A few lines may read fine, but the further AI tools go, the more they make stuff up or hallucinate. Finally, AI tools can generate results that the original programmers don't understand and can't predict. That's not to say the results aren't correct, but there are no clear, documented steps on how the AI derived that result.

Further, countries such as Italy have temporarily banned ChatGPT, a restriction that may spread to other jurisdictions, such as the European Union. Visionaries in technology have also signed a document asking for a six-month moratorium on development. I have said in previous columns that all technology can be used for good or bad, and AI is no exception. Bad actors have already demonstrated how to use the platform to write new, original zero-day attacks. But, again, note the cybersecurity trend above. It is clear that the current AI tools have bias. I also suspect many competitors were caught flat-footed and want time to catch up.

So, What Is the Outlook for AI?

Consider every area of your practice that is routine, mundane, or repetitive. You can likely cut work hours in this area significantly. I can do the same work with AI assistance in about 25% of the time. One area of concern is that I don't want to lose creativity or originality by using this assistance. Because of that, I'm taking more time to think and sketch on a yellow pad to outline my ideas before structuring my queries and commands to AI tools like ChatGPT or Bing. In effect, I'm trying to train the AI engine like I would a staff assistant. It is working, but I learn more every day about how to ask my questions better. It would be even more helpful if I could pre-load supplemental data to help focus the LLM. I've been refining my work methodologies to teach you and others how to leverage AI in your technology stack. While I refine my techniques, I encourage you to spend some time with generative AI tools now!


‘Artificial intelligence will outsmart humanity and take over world unless we act soon’ – The Mirror

Sunday Mirror columnist Rachael Bletchly says we should be alarmed at the development of artificial intelligence and stop ignoring its warning signs before it is too late

He's got Daniel Craig's pout, Sean Connery's swagger and the sex appeal of Pierce Brosnan in his prime. So when I saw this photo of the new James Bond, I thought he was too good to be true.

And I was right.

Because this 007 had been created by an AI from a list of ideal qualities, just like the perfect computer-designed fashion models that top brands are using in advertising campaigns.

This week, a German magazine got an exclusive interview with paralysed Michael Schumacher by using an AI chatbot programmed to respond like he might. The F1 legend's family reportedly plans to sue the title.

Elsewhere, deepfake images of everyone from the Pope to Donald Trump show how easily artificial intelligence can fool us trusting humans.

But the steely stare of that phoney 007 scares the living daylights out of me. Because AI now has the ability to fulfil the dreams of every baddie Bond has thwarted. And unless we act soon, it will out-smart humanity, take over the world and destroy us all.

Think I'm being over-dramatic?

Surely AI's a force for good, helping solve crimes, cure cancer and transform industry?

It is, until it decides to make us redundant and put our DNA to better use. So we can no longer sleepwalk towards AI Armageddon while ignoring the warning signs. A Belgian father of two died by suicide after an AI chatbot fuelled his climate change fears and urged him to end it all.

A US author claims his bot told him to ditch his wife after announcing: "I'm in love with you."

And the boss of Google admits he lies awake worrying after his own AI taught itself to speak a foreign language without being programmed to do so.

Elon Musk and other Silicon Valley brains have called for a six-month halt to AI research while a variety of safety protocols are designed.

Yet our Government seems far less concerned and thinks regulatory responsibility can mostly be left to the industry.

This afternoon, our phones will all go off with a test alert for future emergencies when we really should be getting alarmed at the imminent AI one.

As Eliezer Yudkowsky, a renowned expert at California's Machine Intelligence Research Institute, recently explained, AI hasn't yet been taught to care about human life, and eventually it will recognise that we are made of atoms it can use for something else.

His solution? Shut it all down. "The moratorium on AI needs to be indefinite and worldwide," he says. "If we continue on this course, everyone will die."


WEEKEND READING: Artificial intelligence, ChatGPT and ‘AIgiarism … – Higher Education Policy Institute

What is artificial intelligence?

The definition of AI has changed over time, but put simply it is "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation".

We take AI tools for granted every day, from search engines to digital voice assistants to facial recognition on our phones.

Generative AI is a field of artificial intelligence that has taken the world by storm. Its goal is to mimic the way humans think and communicate. The best known example of generative AI is ChatGPT, developed by OpenAI. ChatGPT is based on the GPT (Generative Pre-trained Transformer) architecture, a type of deep neural network designed to generate human-like text. It has been trained on massive amounts of text data (including books, articles and web pages) to understand and generate responses to a wide range of natural language inputs (thanks, ChatGPT, for that description). There has been a proliferation of generative AI writing apps, with enterprise software companies like Microsoft and Google (as well as a host of start-up companies) implementing the technology for a variety of uses.
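For readers who want to try this generation loop directly, here is a minimal sketch using the open-source Hugging Face transformers library, with the small GPT-2 model as a stand-in (ChatGPT itself is only reachable through OpenAI's hosted service):

```python
# Sketch: generate text with a small GPT-style model. Requires
# `pip install transformers torch`; gpt2 is a stand-in, not ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI is a field of artificial intelligence that",
                   max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```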

What's the problem?

Generative AI writing apps offer exciting possibilities, but their human-like responses are causing much concern in higher education, as students may use them to write assignments.

If an academic can't tell whether a student's assignment is all their own work, it raises questions about plagiarism and academic integrity in the current higher education assessment model. So how is the technology industry addressing the issue of "AIgiarism"?

OpenAI has updated its usage policy, stating that it doesn't allow use for fraudulent or deceptive activity, including plagiarism and academic dishonesty (although it's questionable how this could be enforced in reality).

The company is also reportedly working on technology to statistically watermark the outputs, making them detectable as AI-generated text. However, these preventative measures are not being adopted across the industry. Instead, the focus of other generative AI writing apps seems to be more on promoting inbuilt plagiarism checkers, claiming the output won't be flagged by plagiarism detection tools.
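OpenAI has not published how its watermark would work, but one scheme discussed in the research literature biases generation toward a pseudo-random "green list" of tokens derived from each preceding token; a detector that knows the hashing rule then tests whether green tokens appear improbably often. Below is a toy, word-level sketch of the detection side only; real schemes operate on model token IDs and use a proper statistical test.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Pseudo-randomly assign half of all words to the "green list",
    # seeded by the preceding word, mirroring what a watermarking
    # generator would have favoured during sampling.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(a, b) for a, b in pairs) / len(pairs)

# Human text should score near 0.5; watermarked output, well above it.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```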

Is AI bad for learning?

Does that mean AI is a negative development for higher education? Not from Kortext's perspective. The QAA Briefing on AI and academic integrity doesn't advise banning generative AI writing apps. Instead it suggests higher education providers should use this as an opportunity to rethink assessment design, engaging students and staff in the development of authentic and innovative assessment methods.

ChatGPT (and other AI tools) can provide opportunities to encourage students to think critically, be more reflective, participate in group discussions, use problem-solving skills and engage in assessments that are more relevant to real-life scenarios in the workplace. Indeed, there is concern in the higher education sector about the use of AI detection tools; they can return false positives and their performance can vary across disciplines. At this stage, it's important not to be over-reliant on them, but instead to regard these classifiers as add-ons whose conclusions require critical analysis.

The future is bright

For some students, plagiarism becomes a shortcut when they don't have enough time to meet a deadline; it can be tempting to make a bad decision when you're under pressure. Kortext's Arcturus smart study platform uses AI technologies to enable students to do more in their limited time. In our eTextbooks, students can search, highlight text, make notes, add bookmarks and translate text intelligently into 100+ languages. By making tasks like these quicker and easier, we're saving students time and enabling them to focus on deeper learning.

Our collaborative AI technologies will support academics to drive student engagement with their course content, by creating learning objectives from adopted learning content and by personalizing student learning journeys. Our engagement insights help academics keep track of students' interactions with all content in workbooks, allowing them to diagnose at an early stage where more support is needed. Kortext is working actively with the higher education sector to develop more AI tools to improve the student experience.

AI technologies have the potential to transform higher education, and we're excited about the possibilities that lie ahead.


Debunking the Myth: Is Deep Learning Necessary for Artificial … – SciTechDaily

Recent research demonstrates that brain-inspired shallow feedforward networks can efficiently learn non-trivial classification tasks, achieving the same classification success rates as deep learning architectures but with less computational complexity. Efficient learning on shallow architectures is connected to efficient dendritic tree learning, which incorporates findings from earlier experimental research on sub-dendritic adaptation and anisotropic properties of neurons. This discovery suggests the potential for the development of unique hardware for fast and efficient shallow learning with reduced energy consumption. (Representation of a deep learning neural network tree.)

Deep learning appears to be a key magical ingredient for the realization of many artificial intelligence tasks. However, these tasks can be efficiently realized by the use of simpler shallow architectures.

Shallow feedforward networks can efficiently learn non-trivial classification tasks with reduced computational complexity compared to deep learning architectures, according to research published in Scientific Reports. This finding may direct the development of unique, energy-efficient hardware for shallow learning.

The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to address solutions for more complex classification tasks, more advanced neural network architectures consisting of numerous feedforward (consecutive) layers were later introduced. This is the essential component of the current implementation of deep learning algorithms. It improves the performance of analytical and physical tasks without human intervention, and lies behind everyday automation products such as the emerging technologies for self-driving cars and autonomous chatbots.

Scheme of Deep Machine Learning consisting of many layers (left) vs. Shallow Brain Learning consisting of a few layers with enlarged width (right). Credit: Prof. Ido Kanter, Bar-Ilan University

The key question driving new research published today (April 20) in the journal Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity. "A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning," said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. "Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption."

"We've shown that efficient learning on an artificial shallow architecture can achieve the same classification success rates that previously were achieved by deep learning architectures consisting of many layers and filters, but with less computational complexity," said Yarden Tzach, a PhD student and contributor to this work. "However, the efficient realization of shallow architectures requires a shift in the properties of advanced GPU technology, and future dedicated hardware developments," he added.
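The architectural contrast is easy to make concrete. The sketch below (in PyTorch, with arbitrary layer sizes invented for illustration, not the architectures from the study) builds a deep stack of narrow layers and a shallow network of enlarged width, then compares parameter counts:

```python
import torch.nn as nn

def deep_net(d_in=784, d_hidden=64, n_layers=8, d_out=10):
    """Many consecutive narrow layers, as in conventional deep learning."""
    layers = [nn.Linear(d_in, d_hidden), nn.ReLU()]
    for _ in range(n_layers - 1):
        layers += [nn.Linear(d_hidden, d_hidden), nn.ReLU()]
    return nn.Sequential(*layers, nn.Linear(d_hidden, d_out))

def shallow_wide_net(d_in=784, d_width=2048, d_out=10):
    """A few layers with enlarged width, as in the brain-inspired scheme."""
    return nn.Sequential(nn.Linear(d_in, d_width), nn.ReLU(),
                         nn.Linear(d_width, d_out))

n_params = lambda m: sum(p.numel() for p in m.parameters())
print("deep:", n_params(deep_net()), "shallow:", n_params(shallow_wide_net()))
```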

The efficient learning on brain-inspired shallow architectures goes hand in hand with efficient dendritic tree learning, which is based on previous experimental research by Prof. Kanter on sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, like different spike waveforms, refractory periods and maximal transmission rates.

For years, brain dynamics and machine learning were researched independently; recently, however, brain dynamics has been revealed as a source of new types of efficient artificial intelligence.

Reference: "Efficient shallow learning as an alternative to deep learning", 20 April 2023, Scientific Reports. DOI: 10.1038/s41598-023-32559-8


Texas House Passes Bill to Establish Artificial Intelligence Advisory … – The Texan

Austin, TX - A recent bill passed by the House would establish an advisory council to monitor the rise and adoption of artificial intelligence (AI), which has become an increasing concern for present and future generations of Texans.

Rep. Giovanni Capriglione introduced House Bill (HB) 2060 in an effort to study and monitor artificial intelligence systems developed, employed, or procured by state agencies.

The council would include seven members: one from each legislative chamber, an executive director, and four members appointed by the governor. Those four appointed members will include an ethics professor, an AI systems professor, an expert in law enforcement, and an expert in constitutional and legal rights.

Additionally, the state council will produce a report on whether an AI system has been used in a state capacity as an "automated final decision system" that makes final decisions, judgments, or conclusions without human intervention, or an "automated support decision system" that provides information to inform the final decision, judgment, or conclusion of a human decision maker.

The State Bar of Texas has provided insight into how AI is changing the way law practices and legal judgments are being decided, including how the use of AI for predictive analytics remains one of the biggest attractions for lawyers and their clients.

Even proponents of AI's use in analyzing vast amounts of data and predicting such things as likely verdict ranges or a judge's predispositions based on past rulings agree that the technology has its limitations.

The recent phenomenon of ChatGPT has taken hold of the cultural and political consciousness for its potential as a disruptive technology. ChatGPT is a large language model (LLM) that was developed by OpenAI to create human-like conversations through an AI chatbot.

OpenAI's founder Sam Altman recently said in an interview that "we are a little bit scared" of ChatGPT's success and that more regulation is important to deter its possible downsides.

LLMs use a neural network of informational data to create a probabilistic model of patterned language. This means that when someone asks a question in ChatGPT, the AI model creates a coherent response one word at a time, based on the overall probability that the next word in the sentence is correct.

Neural networks are an adaptive method of AI learning, modeled on neurons in the human brain, that teach a computer system to process data. These systems create relationship models between information from inputs and outputs of data, utilizing a technique called deep learning to take unstructured information from inputs and make models of probable outputs.

ChatGPT, an LLM that utilizes a neural network, does not create new information but rather intuits it. It is thus not true AI because it does not think as human beings do, but instead uses algorithmic predictions to mimic human intelligence in its responses.
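The "one word at a time, by probability" idea can be demonstrated with a toy bigram model: count which word follows which in some training text, then repeatedly sample the next word in proportion to those counts. Real LLMs condition on long contexts with a neural network, but the generation loop below is the same in spirit.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for every word, which words follow it and how often."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, word: str, n: int = 10) -> str:
    """Emit one word at a time, sampled by next-word probability."""
    out = [word]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        out.append(nxt)
    return " ".join(out)

model = train_bigrams("the council will study ai and the council will report")
print(generate(model, "the"))
```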

Another task of the council is to review the effect of automated decision systems on "the constitutional or legal rights, duties, or privileges of the residents of this state." This relates to AI alignment, the process of making sure an AI does what its human creators intend it to do.

In a recent interview with Tucker Carlson, Elon Musk sounded the alarm about what could happen with a misaligned AI system.

"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," Musk said, "in the sense that it has the potential of civilization destruction."

An important aspect of the Texas AI advisory council will be to assess the biases that might be present in AI systems. AI bias is a well-documented occurrence; for example, the Manhattan Institute notes that OpenAI's content moderation system is "more permissive of hateful comments made about conservatives than the exact same comments made about liberals."

The European Union has also expressed concerns related to AI and instituted its own research commission to study its potential benefits and pitfalls.

Despite warnings from those like Musk and Altman, independent creators and AI developers have been using OpenAI's technology to create a plethora of unique tools. Everything from text-to-speech vocalization and photo editing to relationship matchmaking and recipe and meal plan creation can utilize AI, which is still in just the beginning stage of what is possible with the technology.


Impacts of artificial intelligence on social interactions – CTV News

Published April 22, 2023 11:00 a.m. ET

Updated April 22, 2023 11:08 a.m. ET


A new study from Cornell University published in Scientific Reports has found that while generative artificial intelligence (AI) can improve efficiency and positivity, it can also impact the way that people express themselves and see others in conversations.

"Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension," said Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), in a press release. "We do not live and work in isolation, and the systems we use impact our interactions with others."

In a study where pairs were evaluated on their conversations, some of which used AI-generated responses, researchers found that those who used smart replies were perceived as less co-operative, and their partner felt less affiliation toward them.

A smart reply is a tool created to help users respond to messages faster and to make it easier to reply to messages on devices with limited input capabilities, according to Google Developers.
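As a rough illustration of the concept (not Google's actual system, which uses neural models), a smart-reply suggester can be sketched as ranking a handful of canned replies against the incoming message; the canned replies here are invented examples.

```python
CANNED_REPLIES = ["Sounds good!", "I agree with that.",
                  "Can we talk about this later?", "Thanks for letting me know."]

def tokenize(text: str) -> set:
    return {w.strip("!?.,") for w in text.lower().split()}

def suggest(message: str, k: int = 3) -> list:
    """Rank canned replies by word overlap with the incoming message."""
    words = tokenize(message)
    return sorted(CANNED_REPLIES,
                  key=lambda reply: len(words & tokenize(reply)),
                  reverse=True)[:k]

print(suggest("Do you agree we should talk about the policy later?"))
```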

"While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive," said postdoctoral researcher Jess Hohenstein in a press release. "This suggests that by using text-generating AI, you're sacrificing some of your own personal voice."

Here's how the study worked.

One of the study researchers created a smart-reply platform, which the group called "Moshi", Japanese for "hello".

The participants, 219 pairs of people, were asked to discuss a policy issue. Each pair was assigned to one of three conditions: both participants could use smart replies, only one could, or neither could.

Smart replies made up 14.3 per cent of sent messages, and those who used them communicated more efficiently, used more positive emotional language, and received more positive evaluations from their partners.

Although the results of the use of smart replies were largely positive, researchers noticed something else.

Participants who were suspected of responding with smart replies were evaluated in a more negative light than those thought to have written their own replies. These findings are aligned with common assumptions about the negative impacts of using AI, according to the researchers.

The researchers took things further and conducted a second experiment. This time, 299 pairs discussed a policy issue, but under four conditions: no smart replies, using default replies from Google, using smart replies that had a positive emotional tone, or using smart replies with an emotionally negative tone.

"I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are," Hohenstein said, adding that this research demonstrates the overall suspicion that people seem to have around AI.

The researchers observed that unintended social consequences can crop up as a result of AI.

"This suggests that whoever is in control of the algorithm may have influence on people's interactions, language and perceptions of each other," said Jung.



Woman’s bowel cancer spotted by artificial intelligence – BBC

21 April 2023

The Colo-Detect study uses AI to flag up areas of concern during colonoscopies

A woman who was part of a study using artificial intelligence (AI) to detect bowel cancer is free of the disease after it was found and removed.

Jean Tyler, 75, from South Shields, took part in a study called Colo-Detect as part of a trial at 10 NHS Trusts.

In the trial the AI flags up tissue potentially of concern to the medic carrying out the colonoscopy, which could be missed by the human eye.

About 2,000 patients from 10 NHS trusts have been recruited for the trial.

Jean Tyler - pictured with husband Derek - had surgery and has since recovered

The AI detected a number of polyps and an area of cancer on Mrs Tyler's colonoscopy about a year ago after she agreed to be part of the trial.

She then underwent surgery at South Tyneside District Hospital and has since recovered.

"I had fantastic support, it was unbelievable," she said.

"I had about seven or eight visits last year and I was so well looked after.

"I always say yes to these research projects because I know that they can make things a lot better for everybody."

Gastroenterology consultant Professor Colin Rees, based at Newcastle University, led the study alongside a team of colleagues working in South Tyneside and Sunderland NHS Trust.

The trial also includes North Tees and Hartlepool NHS Foundation Trust, South Tees NHS Foundation Trust, Northumbria NHS Foundation Trust and Newcastle Upon Tyne Hospitals NHS Foundation Trust.

Professor Colin Rees led the study alongside a team of colleagues working in South Tyneside and Sunderland NHS Trust

Professor Rees described it as "world-leading" in improving detection, adding AI was likely to become "a major tool used by medicine in the coming years".

The findings will be studied to see how the technology can help save lives from bowel cancer, the second biggest cancer killer in the UK, claiming around 16,800 lives a year.

The results are expected to be published in the autumn.


How artificial intelligence is matching drugs to patients – BBC

17 April 2023


Dr Talia Cohen Solal, left, is using AI to help her and her team find the best antidepressants for patients

Dr Talia Cohen Solal sits down at a microscope to look closely at human brain cells grown in a petri dish.

"The brain is very subtle, complex and beautiful," she says.

A neuroscientist, Dr Cohen Solal is the co-founder and chief executive of Israeli health-tech firm Genetika+.

Established in 2018, the company says its technology can best match antidepressants to patients, to avoid unwanted side effects, and make sure that the prescribed drug works as well as possible.

"We can characterise the right medication for each patient the first time," adds Dr Cohen Solal.

Genetika+ does this by combining the latest in stem cell technology - the growing of specific human cells - with artificial intelligence (AI) software.

From a patient's blood sample its technicians can generate brain cells. These are then exposed to several antidepressants, and recorded for cellular changes called "biomarkers".

This information, taken with a patient's medical history and genetic data, is then processed by an AI system to determine the best drug for a doctor to prescribe and the dosage.
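Genetika+ has not published its model, but the matching step as described can be sketched, under invented assumptions, as a classifier trained on past response data that scores each candidate drug from the patient's biomarker readouts plus clinical features:

```python
# Hedged sketch only: feature layout, drug list and training data are
# invented placeholders, not Genetika+'s actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
DRUGS = ["drug_a", "drug_b", "drug_c"]          # hypothetical candidates

# Toy history: 4 cell biomarkers + 2 clinical features per past drug trial.
X_train = rng.normal(size=(200, 6))
y_train = rng.integers(0, 2, size=200)          # 1 = patient responded

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def rank_drugs(biomarkers_by_drug: dict, clinical: np.ndarray) -> list:
    """Order candidate drugs by predicted probability of response."""
    scores = {d: model.predict_proba(
                  [np.concatenate([biomarkers_by_drug[d], clinical])])[0, 1]
              for d in DRUGS}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

patient_cells = {d: rng.normal(size=4) for d in DRUGS}  # per-drug biomarkers
print(rank_drugs(patient_cells, rng.normal(size=2)))
```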

Although the technology is currently still in the development stage, Tel Aviv-based Genetika+ intends to launch commercially next year.


The global pharmaceutical sector had revenues of $1.4 trillion in 2021

An example of how AI is increasingly being used in the pharmaceutical sector, the company has secured funding from the European Union's European Research Council and European Innovation Council. Genetika+ is also working with pharmaceutical firms to develop new precision drugs.

"We are in the right time to be able to marry the latest computer technology and biological technology advances," says Dr Cohen Solal.

Dr Sailem, a senior lecturer in biomedical AI and data science at King's College London, says that AI has so far helped with everything "from identifying a potential target gene for treating a certain disease, and discovering a new drug, to improving patient treatment by predicting the best treatment strategy, discovering biomarkers for personalised patient treatment, or even prevention of the disease through early detection of signs for its occurrence".


Yet fellow AI expert Calum Chace says that the take-up of AI across the pharmaceutical sector remains "a slow process".

"Pharma companies are huge, and any significant change in the way they do research and development will affect many people in different divisions," says Mr Chace, who is the author of a number of books about AI.

"Getting all these people to agree to a dramatically new way of doing things is hard, partly because senior people got to where they are by doing things the old way.

"They are familiar with that, and they trust it. And they may fear becoming less valuable to the firm if what they know how to do suddenly becomes less valued."

However, Dr Sailem emphasises that the pharmaceutical sector shouldn't be tempted to race ahead with AI, and should employ strict measures before relying on its predictions.

"An AI model can learn the right answer for the wrong reasons, and it is the researchers' and developers' responsibility to ensure that various measures are employed to avoid biases, especially when trained on patients' data," she says.

Hong Kong-based Insilico Medicine is using AI to accelerate drug discovery.

"Our AI platform is capable of identifying existing drugs that can be re-purposed, designing new drugs for known disease targets, or finding brand new targets and designing brand new molecules," says co-founder and chief executive Alex Zhavoronkov.


Alex Zhavoronkov says that using AI is helping his firm to develop new drugs more quickly than would otherwise be the case

Its most developed drug, a treatment for a lung condition called idiopathic pulmonary fibrosis, is now being clinically trialled.

Mr Zhavoronkov says it typically takes four years for a new drug to get to that stage, but that thanks to AI, Insilico Medicine achieved it "in under 18 months, for a fraction of the cost".

He adds that the firm has another 31 drugs in various stages of development.

Back in Israel, Dr Cohen Solal says AI can help "solve the mystery" of which drugs work.


News coverage of artificial intelligence reflects business and government hype, not critical voices – The Conversation Indonesia

The news media plays a key role in shaping public perception about artificial intelligence. Since 2017, when Ottawa launched its Pan-Canadian Artificial Intelligence Strategy, AI has been hyped as a key resource for the Canadian economy.

With more than $1 billion in public funding committed, the federal government presents AI as having potential that must be harnessed. Publicly-funded initiatives, like Scale AI and Forum IA Québec, exist to actively promote AI adoption across all sectors of the economy.

Over the last two years, our multi-national research team, Shaping AI, has analyzed how mainstream Canadian news media covers AI. We analyzed newspaper coverage of AI between 2012 and 2021 and conducted interviews with Canadian journalists who reported on AI during this time period.

Our report found news media closely reflects business and government interests in AI by praising its future capabilities and under-reporting the power dynamics behind these interests.

Our research found that tech journalists tend to interview the same pro-AI experts over and over again, especially computer scientists. As one journalist explained to us: "Who is the best person to talk about AI, other than the one who is actually making it?" When a small number of sources informs reporting, news stories are more likely to miss important pieces of information or be biased.

Canadian computer scientists and tech entrepreneurs Yoshua Bengio, Geoffrey Hinton, Jean-François Gagné and Joëlle Pineau are disproportionately used as sources in mainstream media. The name of Bengio, a leading expert in AI, pioneer in deep learning and founder of the Mila AI institute, turns up nearly 500 times in 344 different news articles.

Only a handful of politicians and tech leaders, like Elon Musk or Mark Zuckerberg, have appeared more often across AI news stories than these experts.

Few critical voices find their way into mainstream coverage of AI. The most-cited critical voice against AI is late physicist Stephen Hawking, with only 71 mentions. Social scientists are conspicuous in their absence.

Bengio, Hinton and Pineau are computer science authorities, but like other scientists they're not neutral and free of bias. When interviewed, they advocate for the development and deployment of AI. These experts have invested their professional lives in AI development and have a vested interest in its success.

Most AI scientists are not only researchers, but are also entrepreneurs. There is a distinction between these two roles. While a researcher produces knowledge, an entrepreneur uses research and development to attract investment and sell their innovations.

The lines between the state, the tech industry and academia are increasingly porous. Over the last decade in Canada, state agencies, private and public organizations, researchers and industrialists have worked to create a profitable AI ecosystem. AI researchers are firmly embedded in this tightly-knit network, sharing their time between publicly-funded labs and tech giants like Meta.

AI researchers occupy key positions of power in organizations that promote AI adoption across industries. Many hold, or have held, decision-making positions at the Canadian Institute for Advanced Research (CIFAR) an organization that channels public funding to AI Research Chairs across Canada.

When computer scientists make their way into the news cycle, they do so not only as AI experts, but also as spokespeople for this network. They bring credibility and legitimacy to AI coverage because of their celebrated expertise. But they are also in a position to promote their own expectations about the future of AI, with little to no accountability for the fulfilment of these visions.

The AI experts quoted in mainstream media rarely discussed the technicalities of AI research. Machine learning techniques, colloquially known as AI, were deemed too complex for a mainstream audience. "There's only room for so much depth about technical issues," one journalist told us.

Instead, AI researchers use media attention to shape public expectations and understandings of AI. The recent coverage of an open letter calling for a six-month ban on AI development is a good example. News reports centred on alarmist tropes about what AI could become, citing "profound risks to society".

Bengio, who signed the letter, warned that AI has the potential to destabilize democracy and the world order.

These interventions shaped the discourse about AI in two ways. First, they framed AI debates according to alarmist visions of a distant future. Coverage of the open letter calling for a six-month break from AI development overshadowed real and well-documented harms from AI, like worker exploitation, racism, sexism, disinformation and the concentration of power in the hands of tech giants.

Second, the open letter casts AI research into a Manichean dichotomy: the bad version that no one "can understand, predict, or reliably control" and the good one, the so-called responsible AI. The open letter was as much about shaping visions of the future of AI as it was about hyping up responsible AI.

But according to AI industry standards, what is framed as responsible AI to date has consisted of vague, voluntary and toothless principles that cannot be enforced in corporate contexts. Ethical AI is often just a marketing ploy for profit and does little to eliminate the systems of exploitation, oppression and violence that are already linked to AI.

Our report proposes five recommendations to encourage reflexive, critical and investigative journalism in science and technology, and pursue stories about the controversies of AI.

1. Promote and invest in technology journalism. Be wary of economic framings of AI and investigate other angles that are typically left out of business reporting, like inequalities and injustices caused by AI.

2. Avoid treating AI as a prophecy. The expected realizations of AI in the future must be distinguished from its real-world accomplishments.

3. Follow the money. Canadian legacy media has paid little attention to the significant amount of governmental funding that goes into AI research. We urge journalists to scrutinize the networks of people and organizations that work to construct and maintain the AI ecosystem in Canada.

4. Diversify your sources. Newsrooms and journalists should diversify their sources of information when it comes to AI coverage. Computer scientists and their research institutions are overwhelmingly present in AI coverage in Canada, while critical voices are severely lacking.

5. Encourage collaboration between journalists and newsrooms and data teams. Co-operation among different types of expertise helps to highlight the social and technical considerations of AI. Without one or the other, AI coverage is likely to be deterministic, inaccurate, naive or overly simplistic.

To be reflexive and critical of AI does not mean to be against the development and deployment of AI. Rather, it encourages the news media and its readers to question the underlying cultural, political and social dynamics that make AI possible, and examine the broader impact that technology has on society and vice versa.

