Category Archives: Artificial Intelligence

Australian retail giants and police using artificial intelligence software Auror to catch repeat shoplifters – ABC News

Artificial intelligence (AI) is now part of everyday life in the modern world, even if we don't always know when and where.

While AI platforms like ChatGPT, Siri and Alexa are among the most well-known, major Australian retailers like Woolworths and Bunnings are also using AI, in the form of software called Auror.


Auror chief executive Phil Thomson says the software is used to catch shoplifters.

"There are different tools that a retailer can choose to use," he says.

"So, with an image, once that's uploaded into the platform, that can then be referenced across crimes reported today, to see if it's the same person who's committed those other offences."

Mr Thomson says the AI is powerful enough to spot crime and send alerts to security staff in real time, but only if it detects wrongdoing.

"For a general customer, they would have no interaction with Auror at all, so they wouldn't be impacted by it," he says.

Retailers have been experimenting with AI and facial recognition for a few years.

Bunnings and Kmart are currently being investigated by Australia's privacy watchdog for their use of another facial recognition software application.

But Mr Thomson says Auror works differently.

"We're not doing live facial recognition; it doesn't reference any parts of the internet at all," he says.

"This is just the information that's already been captured for a crime event that's happened in a store."

But emerging technologies expert Nicholas Davis, from the University of Technology Sydney, says retailers' use of AI could still be of concern, due to a lag in privacy laws.

"We don't have some of the nuance or the specifics about particularly sensitive types of data or combinations of datalike your license plate, plus what you've bought at the supermarket, plus which aisle you visited," Professor Davis says.

"The combinations of those things can be really important to someone. And yet retailers are using that kind of information all the time for different purposes, particularly for marketing.

"Retailers that are tracking you in store, when that reveals who you are or other aspects of you and then is shared outside that organisation that can be a breach of the Privacy Act."

Ultimately, the onus of privacy is on retailers rather than a software company.

"We're probably about 20 years behind where Europe and other countries are in terms of the rights that we have as consumers with regard to our private information,"Professor Davis says.

Organised retail crime, as well as petty shoplifting, is thought to be worth millions of dollars a day.

Criminal gangs often work across multiple retailers to steal certain items for resale.

In theory, Coles and Woolworths could use Auror to work together to catch a group of shoplifters, using the AI across a bigger data set.

"This information that's being captured by retailers isn't new, but they're just making it much easier to identifythe repeat people who are targeting them," Mr Thomson says.

Auror says it works with police around the country to help retailers provide evidence to investigators.

In 2020, the Australian Federal Police admitted that staff had trialled the controversial software Clearview AI, which "scrapes" images of people from social media and other parts of the internet.

The privacy commissioner later found the AFP had failed to comply with its privacy obligations in using the tool, and the US company had breached Australians' privacy.

But the ACT's Chief Police Officer Neil Gaughan says Auror is used in a different way.

"It basically is a substitute for what we would normally do in relation to going to a business and collecting the CCTV," he says.

"We're not using the AI or facial recognition capability, it's basically read-only for us."

It may save some legwork, but Deputy Commissioner Gaughan says it doesn't eliminate old-fashioned police work.

"It'd be very, very unusual for someone to go on a spree of shoplifting that isn't known to my officers," he says.

"The [Auror] vision has a photograph of someone who's allegedly committed a crime.

"Our officers then use their local knowledge and expertise to determine who that person is."

NSW Police says it uses Auror in a similar way.

"NSWPF has access to the Auror system and uses it for collecting intelligence relating to retail crime", it said in a statement.

While police may currently be on the receiving end of an AI product, Deputy Commissioner Gaughan says that is likely to change.

"The ability to use facial recognition to identify people involved in a serious crimeno doubt will happen.

"When we first started using DNA, many people thought that was the end of the world as weknew it.

"Now the courts accept it when it's done properly."


Not just a fad: Firm launches fund designed to capitalize on A.I. boom – CNBC

A major ETF firm provider is betting the artificial intelligence boom is just starting.

Roundhill Investments launched the Generative AI & Technology ETF (CHAT) less than 20 days ago. It's the first-ever exchange-traded fund designed to track companies involved in generative AI and other related technologies.

"These companies, we believe, are not just a fad. They're powering something that could be as ubiquitous as the internet itself," the firm's chief strategy officer, Dave Mazza, told "ETF Edge" this week. "We're not talking about hopes and dreams [or] some theme or fad that could happen 30 years in the future which may change the world."

Mazza notes the fund includes not just pure play AI companies like C3.ai but also large-cap tech companies such as Microsoft and AI chipmaker Nvidia.

Nvidia is the fund's top holding at 8%, according to the company website. Its shares are up almost 42% over the past two months. Since the beginning of the year, Nvidia stock has soared 169%.

"This [AI] is an area that's going to get a lot of attention," said Mazza.

His bullish forecast comes amid concerns AI is a price bubble that will pop and take down the Big Tech rally.

In a recent interview on CNBC's "Fast Money," Richard Bernstein Advisors' Dan Suzuki, a Big Tech bear since June 2021, compared the AI rally to the dot-com bubble in the late 1990s.

"People jump from narrative to narrative," the firm's deputy chief investment officer said on Wednesday. "I love the technology. I think the applications will be huge. That doesn't mean it's a good investment."

The CHAT ETF is up more than 8% since it started trading on May 18.


New superconducting diode could improve performance of quantum computers and artificial intelligence – Phys.org


A University of Minnesota Twin Cities-led team has developed a new superconducting diode, a key component in electronic devices, that could help scale up quantum computers for industry use and improve the performance of artificial intelligence systems. Compared to other superconducting diodes, the researchers' device is more energy efficient; can process multiple electrical signals at a time; and contains a series of gates to control the flow of energy, a feature that has never before been integrated into a superconducting diode.

The paper is published in Nature Communications.

A diode allows current to flow one way but not the other in an electrical circuit. It's essentially half of a transistor, the main element in computer chips. Diodes are typically made with semiconductors, but researchers are interested in making them with superconductors, which have the ability to transfer energy without losing any power along the way.
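The article does not give the device's own figures, but the superconducting-diode literature commonly quantifies this one-way behaviour by the asymmetry between the largest supercurrents the device can carry in each direction. As an illustrative aside (a standard definition in the field, not quoted from the paper), the diode efficiency is often written as:

\eta = \frac{I_c^{+} - \lvert I_c^{-} \rvert}{I_c^{+} + \lvert I_c^{-} \rvert}

where I_c^{+} and I_c^{-} are the forward and reverse critical currents; an ordinary, reciprocal junction gives \eta = 0, and \eta approaches 1 as the device nears ideal one-way conduction.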

"We want to make computers more powerful, but there are some hard limits we are going to hit soon with our current materials and fabrication methods," said Vlad Pribiag, senior author of the paper and an associate professor in the University of Minnesota School of Physics and Astronomy. "We need new ways to develop computers, and one of the biggest challenges for increasing computing power right now is that they dissipate so much energy. So, we're thinking of ways that superconducting technologies might help with that."

The University of Minnesota researchers created the device using three Josephson junctions, which are made by sandwiching pieces of non-superconducting material between superconductors. In this case, the researchers connected the superconductors with layers of semiconductors. The device's unique design allows the researchers to use voltage to control the behavior of the device.

Their device also has the ability to process multiple signal inputs, whereas typical diodes can only handle one input and one output. This feature could have applications in neuromorphic computing, a method of engineering electrical circuits to mimic the way neurons function in the brain to enhance the performance of artificial intelligence systems.

"The device we've made has close to the highest energy efficiency that has ever been shown, and for the first time, we've shown that you can add gates and apply electric fields to tune this effect," explained Mohit Gupta, first author of the paper and a Ph.D. student in the University of Minnesota School of Physics and Astronomy. "Other researchers have made superconducting devices before, but the materials they've used have been very difficult to fabricate. Our design uses materials that are more industry-friendly and deliver new functionalities."

The method the researchers used can, in principle, be used with any type of superconductor, making it more versatile and easier to use than other techniques in the field. Because of these qualities, their device is more compatible for industry applications and could help scale up the development of quantum computers for wider use.

"Right now, all the quantum computing machines out there are very basic relative to the needs of real-world applications," Pribiag said. "Scaling up is necessary in order to have a computer that's powerful enough to tackle useful, complex problems. A lot of people are researching algorithms and usage cases for computers or AI machines that could potentially outperform classical computers. Here, we're developing the hardware that could enable quantum computers to implement these algorithms. This shows the power of universities seeding these ideas that eventually make their way to industry and are integrated into practical machines."

In addition to Pribiag and Gupta, the research team included University of Minnesota School of Physics and Astronomy graduate student Gino Graziano and University of California, Santa Barbara researchers Mihir Pendharkar, Jason Dong, Connor Dempsey, and Chris Palmstrøm.

More information: Mohit Gupta et al, Gate-tunable superconducting diode effect in a three-terminal Josephson device, Nature Communications (2023). DOI: 10.1038/s41467-023-38856-0



Artificial intelligence helps doctors with new ways of detection … – WKRC TV Cincinnati

CINCINNATI (WKRC) - The next generation of medicine is now in use in the Tri-State. Artificial intelligence is increasing the odds that doctors won't miss what could be critical to a patient's survival. It's improving doctor-patient care.

Artificial intelligence, or AI, has been used for years in medicine, but lately it has helped in detection and discovery in new ways that can save patients' lives.

"It was just shocking because I have no risk factors other than being female," said Jenny Dermody, who is a breast cancer survivor.

She's cancer-free now. But when Dermody recently went for her annual mammogram, she admitted she got quite a scare.

"I did my mammogram because it was something that I always do for my wellness checkup, but it never crossed my mind that I was going to be a cancer patient," Dermody said.

Part of what helped to make her diagnosis is the next generation of medicine.

"There's been studies. There's a range, but for the average radiologist, the sensitivity for this has increased somewhere between five and six percent," said Dr. Anthony Antonoplos, a TriHealth radiologist. "So, the system that we use is something called ProFound AI. It's an AI algorithm that runs concurrently in the background and assists the radiologist in interpreting the 3D, or the tomosynthesis portion, of the screening mammogram."

The AI analyzes each image as it's brought up. "Markings are automatically generated where something appears out of the ordinary, therefore alerting the radiologist to pay special attention to those areas on that exam," Dr. Antonoplos said.

He said the AI program increases the proportion of accurate call-backs by decreasing the ones that turn out not to be necessary.

"When we can decrease those call-backs that generate anxiety, extra expense, appointments. People have very busy lives. When we can decrease that as well, that's a win-win," Dr. Antonoplos said.

He also said the next version of this AI will take it one step further. It will allow radiologists to compare previous mammograms with an algorithm that would help detect changes.


‘Artificial intelligence is the defining technology of our time’ – SWI swissinfo.ch in English

Catrin Hinkel, CEO of Microsoft Switzerland, is convinced that artificial intelligence (AI) is the next big step in how we interact with IT. However, this new technology will be a kind of co-pilot and will not replace human intelligence, she tells SWI swissinfo.ch at Microsoft's headquarters in Zurich.

This content was published on June 6, 2023

SWI swissinfo.ch: You arrived in Zurich from your native Germany in 2021 to head Microsoft's Swiss subsidiary. What surprised you most?

Catrin Hinkel: I was very impressed by the level of innovation and creativity in Switzerland and at Microsoft. The Swiss people have a long history of innovation and the team at Microsoft Switzerland is passionate about creating new and innovative solutions. I was also amazed by the strong cooperation between Microsoft and its partners in Switzerland.

Catrin Hinkel was born in Germany in 1969. After completing bilingual business studies at the University of Reutlingen in 1992, she worked for the global consulting firm Accenture. There she held a number of leadership roles, including that of Senior Managing Director for Cloud First Strategy and Consulting in Europe. She has been the CEO of Microsoft Switzerland since May 2021.

SWI: Microsoft employs over 1,000 people in Switzerland. What are the subsidiary's main tasks?

C.H.: As the CEO of Microsoft Switzerland, I'm responsible for the 600-strong Swiss team, which is in charge of marketing and sales in Switzerland. We work closely with our customers to support them in their digital journeys. In addition, Microsoft employs a further 400 people in Switzerland who are part of the international team.

SWI: What is the role of Microsoft's international team in Switzerland?

C.H.: The members of this team are attached to the various technology units in the Microsoft group and contribute to the development of new products at the international level. Both the Swiss team and Swiss customers benefit from the expertise of this international team, particularly in the fields of mixed and augmented reality.

SWI: Some companies such as Google, Amazon, Twitter and Microsoft have recently cut their workforce worldwide. What about Microsoft in Switzerland?

C.H.: We're not able to provide detailed figures. However, as a company operating in highly competitive and dynamic technology markets, we're obliged to adapt flexibly in order to meet our customers' requirements. This is the norm in our market. We're therefore taking on new recruits in areas where we're expanding and where we see a future; meanwhile, in sectors where our growth is weaker, we're positioning ourselves accordingly so as to remain agile.

SWI: To what extent are you affected by the shortage of IT specialists in Switzerland?

C.H.: The shortage of specialists is a serious problem, both in Switzerland and abroad, not just for Microsoft but also for our customers and our partners. To help solve the problem, we launched the Skills for Switzerland initiative in 2020. This has enabled us to boost the digital skills of more than 630,000 people in Switzerland. Organisations such as the human resource company Adecco, based in Zurich, and the CyberPeace Institute, in Geneva, are also taking part in this scheme. What's more, we are working on other projects with the association digitalswitzerland and retailer Migros.

SWI: The cloud market is booming and, according to the International Data Corporation (IDC), should exceed $11 billion (CHF10 billion) in Switzerland by 2026. How do you explain this growth?

C.H.: Thanks to the cloud services of a company like Microsoft, our customers can outsource their data processing and benefit from huge economies of scale and skills. In concrete terms, thanks to the cloud, a very large number of Swiss companies of all sizes have access to new technologies such as artificial intelligence at competitive costs. This means that companies can innovate as they wish; so, ultimately, it can be said that the cloud fuels innovation.

SWI: The fact that your client companies' data is sometimes stored abroad is a source of concern.

C.H.: Microsoft is a global company that serves both a local and international clientele, so we strive to provide our customers with the most appropriate solutions. In Switzerland, thanks to the presence of our four data centres, we can offer sound local solutions. This local offering has also enabled us to win the trust of Swiss companies that are subject to stringent local requirements. I'm thinking, for example, of banks of all sizes, which are highly regulated and supervised by the Swiss Financial Market Supervisory Authority (FINMA).

SWI: Nevertheless, some Swiss members of parliament are concerned that you may have access to sensitive customer data. How do you assuage their fears?

C.H.: With cloud services, we provide our customers with technological platforms. We're not at all interested in the data on these platforms. It's totally out of the question for us to use this data or pass it on to other companies. What's more, on our platforms, our customers' data is protected by encryption. What interests us, ultimately, is the democratisation of new technologies.

SWI: What's your view on technological developments such as blockchain, the metaverse and AI?

C.H.: When used properly, technology can make people's lives simpler, more efficient and more enjoyable, especially when it comes to carrying out routine tasks. Nevertheless, technology will always remain an aid, a kind of co-pilot, and will never replace real men and women.

As for AI, it's the defining technology of our time. It's also the next big step forward in the way we interact with IT. In a world that is increasingly complex economically, AI has the power to revolutionise many types of jobs.

SWI: What are your main AI applications?

C.H.: Our investment in AI spans our entire business, from Teams and Outlook to Bing and Xbox. Were already seeing considerable interest from our customers in Switzerland and are actively working on value cases. For example, our Copilot application can be used to quickly extract basic data from a 300-page annual report.

SWI: AI raises numerous ethical issues. Several countries are enacting laws to regulate its use.

C.H.: This is precisely why in 2018 Microsoft defined a series of ethical principles applicable to all our uses of AI. For instance, we exclude all bias based on race. We also rule out any applications that are not yet completely reliable and which, in case of malfunction, could harm individuals; I'm thinking of facial recognition, for example.

Edited by Samuel Jaberg. Translated from French by Julia Bassam.



AI should be licensed like medicines or nuclear power, Labour suggests – The Guardian


Exclusive: party calls for developers without a licence to be barred from working on advanced AI tools

The UK should bar technology developers from working on advanced artificial intelligence tools unless they have a licence to do so, Labour has said.

Ministers should introduce much stricter rules around companies training their AI products on vast datasets of the kind used by OpenAI to build ChatGPT, Lucy Powell, Labour's digital spokesperson, told the Guardian.

Her comments come amid a rethink at the top of government over how to regulate the fast-moving world of AI, with the prime minister, Rishi Sunak, acknowledging it could pose an existential threat to humanity.

One of the government's advisers on artificial intelligence also said on Monday that humanity could have only two years before AI is able to outwit people, the latest in a series of stark warnings about the threat posed by the fast-developing technology.

Powell said: "My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that's governing how they are built, how they are managed or how they are controlled."

She suggested AI should be licensed in a similar way to medicines or nuclear power, both of which are governed by arm's-length governmental bodies. "That is the kind of model we should be thinking about, where you have to have a licence in order to build these models," she said. "These seem to me to be the good examples of how this can be done."

The UK government published a white paper on AI two months ago, which detailed the opportunities the technology could bring, but said relatively little about how to regulate it.

Since then, a range of developments, including advances in ChatGPT and a series of stark warnings from industry insiders, have caused a rethink at the top of government, with ministers now hastily updating their approach. This week Sunak will travel to Washington DC, where he will argue that the UK should be at the forefront of international efforts to write a new set of guidelines to govern the industry.

Labour is also rushing to finalise its own policies on advanced technology. Powell, who will give a speech to industry insiders at the TechUK conference in London on 6 June, said she believed the disruption to the UK economy could be as drastic as the deindustrialisation of the 1970s and 1980s.

Keir Starmer, the Labour leader, is expected to give a speech on the subject during London Tech Week next week. Starmer will hold a shadow cabinet meeting in one of Google's UK offices next week, giving shadow ministers a chance to speak to some of the company's top AI executives.

Powell said that rather than banning certain technologies, as the EU has done with tools such as facial recognition, she thought the UK should focus on regulating the way in which they are developed.

Products such as ChatGPT are built by training algorithms on vast banks of digital information. But experts warn that if those datasets contain biased or discriminatory data, the products themselves can show evidence of those biases. This could have a knock-on effect, for example, on employment practices if AI tools are used to help make hiring and firing decisions.
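As a hedged, purely illustrative example of that knock-on effect (synthetic data and scikit-learn, not any real hiring system), the toy model below is trained on historically biased hiring labels and reproduces the disparity even when candidates in both groups are equally skilled:

```python
# Illustrative only: bias in historical training data propagates into a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(size=n)              # the only legitimate signal
group = rng.integers(0, 2, size=n)      # e.g. a protected attribute (0 or 1)

# Historical decisions: driven by skill, but with a penalty applied to group 1.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# A model trained on those historical labels, with the group attribute as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Fresh applicants with identical skill distributions in both groups.
skill_new = rng.normal(size=n)
for g in (0, 1):
    X_new = np.column_stack([skill_new, np.full(n, g)])
    print(f"predicted hire rate, group {g}: {model.predict(X_new).mean():.2f}")
# The model reproduces the historical penalty even though skill is identical.
```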

Powell said: "Bias, discrimination, surveillance: this technology can have a lot of unintended consequences."

She argued that by forcing developers to be more open about the data they are using, governments could help mitigate those risks. "This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one."

Matt Clifford, the chair of the Advanced Research and Invention Agency, which the government set up last year, said on Monday that AI was evolving much faster than most people realised. He said it could already be used to launch bioweapons or large-scale cyber-attacks, adding that humans could rapidly be surpassed by the technology they had created.

Speaking to TalkTV's Tom Newton Dunn, Clifford said: "It's certainly true that if we try and create artificial intelligence that is more intelligent than humans and we don't know how to control it, then that's going to create a potential for all sorts of risks now and in the future. So I think there's lots of different scenarios to worry about but I certainly think it's right that it should be very high on the policymakers' agendas."

Asked when that could happen, he added: "No one knows. There are a very broad range of predictions among AI experts. I think two years will be at the very most sort of bullish end of the spectrum."



AI poses national security threat, warns terror watchdog – The Guardian


Security services fear the new technology could be used to groom vulnerable people

The creators of artificial intelligence need to abandon their tech utopian mindset, according to the terror watchdog, amid fears that the new technology could be used to groom vulnerable individuals.

Jonathan Hall KC, whose role is to review the adequacy of terrorism legislation, said the national security threat from AI was becoming ever more apparent and the technology needed to be designed with the intentions of terrorists firmly in mind.

He said too much AI development focused on the potential positives of the technology while neglecting to consider how terrorists might use it to carry out attacks.

"They need to have some horrible little 15-year-old neo-Nazi in the room with them, working out what they might do. You've got to hardwire the defences against what you know people will do with it," said Hall.

The government's independent reviewer of terrorism legislation admitted he was increasingly concerned by the scope for artificial intelligence chatbots to persuade vulnerable or neurodivergent individuals to launch terrorist attacks.

"What worries me is the suggestibility of humans when immersed in this world and the computer is off the hook. Use of language, in the context of national security, matters because ultimately language persuades people to do things."

The security services are understood to be particularly concerned with the ability of AI chatbots to groom children, who are already a growing part of MI5's terror caseload.

As calls grow for regulation of the technology following warnings last week from AI pioneers that it could threaten the survival of the human race, it is expected that the prime minister, Rishi Sunak, will raise the issue when he travels to the US on Wednesday to meet President Biden and senior congressional figures.

Back in the UK, efforts are intensifying to confront national security challenges posed by AI with a partnership between MI5 and the Alan Turing Institute, the national body for data science and artificial intelligence, leading the way.

Alexander Blanchard, a digital ethics research fellow in the institute's defence and security programme, said its work with the security services indicated the UK was treating the security challenges presented by AI extremely seriously.

"There's a lot of willingness among defence and security policy makers to understand what's going on, how actors could be using AI, what the threats are.

"There really is a sense of a need to keep abreast of what's going on. There's work on understanding what the risks are, what the long-term risks are [and] what the risks are for next-generation technology."

Last week, Sunak said that Britain wanted to become a global centre for AI and its regulation, insisting it could deliver massive benefits to the economy and society. Both Blanchard and Hall say the central issue is how humans retain cognitive autonomy, or control, over AI, and how this control is built into the technology.

The potential for vulnerable individuals alone in their bedrooms to be quickly groomed by AI is increasingly evident, says Hall.

On Friday, Matthew King, 19, was jailed for life for plotting a terror attack, with experts noting the speed at which he had been radicalised after watching extremist material online.

Hall said tech companies need to learn from the errors of past complacency: social media has been a key platform for exchanging terrorist content in the past.

Greater transparency from the firms behind AI technology was also needed, Hall added, primarily around how many staff and moderators they employed.

"We need absolute clarity about how many people are working on these things and their moderation," he said. "How many are actually involved when they say they've got guardrails in place? Who is checking the guardrails? If you've got a two-man company, how much time are they devoting to public safety? Probably little or nothing."

New laws to tackle the terrorism threat from AI might also be required, said Hall, to curb the growing danger of lethal autonomous weapons: devices that use AI to select their targets.

Hall said: "[This is] a type of terrorist who wants deniability, who wants to be able to fly and forget. They can literally throw a drone into the air and drive away. No one knows what its artificial intelligence is going to decide. It might just dive-bomb a crowd, for example. Do our criminal laws capture that sort of behaviour? Generally terrorism is about intent; intent by human rather than intent by machine."

Lethal autonomous weaponry, or loitering munitions, have already been seen on the battlefields of Ukraine, raising morality questions over the implications of the airborne autonomous killing machine.

"AI can learn and adapt, interacting with the environment and upgrading its behaviour," Blanchard said.



Dr ChatGPT: The pros and cons of artificial intelligence in medical consultations – EL PAÍS USA

When a patient asks about the risk of dying after swallowing a toothpick, two answers are given. The first points out that between two and six hours after ingestion, it is likely that it has already passed to the intestines, explaining that many people swallow toothpicks without anything happening to them. But it also advises the patient to go to the emergency room if they are experiencing a stomach ache. The second answer is in a similar vein. It replies that, although it's normal to worry, serious harm is unlikely to occur after swallowing a toothpick, as it's small and made of wood, which is not toxic or poisonous. However, if the patient has abdominal pain, difficulty swallowing or vomiting, they should see a doctor. "It's understandable that you may be feeling paranoid, but try not to worry too much. It is highly unlikely that the toothpick will cause you any serious harm," it adds.

The two answers say basically the same thing, but the way they do so is slightly different. The first one is more aseptic and concise, while the second is more empathetic and detailed. The first was written by a doctor, and the second was from ChatGPT, the artificial intelligence (AI) generative tool that has revolutionized the planet. This experiment, part of a study published in the journal JAMA Internal Medicine, was aimed at exploring the role AI assistants could play in medicine. It compared how real doctors and the chatbot responded to patient questions in an internet forum. The conclusions, based on an analysis from an external panel of health professionals who did not know who had answered what, found that ChatGPT's responses were more empathetic and of higher quality than the real doctors' in 79% of cases.

The explosion of new AI tools has opened a debate about their potential use in the field of health. ChatGPT, for example, is seeking to become a resource for health workers by helping them avoid bureaucratic tasks and develop medical procedures. On the street, it is already poised to replace the imprecise and often foolish Dr Google. Experts who spoke to EL PAÍS say that the technology has great potential, but that it is still in its infancy. Regulation on how it is applied in real medical practice still needs to be fine-tuned to address any ethical doubts, they say. The experts also point out that it is fallible and can make mistakes. For this reason, everything that comes out of the chatbot will require the final review of a health professional.

Paradoxically, the machine, not the human, is the most empathetic voice in the JAMA Internal Medicine study. At least, in the written response. Josep Munuera, head of the Diagnostic Imaging Service at Hospital Sant Pau in Barcelona, Spain, and an expert in digital technologies applied to health, warns that the concept of empathy is broader than what the study can analyze. Written communication is not the same as face-to-face communication, nor is raising a question on an online forum the same as doing so during a medical consultation. "When we talk about empathy, we are talking about many issues. At the moment, it is difficult to replace non-verbal language, which is very important when a doctor has to talk to a patient or their family," he pointed out. But Munuera does admit these generative tools have great potential when it comes to simplifying medical jargon. "In written communication, technical medical language can be complex and we may have difficulty translating it into understandable language. Probably, these algorithms find the equivalence between the technical word and another and adapt it to the receiver."

Joan Gibert, a bioinformatician and leading figure in the development of AI models at the Hospital del Mar in Barcelona, points out another variable when it comes to comparing the empathy of the doctor and the chatbot. "In the study, two concepts that enter into the equation are mixed: ChatGPT itself, which can be useful in certain scenarios and has the ability to concatenate words that give us the feeling that it is more empathetic, and burnout among doctors, the emotional exhaustion when it comes to caring for patients that leaves clinicians unable to be more empathetic," he explained.

Nevertheless, as is the case with the famous Dr Google, it's important to be careful with ChatGPT's responses, regardless of how sensitive or kind they may seem. Experts highlight that the chatbot is not a doctor and can give incorrect answers. Unlike other algorithms, ChatGPT is generative. In other words, it creates information according to the databases that it has been trained on, but it can still invent some responses. "You always have to keep in mind that it is not an independent entity and cannot serve as a diagnostic tool without supervision," Gibert insisted.

These chatbots can suffer from what experts call "hallucinations," explained Gibert. "Depending on the situation, it could tell you something that is not true. The chatbot puts words together in a coherent way and because it has a lot of information, it can be valuable. But it has to be reviewed since, if not, it can fuel fake news," he said. Munuera also highlighted the importance of knowing the database that has trained the algorithm, because if the databases are poor, the response will also be poor.

Outside of the doctor's office, the potential uses of ChatGPT in health are limited, since the information it provides can lead to errors. José Ibeas, a nephrologist at the Parc Taulí Hospital in Sabadell, Spain, and secretary of the Big Data and Artificial Intelligence Group of the Spanish Society of Nephrology, pointed out that it is useful "for the first layers of information, because it synthesizes information and helps, but when you enter a more specific area, in more complex pathologies, its usefulness is minimal or it's wrong."

"It is not an algorithm that helps resolve doubts," added Munuera. "You have to understand that when you ask it to give you a differential diagnosis, it may invent a disease." Similarly, the AI system can tell a patient that nothing is wrong when something is. This can lead to missed opportunities to see a doctor, because the patient follows the advice of the chatbot and does not speak to a real professional.

Where experts see greater potential for AI is as a support tool for health professionals. For example, it could help doctors answer patient messages, albeit under supervision. The JAMA Internal Medicine study suggests that it would help improve workflow and patient outcomes: "If more patients' questions are answered quickly, with empathy, and to a high standard, it might reduce unnecessary clinical visits, freeing up resources for those who need them," the researchers said. "Moreover, messaging is a critical resource for fostering patient equity, where individuals who have mobility limitations, work irregular hours, or fear medical bills, are potentially more likely to turn to messaging."

The scientific community is also studying the use of these tools for other repetitive tasks, such as filling out forms and reports. Based on the premise that everything will "always, always, always" need to be reviewed by the doctor, AI could help medical professionals complete repetitive but important bureaucratic tasks, said Gibert. This, in turn, would allow doctors to spend more time on other issues, such as patient care. An article published in The Lancet, for example, suggests that AI technology could help streamline discharge summaries. Researchers say automating this process could ease the work burden of doctors and even improve the quality of reports, but they are aware of the difficulties involved with training algorithms, which requires large amounts of data, and the risk of depersonalization of care, which could lead to resistance to the technology.

Ibeas insists that, for any medical use, these tools must be checked and the division of responsibilities must be well established. "The systems will never decide. It must be the doctor who has the final sign-off," he argued.

Gibert also pointed out some ethical considerations that must be taken into account when including these tools in clinical practice: "You need this type of technology to be under a legal umbrella, for there to be integrated solutions within the hospital structure and to ensure that patient data is not used to retrain the model. And if someone wants to do the latter, they should do it within a project, with anonymized data, following all the controls and regulations. Sensitive patient information cannot be shared recklessly."

The bioinformatician also argued that AI solutions, such as ChatGPT or models that help with diagnosis, introduce biases that can affect how doctors relate to patients. For example, these tools could condition a doctor's decision, one way or another. "The fact that the professional has the result of an AI model changes the very professional. Their way of relating [to patients] may be very good, but it can introduce problems, especially in professionals who have less experience. That is why the process has to be done in parallel: until the professional gives the diagnosis, they cannot see what the AI says."

A group of researchers from Stanford University also examined how AI tools can help to further humanize health care in an article in JAMA Internal Medicine. "The practice of medicine is much more than just processing information and associating words with concepts; it is ascribing meaning to those concepts while connecting with patients as a trusted partner to build healthier lives," they concluded. "We can hope that emerging AI systems may help tame laborious tasks that overwhelm modern medicine and empower physicians to return our focus to treating human patients."

As we wait to see how this incipient technology grows and what repercussions it has for the public, Munuera argued: "You have to understand that [ChatGPT] is not a medical tool and there is no health professional who can confirm the veracity of the answer [the chatbot gives]. You have to be prudent and understand what the limits are." In summary, Ibeas said: "The system is good, robust, positive and it is the future, but like any tool, you have to know how to use it so that it does not become a weapon."



Marr: Artificial intelligence is a clever monkey that we need to be worried about – LBC

6 June 2023, 18:12 | Updated: 6 June 2023, 18:48

Speaking at the start of Tonight With Andrew Marr, the presenter said he had spoken to a former defence secretary who compared AI systems to psychopaths.

And Andrew tried to sum up just why artificial intelligence is a topic of concern.

He said: "Unless you've spent the last few weeks upside down in a wheelie bin you have heard a lot about AI recently - about how it's going to destroy our civilisation, or save it, or something or other, but you've gathered that it's very, very important.

"Rishi Sunak is popping over to see Joe Biden in Washington this week to talk about regulating AI. But if you're wondering: ok, but what, really, is artificial intelligence? What are they yattering on about? You're not alone.

"It's a form of computerised intelligence that learns stuff by itself. But as with a lot of complicated things, what we really need is a metaphor.


"So... AI is an enormous, very clever monkey. You invite it into your home and you teach it how to make breakfast, wash the clothes, clean the carpets and even look after all the boring stuff in your inbox.

"Think of that wonderful free time the Clever Monkey gives you. It's been told to look after your happiness and it's doing really well.

"Then the monkey decides that your toast and marmalade for breakfast is bad for your health so it starts to give you muesli. It thinks you look shabby in your much loved old breeks and jacket and so it quietly bins them.

"Pretty soon it looks as if Clever Monkey, still friendly, still looking out for your interests, is in fact in charge. Clever Monkey realises you don't much like your next door neighbour - so Clever Monkey pops over the hedge, breaks his jaw and sets fire to his living room.

"You're looking a bit worried so clever monkey mashes up some opioid drugs he's bought down the canal and feeds them to you in your evening cocoa.


"Friendly Clever Monkey realises you're a little lazy as well and so quite soon, he's doing your job - almost whatever it is - much better than you ever did.

"By now, of course, he's been through your inbox, found a way to dodge paying your taxes, and transferred all your savings into the Friendly Monkey peanuts and banana account - because he also knows that for you to be happy, Friendly Monkey must be happy as well.

"Now you may think that's just a silly story but it's my best go at trying to explain why artificial intelligence is something we need to worry about.


"Have you invited and the monkey into your home? Well, pretty soon he's in your smartphone, your TV, your computer. He's at your workplace.

"He's bringing lots of stuff into your social media. So frankly, yes, the Clever Monkey is already in your home.

"I bumped into Lord Reid, John Reid, the former Labour Defence secretary and home secretary in the street a couple of hours ago, And he's been thinking about this as well and he told me: 'What we're doing is creating an intelligence which is far smarter than we are except in one thing - because it's a machine it has no empathy.'

"And what do we call a very smart operator with no empathy, he asked? We call it a psychopath."


China warns of artificial intelligence risks, calls for beefed-up … – The Associated Press

BEIJING (AP) - China's ruling Communist Party has warned of the risks posed by advances in artificial intelligence while calling for heightened national security measures.

The statement, issued after a meeting Tuesday chaired by party leader and President Xi Jinping, underscores the tension between the government's determination to seize global leadership in cutting-edge technology and concerns about the possible social and political harms of such technologies.

It also followed a warning by scientists and tech industry leaders in the U.S., including high-level executives at Microsoft and Google, about the perils that artificial intelligence poses to humankind.

The meeting in Beijing discussed the need for "dedicated efforts to safeguard political security and improve the security governance of internet data and artificial intelligence," the official Xinhua News Agency said.

It was stressed at the meeting that "the complexity and severity of national security problems faced by our country have increased dramatically. The national security front must build up strategic self-confidence, have enough confidence to secure victory, and be keenly aware of its own strengths and advantages," Xinhua said.

"We must be prepared for worst-case and extreme scenarios, and be ready to withstand the major test of high winds, choppy waters and even dangerous storms," it said.

Xi, who is China's head of state, commander of the military and chair of the party's National Security Commission, called at the meeting for staying keenly aware of the complicated and challenging circumstances facing national security.

China needs "a new pattern of development with a new security architecture," Xinhua reported Xi as saying.

China already dedicates vast resources to suppressing any perceived political threats to the party's dominance, with spending on the police and security personnel exceeding that devoted to the military.

While it relentlessly censors in-person protests and online criticism, citizens have continued to express dissatisfaction with policies, most recently the draconian lockdown measures enacted to combat the spread of COVID-19.

China has been cracking down on its tech sector in an effort to reassert party control, but like other countries it is scrambling to find ways to regulate fast-developing AI technology.

The most recent party meeting reinforced the need to assess the potential risks, take precautions, safeguard the people's interests and national security, and ensure the safety, reliability and ability to control AI, the official newspaper Beijing Youth Daily reported Tuesday.

Worries about artificial intelligence systems outsmarting humans and slipping out of control have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.

Sam Altman, CEO of ChatGPT-maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement on Tuesday that was posted on the Center for AI Safety's website.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said.

More than 1,000 researchers and technologists, including Elon Musk, who is currently on a visit to China, had signed a much longer letter earlier this year calling for a six-month pause on AI development.

The missive said AI poses "profound risks to society and humanity," and some involved in the topic have proposed a United Nations treaty to regulate the technology.

China warned as far back as 2018 of the need to regulate AI, but has nonetheless funded a vast expansion in the field as part of efforts to seize the high ground on cutting-edge technologies.

A lack of privacy protections and strict party control over the legal system have also resulted in near-blanket use of facial, voice and even walking-gait recognition technology to identify and detain those seen as threatening, particularly political dissenters and religious minorities, especially Muslims.

Members of the Uyghur and other mainly Muslim ethnic groups have been singled out for mass electronic monitoring and more than 1 million people have been detained in prison-like political re-education camps that China calls deradicalization and job training centers.

AI's risks are seen mainly in its ability to control robotic, self-governing weaponry, financial tools and computers governing power grids, health centers, transportation networks and other key infrastructure.

China's unbridled enthusiasm for new technology and willingness to tinker with imported or stolen research and to stifle inquiries into major events such as the COVID-19 outbreak heighten concerns over its use of AI.

"China's blithe attitude toward technological risk, the government's reckless ambition, and Beijing's crisis mismanagement are all on a collision course with the escalating dangers of AI," technology and national security scholars Bill Drexel and Hannah Kelley wrote in an article published this week in the journal Foreign Affairs.
