Category Archives: Artificial Intelligence

When Artificial intelligence writes the doctor’s letter, the doctor and … – Innovation Origins

In Germany alone, around 150 million doctors' letters are written every year. This takes precious time that could be used elsewhere. The doctor's letter generator currently being developed by scientists at the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS could provide a solution, creating the document in a fraction of the time. The application is based on a combination of algorithms and artificial intelligence for Natural Language Processing (NLP). The new white paper "Natural Language Processing in the Medical Sector" lists numerous additional opportunities for hospitals using NLP.

"Health data is currently one of the fastest-growing data sets. How we process this data and what possibilities it offers for patients, care professionals, and doctors is an exciting question, and one to which we have at least part of the answer," explains Dario Antweiler, Healthcare Analytics team leader at Fraunhofer IAIS. Together with his team, he has authored a white paper illuminating current developments and opportunities for document-based processes in the medical field.

Thanks to Autoscriber, doctors pay more attention to you than to their computer

The pressure on healthcare providers keeps increasing. Doctors are unable to give their patients their full attention because they spend a lot of time on administrative tasks.

In the paper, the experts discuss Large Language Models (LLMs), which have undergone drastic development in recent months, catapulting them into the public spotlight. The best-known example of an LLM at present is ChatGPT, a chatbot that creates natural-sounding texts. "In the not-too-distant future, these models will be able to work multimodally, meaning that they'll be able to process images and tabular data as well as the texts and spoken language with which they already work," explains Antweiler. This opens up new possibilities in the medical sector, which could free up staff for other tasks and improve patient treatment processes while considering data protection at all times.

The healthcare sector faces numerous challenges, such as staff shortages, cost pressures, and an information overload from the constantly increasing amounts of data. Much of the hospital data is still laboriously analyzed by hand. Evaluating, analyzing, and drawing conclusions from the data costs valuable time at various points, a commodity lacking in the stressful day-to-day of hospitals. "In the worst cases, key information goes missing, making treatments more difficult, leading to expensive reexaminations or incomplete accounting," Antweiler explains.

To find a solution for these problems in hospitals, the Healthcare Analytics team at Fraunhofer IAIS is working closely with medical professionals. Together with several university hospitals, including Essen University Hospital (Universitätsmedizin Essen), it is currently developing various possibilities for information extraction from documents. The next objective is to bring the doctor's letter generator to market by the end of 2024, simplifying the creation of discharge letters. To do this, the AI analyzes all existing documents and creates a natural-sounding text containing easy-to-understand patient explanations. After checking it and making changes or additions if required, doctors can send the letter at the click of a button, in a fraction of the time needed to create it themselves from scratch. Another advantage is that patients, who often have to wait for this document on the day of their discharge, can leave the hospital more quickly.
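The article does not describe the generator's internal design. Purely as an illustration of the kind of document-to-letter pipeline it sketches, the following Python example gathers a patient's existing records into a single summarization prompt and hands it to a placeholder language-model call. The names (ClinicalDocument, call_language_model, and so on) are hypothetical and are not Fraunhofer's API.

# Illustrative sketch only; not Fraunhofer's implementation.
# call_language_model() is a hypothetical stand-in for whatever NLP/LLM
# backend a hospital would actually run.

from dataclasses import dataclass
from typing import List


@dataclass
class ClinicalDocument:
    doc_type: str  # e.g. "admission note", "lab report", "operative report"
    text: str


def build_prompt(documents: List[ClinicalDocument]) -> str:
    """Bundle the patient's existing records into one summarization prompt."""
    sections = [f"--- {doc.doc_type} ---\n{doc.text}" for doc in documents]
    return (
        "Draft a discharge letter with an easy-to-understand explanation "
        "for the patient, based on the following records:\n\n"
        + "\n\n".join(sections)
    )


def call_language_model(prompt: str) -> str:
    # Placeholder: route the prompt to whichever NLP model the clinic uses.
    raise NotImplementedError("stand-in for the hospital's NLP/LLM backend")


def draft_discharge_letter(documents: List[ClinicalDocument]) -> str:
    draft = call_language_model(build_prompt(documents))
    return draft  # a physician still reviews, edits, and signs off before sending

In practice, as the article describes, the draft would be surfaced to the physician for review and any corrections before it is sent.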

AI halves development time of medical innovations; faster and more reliable methods within reach

Research to implement innovations in healthcare takes an average of seventeen years. Especially given the speed of technological developments, that is far too long.

Other functions of Clinical NLP reduce the workload of medical staff, since the AI automatically collates critical information from a patient's medical records and makes it available to all clinical staff in a clear, structured format. Information is available in next to no time and can be thoroughly processed and made wholly accessible to medical staff. Dario Antweiler says: "In most hospitals, countless texts are evaluated manually every day. This is repeated in various departments and again after discharge by the family physician or specialist. Our applications make these processes fully automated, quick and precise, and secure as regards data protection, too. Healthcare systems, and especially staff and patients, would benefit from this."

The rest is here:
When Artificial intelligence writes the doctor's letter, the doctor and ... - Innovation Origins

Artificial Intelligence and Digital Diplomacy – E-International Relations

The coronavirus pandemic (COVID-19) has given a strong impetus to the development of science, the general processes of digitalization, and the introduction of an increasing number of electronic services. In healthcare, these processes manifested themselves in the creation of tracking applications, information-sharing platforms, telemedicine, and more. However, the boom in introducing such technologies also showed the need to develop particular policies and legal mechanisms to regulate their implementation, since although they can provide benefits, their use can also pose potential risks, such as cyberattacks. Digital technologies have also become widely used in politics. Due to the lockdowns around the world during 2020 and 2021, many ministerial meetings and meetings between heads of state were held online. International organizations such as the United Nations (UN) have resorted to mixed event formats allowing presidents to speak online.

The possibilities of the Internet and the application of digital technologies are not new. However, their entry into the political atmosphere, where everything is permeated with diplomatic protocols and a certain secrecy, causes some concern. Perhaps the most apparent concern is the use of deepfake technology to digitally manipulate another person's appearance. With modern AI technology, voice imitation is also possible.

Diplomatic channels may be scrutinized by the intelligence agencies of other countries and by potential criminal groups that can gain access to specific technologies, such as wiretapping. Quite often, secret data (photos, videos, audio recordings) as well as fake news, whose veracity an ordinary person cannot verify in any way, appear in the press. Such manipulations pose a significant threat to social stability and affect public opinion. Modern technologies can also be used in the political struggle against competing forces. Therefore, there is a need to rethink the familiar political process, considering new realities, and possibly to develop new digital or electronic diplomatic protocols.

The study of the application of AI in politics is a young field. Thus, a search as of June 23, 2023, in Google Scholar for the query "artificial intelligence in politics" returns 61 results, and "AI in politics" returns 77 results. Similar queries in the Google search engine for the same period produce 152,000 and 95,600 results, respectively. Publication sources are generally not political journals; more often, these journals publish articles on new technologies and deal with the ethical aspects of AI use (Vousinas et al., 2022).

Speaking about the modern understanding of the concept, what is AI? Kaplan and Haenlein (2019) define it as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation." If this definition is interpreted in relation to politics, we think AI can be described as a system that allows politicians to process information received from different sources and generalize it to develop a single database used in the decision-making process.

AI can also be used for internal political goals. Thus, a study in Portugal (Reis and Melo, 2023) suggested that the introduction of e-services plays an active role in how governments respond to the needs of their citizens, contributing to the development of e-democracy. The article points to increased transparency and trust in political institutions due to their widespread use. In our opinion, the paper lacks an analysis of possible counter-effects, where AI can become a weapon for falsification and for lowering the level of democracy. What are the mechanisms of interaction between the population and political institutions with the complete digitalization of the process? This issue requires a detailed assessment, especially in countries where the principles of democracy are often violated.

The possibility of AI bias poses a potential risk and presents a new challenge for the global community, including politicians. In 2018, a study published by the Council of Europe assessed possible risks of discrimination resulting from algorithmic decision-making and other types of AI use (Zuiderveen Borgesius, 2018). Today, with advanced technologies, transferring decision-making power to algorithms can lead to discrimination against vulnerable groups, such as people with disabilities. Therefore, automated decision-making should not always be based purely on cost-effectiveness principles. For example, caring for vulnerable population groups is a burden on a country's budget, yet it is obligatory under the rule of law and ethically justified.

The possibility of heads of state and government making political decisions based entirely on AI proposals is also quite controversial, since even the most rational decision from the algorithm's point of view may be devoid of ethical grounds, contradict the political slogans of leaders, or go against the objectives of the government or provisions of the law. Therefore, human control and policy adjustment are mandatory at this stage of scientific development, and we believe they will continue to be relevant in the future.

From the point of view of the private use of AI at the level of an individual state, the possibilities are also wide. For example, online search engines such as Google hold significant information about users and their preferences. Accordingly, this information can be used for relatively harmless purposes, such as targeted advertising for political campaigns. Also, based on the processing of requests from the population, the most pressing issues that require a response can be identified. With the help of AI, special tools for collecting feedback from the population can be developed, improving communication between the government and the population (potential voters). Accelerating and automating the delivery of services to the population, such as issuing a necessary document or a certificate of employment, is also among the potential beneficial results of AI application. It should be noted that, to varying degrees, the mentioned opportunities are already actively used in countries with high economic development.

However, AI can also be used to spread misinformation and manipulate public opinion. AI tools are already being used to launch mass disinformation campaigns and to disseminate fake content, and fake news is sometimes observed during election campaigns.

Today, the advent of new technologies creates new challenges. Thus, the GDPR (General Data Protection Regulation), which took effect in Europe in 2018, requires that individuals be informed about data collection. Moreover, in 2021 a new proposal for broad AI regulation within the EU was put forward. If adopted, the document will become the first comprehensive AI regulatory framework (Morse, 2023). Adopting such a law puts the need for international regulation on the agenda. Perhaps in the near future, various countries around the world will begin the process of developing and adopting similar laws. However, the development and adoption of any law require the participation of political institutions, which creates a new direction of activity and research within political science.

The global application of AI laws is also a political issue. A similar document, the UN cybercrime convention, is already under discussion. However, such laws, especially at the global level, will also have to be based on protecting human rights, to exclude the legitimization of increased political control over the population on the Internet. Moreover, in the context of globalization, the mechanisms for controlling AI-related crimes and implementing punishment also remain unclear.

The use of digital platforms for diplomatic processes, such as negotiations, networking, and information exchange, has created a new field in the scientific literature: digital diplomacy. Digitalization of diplomacy takes place on different levels. Ministries and politicians create their profiles on social media, where they share their opinions on specific issues. It is no longer necessary to wait for an official briefing from the Foreign Ministry. Diplomats often express their position online, which can be considered a semi-official approach. Ultimately, a publication can always be deleted or dismissed with the claim that the page has been hacked; in modern conditions, such a risk exists.

Recently, with the launch of ChatGPT, the media has been filled with articles about its role in the future of diplomacy. Diplomats can use AI to automate some of their work, such as preparing press releases. Another possibility is that prepared information can be distributed simultaneously to all information platforms with one click, which simplifies and speeds up the process. This matters because today people most often receive information via the Internet, directly on their smartphones. However, full automation in this case is also not without risks.

Although AI can be used to generate ideas, there is some concern about the secrecy of information processing. There have already been reports of leaks of data entered into ChatGPT (Derico, 2023; Gurman, 2023). How safe is this in the case of secret or diplomatic documents? Or the personal information of the diplomat who uses the platform? Moreover, the language of diplomacy is very sensitive regarding the wording and expressions used. The text generated by the program may be ideal in terms of grammar but unacceptable in terms of diplomacy.

The use of AI and the general digitalization of society also affect diplomacy. Nevertheless, are we ready for politics generated by AI? AI opens a new page in politics and creates challenges. Diplomacy has always required a certain amount of flexibility from diplomats, but it must now be adapted to digital realities. Politicians and diplomats should be prepared for the possibility of data leakage on the Internet, as well as double-check incoming information.

The potential for bias in AI algorithms is also a significant issue. Moreover, the output may have no veracity at all, since the program is designed to issue an answer whether it is correct or not, and its content depends on the algorithms specified by the developers. Automating the collection of information in political processes is not always justified. Admittedly, the human brain cannot physically remember and process the enormous amount of information generated daily, and if a political officer collects information from official resources, automation can simplify the work. However, a reference to an unconfirmed resource may lead to a distortion of the original data and, accordingly, adversely affect the preparation of a report. Nevertheless, such tools can be extremely useful for politicians when addressing public inquiries and identifying the most pressing issues.

The regulation of AI in practice has some peculiarities. At this stage of historical development, AI still cannot implement decisions independently in the real world; it can only carry out the tasks that people have assigned to it. We can analyze the benefits of its use, but ChatGPT and similar models only process information obtained from sources such as the Internet. Yet the potential regulation of global politics by AI, or its specific programming, could expose us to the threat of digital totalitarianism, in which control begins to interfere with privacy and human rights. Therefore, legal regulation of AI use is crucial, and its algorithms should undergo an ethical and political assessment before implementation. Various countries are also interested in obtaining intelligence information in real-life conditions; given the development of science, intelligence services will gain new opportunities for intervention, yet regulation in this area is rarely possible in practice. Moreover, AI is developing fast, and how it will be applied in practice if it reaches independence is an issue we will have to solve.

References

Derico, Ben. 2023. "ChatGPT Bug Leaked Users' Conversation Histories." BBC News, March 22, 2023, sec. Technology. https://www.bbc.com/news/technology-65047304.

Gurman, Mark. 2023. "Samsung Bans Generative AI Use by Staff after ChatGPT Data Leak." Bloomberg.com, May 2, 2023. https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak#xj4y7vzkg.

Morse, Chandler. 2023. "Lessons from GDPR for Artificial Intelligence Regulation." World Economic Forum, June 16, 2023. https://www.weforum.org/agenda/2023/06/gdpr-artificial-intelligence-regulation-europe-us/.

Kaplan, Andreas, and Michael Haenlein. 2019. "Siri, Siri, in My Hand: Who's the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence." Business Horizons 62 (1): 15-25.

Reis, João Carlos Gonçalves dos, and Nuno Melo. 2023. "E-Democracy: Artificial Intelligence, Politics and State Modernization."

Vousinas, Georgios L., Ilektra Simitsi, Georgia Livieri, Georgia Chara Gkouva, and Iris Panagiota Efthymiou. 2022. "Mapping the Road of the Ethical Dilemmas behind Artificial Intelligence." Journal of Politics and Ethics in New Technologies and AI 1 (1): e31238.

Zuiderveen Borgesius, Frederik. 2018. "Discrimination, Artificial Intelligence, and Algorithmic Decision-Making." Council of Europe.

See original here:
Artificial Intelligence and Digital Diplomacy - E-International Relations

Artificial intelligence: Is Remini safe to use? Remini, baby AI generator takes TikTok by storm as some raise security concerns – WLS-TV

CHICAGO (WLS) -- Have you ever wondered what your future kids will look like?

Well, there's an app for that. It's called "Remini." The app uses artificial intelligence to generate photos of what your children could look like, but one cyber security expert is voicing concerns as Remini takes social media by storm. Daisy Reyes is a self-proclaimed TikTok influencer with nearly 500,000 followers.

"I'm always looking at what's the upcoming trend, the new trends. I've seen everybody, literally everybody doing this trend. So you already know, I had to hop on it!" Reyes said.

READ MORE | Looking at AI, ChatGPT: The possibilities and pitfalls of artificial intelligence

Reyes said she recently downloaded the Remini app, which allows you to see what your future children could look like. She uploaded a picture of herself and her boyfriend, Rex, and boom! An image of her future baby was generated by artificial intelligence.

"When she showed it to me, I was kind of stunned. I thought it did kind of look like me," said Rex Flores.

"I felt like it was a mixture, but definitely more like him," Reyes added.

Melissa McDuffie, who has nearly 200,000 followers on TikTok, said she also tried the Remini app.

"I really like this trend, because I'm at the age where I'm married, and I'm ready to start having children. I thought it would be interesting to go ahead and see what they might look like," McDuffie said. "Artificial intelligence, the AI, can take your pictures and formulate this image, and it's so realistic. I'm an aunt to eight children, and they look very similar to my nieces. And, it's exciting. They're really cute!"

SEE ALSO | Paul McCartney says AI was used to create the 'last Beatles record'

But, as fun as the app may be, cyber security expert David Barton said users should be cautious.

"The scary thing about this, Samantha, is how accurate it has been. I've seen video clips of folks who have taken the father and the mother, put them in the app and it kicks off a picture that looks like their kid. And, that's a little bit creepy. But, on the flip side, it's kind of cool. It's kind of novel," Barton said.

Barton said users need to read the terms and conditions of these types of apps to understand how your image and likeness could be used. It's also important to know what protections are in place to ensure the artificial intelligence isn't exploited by a third party.

"Are we unintentionally giving future pictures of our kids for folks who might be using it for malicious purposes? I don't know," Barton said. "My gut says, I wouldn't do it. But, I'm a little bit older and more conservative than a lot of people. If you're going to do it, understand there are risks. At the end of the day, we AI-manage our lives by the risks we deal with day in and day out."

READ MORE | Elon Musk announces new company xAI

The social media stars say they understand any potential risk, but believe the Remini app has brought joy to many of their followers who also use it.

"We post pictures on social media every day, let alone the new AI generator with this app. So, I feel like you take that risk and agree to be on social media," McDuffie said. "For myself, I want to be a mother so bad, I could cry. More than anything, I want to be a mother. So, to be able to see an image like that made me happy."

In a statement, the parent company of Remini, which is based out of Milan, Italy, told the I-Team:

"Remini gives users the ability to imagine their lives in many different ways, with stunning realism, and we care deeply about ensuring all our users have a safe and fun experience using our app. By its very nature, the app is constantly evolving, and we will continue to take action to apply safeguards and ensure user privacy...We take data protection and privacy very seriously and have robust protocols in place to ensure we safeguard user rights while allowing them to experience and enjoy the transformative power of generative AI."

McDuffie said the app has intensified her baby fever.

"It made me hopeful for the future, and excited," McDuffie said. "Because they were cute and beautiful. They were beautiful."

Bending Spoons, the company that owns Remini, told the I-Team that facial recognition is not used in the app, and that images are encrypted and stored with a reputable U.S.-based provider, using what they say are "state-of-the-art security standards." The company said users always retain control over their data, and that it does not sell, lease, or trade users' images to any third parties.

The company said it applies comprehensive safeguards to thwart misuse of content.

SEE ALSO | AI leaders warn the technology poses 'risk of extinction' like pandemics, nuclear war

Read more here:
Artificial intelligence: Is Remini safe to use? Remini, baby AI generator takes TikTok by storm as some raise security concerns - WLS-TV

OpenAI's Sam Altman links 2 hot tech trends with his new Worldcoin: artificial intelligence and crypto. But there's a lot more to the story – Fortune

Artificial intelligence has taken over much of the financial hype cycle that used to belong to cryptocurrency. Now comes a project that's trying to combine the two. Called Worldcoin, it's an effort to create a global network of digital identities for a world in which AI robots become harder to distinguish from humans. Users of the service scan their eyeballs to create digital credentials and are rewarded with Worldcoin tokens, though the cryptocurrency isn't available in the US. More than 2 million people have signed up for a World ID, a reflection of the novel compensation model and the reputation of one of its founders, Sam Altman, the chief executive officer of OpenAI, which created the popular ChatGPT chatbot service. But early scrutiny by international regulators and some data security problems have stirred controversy and threatened to slow Worldcoin's momentum.

The project uses a device called an orb, which looks like a bigger, silver-colored Magic 8 Ball, to scan a person's iris, which has a unique pattern in every human, much as a fingerprint does. That creates a World ID, which grants its holders proof of personhood: a way to verify their identities on various online services without disclosing their name or other personal data. Worldcoin is also the name of the cryptocurrency that's used to reward people who scan their eyeballs or who support the project. The Worldcoin Foundation is listed as the steward of the technology, but the organizers say that it has no owners or shareholders and that holders of Worldcoin tokens will have a say in the direction of the project. Worldcoin is also affiliated with a tech company called Tools for Humanity Corp. that says it was established to accelerate the transition towards a more just economic system.

Worldcoin is promising to link two of the hottest contemporary financial trends: artificial intelligence and crypto. As AI becomes more popular, the argument goes, World ID will become more needed to help distinguish between humans and AI-powered smart software. Another big reason for the build-up is the involvement of Altman, who's the public face of ChatGPT. The AI chatbot was introduced in November 2022 and ignited the public's imagination about what artificial intelligence can do.

There are several concerns. One is that it's creating tokens to compensate participants outside the US and the other excluded countries who scan their irises. Also, several of the project's early backers were swept up in last year's crypto collapse, including FTX founder Sam Bankman-Fried, who is under house arrest and facing fraud charges. An MIT Technology Review investigation found evidence of what it called deceptive and exploitative practices used by Worldcoin to attract participants in countries such as Indonesia, Ghana, and Chile. The project is being scrutinized in Europe for its collection of biometric data, which may run afoul of some countries' privacy laws. There have also been issues with the theft of login credentials from some Worldcoin operators who were signing up new users, and with black-market sales of World IDs. Worldcoin said it upgraded its security in response.

The project had registered and created digital identities for more than 2.1 million people by the end of July, though the vast majority of those were issued before the official July 24 launch. The related cryptocurrency has fluctuated. The price of a Worldcoin token roughly doubled on that day, reaching as high as $3.58 before dropping to as low as $1.92 a week later. But Worldcoin still had a total market capitalization of $267 million on July 31, according to CoinMarketCap.

Altman, 38, is a seasoned entrepreneur. In addition to leading OpenAI, he was the longtime president of Y Combinator, the startup accelerator, and has investments in Airbnb, Stripe, Dropbox and Instacart. He also co-founded Loopt, a smartphone-location service.

Altman has said the project wouldn't offer tokens in the US and in some other countries where the regulatory rules regarding crypto were either uncertain or unclear. Indeed, Worldcoin is among many crypto projects that have chosen to stay out of the US market in recent years as US regulators and lawmakers continue to grapple with which coins are classified as securities and which ones aren't. Gary Gensler, the chairman of the US Securities and Exchange Commission, had long said that most coins were securities. But in a closely followed legal case, a judge ruled in July that Ripple Labs Inc.'s XRP token is a security only when it's sold to institutional investors but not when it's sold to retail investors via exchanges. That left the matter unsettled. More litigation and regulation are sure to follow, leaving crypto issuers with uncertainty.

Originally posted here:
OpenAI's Sam Altman links 2 hot tech trends with his new Worldcoin: artificial intelligence and crypto. But there's a lot more to the story - Fortune

The EU Artificial Intelligence Act: What’s the Impact? – JD Supra

The EU Artificial Intelligence Act (or AI Act) is the world's first legislation to regulate the use of AI. It leaves room for technical soft law; but, inevitably (being the first and being broad in scope), it will set principles and standards for AI development and governance. The UK is concentrating more on soft law, working towards a decentralized, principle-based approach. The US and China are working on their own AI regulations, with the US focusing more on soft law, privacy, and ethics, and China on explainable AI algorithms, aiming for companies to be transparent about their purpose. The AI Act marks a crucial step in regulating AI in Europe, and a global code of conduct on AI could harmonize practices worldwide, ensuring safe and ethical AI use. This article gives an overview of the EU Act and its main aspects, as well as an overview of other AI legislative initiatives in the European Union and how these are influencing other jurisdictions, such as the UK, the US, and China.

The AI Act: The First AI Legislation. Other Jurisdictions Are Catching Up.

On June 14, 2023, the European Parliament achieved a significant milestone by approving the Artificial Intelligence Act (or AI Act), making it the world's first piece of legislation to regulate the use of artificial intelligence. This approval has initiated negotiations with the Council of the European Union, which will determine the final wording of the Act. The final version of the AI Act is expected to be published by the end of 2023. Following this, the Regulation is expected to be fully effective in 2026. A two-year grace period, similar to the one contemplated by the GDPR, is currently being considered. This grace period would enable companies to adapt gradually and prepare for the changes until the rules come into force.

As the pioneers in regulating AI, the European institutions are actively engaged in discussions that are likely to establish both de facto standards (essential for the expansion and growth of AI businesses, just as in any other industry) and de jure standards (creating healthy competition among jurisdictions) worldwide. These discussions aim to shape the development and governance of artificial intelligence, setting an influential precedent for the global AI community.

Both the United States and China are making efforts to catch up. In October 2022, the US government unveiled its Blueprint for an AI Bill of Rights, centered around privacy standards and rigorous testing before AI systems become publicly available. In April 2022, China followed a similar path by presenting a draft of rules mandating chatbot-makers to comply with state censorship laws.

The UK government has unveiled an AI white paper to provide guidance on utilizing artificial intelligence in the UK. The objective is to encourage responsible innovation while upholding public confidence in this transformative technology.

While the passage of the Artificial Intelligence Act by the European Parliament represents an important step forward in regulating AI in Europe (and indirectly beyond, given the extraterritorial reach), the implementation of a global code of conduct on AI is also under development by the United Nations and is intended to play a crucial role in harmonizing global business practices concerning AI systems, ensuring their safe, ethical, and transparent use.

A Risk-Based Regulation

The European regulatory approach is based on assessing the risks associated with each use of artificial intelligence.

Complete bans are contemplated for intrusive and discriminatory uses that pose an unacceptable risk to citizens' fundamental rights, their health, safety, or other matters of public interest. Examples of artificial intelligence applications considered to carry unacceptable risks include cognitive behavioral manipulation targeting specific categories of vulnerable people or groups, such as talking toys for children, and social scoring, which involves ranking people based on their behavior or characteristics. The approved draft regulation significantly expands the list of prohibitions on intrusive and discriminatory uses of AI. These prohibitions now include:

In contrast, those uses that need to be regulated (as opposed to simply banned) through data governance, risk management assessment, technical documentation, and criteria for transparency, are:

High-Risk AI systems are artificial intelligence systems that may adversely affect security or fundamental rights. They are divided into two categories:

(i) biometric identification and categorization of natural persons;
(ii) management and operation of critical infrastructure;
(iii) education and vocational training;
(iv) employment, worker management, and access to self-employment;
(v) access to and use of essential private and public services and benefits;
(vi) law enforcement;
(vii) migration management, asylum, and border control;
(viii) assistance in legal interpretation and enforcement of the law.

All high-risk artificial intelligence systems will be evaluated before being put on the market and throughout their life cycle.

The Generative and Basic AI systems/models can both be considered general-purpose AI because they are capable of performing different tasks and are not limited to a single task. The distinction between the two lies in the final output.

Generative AI, like the now-popular ChatGPT, uses neural networks to generate new text, images, videos or sounds that have never been seen or heard before, much as a human can. For this reason, the European Parliament has introduced higher transparency requirements:

Basic AI models, in contrast, do not create, but learn from large amounts of data, use it to perform a wide range of tasks, and have application in a variety of domains. Providers of these models will need to assess and mitigate the possible risks associated with them (to health, safety, fundamental rights, the environment, democracy, and the rule of law) and register their models in the EU database before they are released to the market.

Next are the minimal or low risk AI applications, such as those used to date for translation, image recognition, or weather forecasting. Limited-risk artificial intelligence systems should meet minimum transparency requirements that enable users to make informed decisions. After interacting with applications, users can decide whether they wish to continue using them. Users should be informed when interacting with AI. This includes artificial intelligence systems that generate or manipulate image, audio, or video content (e.g., deepfakes).

Finally, exemptions are provided for research activities and AI components provided under open-source licenses.

The European Union and the United States Aiming to Bridge the AI Legislative Gap

The United States is expected to closely follow Europe in developing its own legislation. In recent times, there has been a shift in focus from a light-touch approach to AI regulation towards emphasizing ethics and accountability in AI systems. This change is accompanied by increased investment in research and development to ensure the safe and ethical use of AI technology. The Algorithmic Accountability Act, which aims to enhance the transparency and accountability of providers, is still at the proposal stage.

During the recent US-EU ministerial meeting of the Trade and Technology Council, the participants expressed a mutual intention to bridge the potential legislative gap on AI between Europe and the United States. These objectives gain significance given the final passage of the European AI Act. To achieve this goal, a voluntary code of conduct on AI is under development, and once completed, it will be presented as a joint transatlantic proposal to G7 leaders, encouraging companies to adopt it.

The United Kingdom's Pro-Innovation Approach to Regulating AI

On March 29, 2023, the UK government released a white paper outlining its approach to regulating artificial intelligence. The proposal aims to strike a balance between fostering a pro-innovation business environment and ensuring the development of trustworthy AI that addresses risks to individuals and society.

The regulatory framework is based on five core principles:

These principles are initially intended to be non-statutory, meaning no new legislation will be introduced in the United Kingdom for now. Instead, existing sector-specific regulators like the ICO, FCA, CMA, and MHRA will be required to create their own guidelines for implementing these principles within their domains.

The principles and sector-specific guidance will be supplemented by voluntary AI assurance standards and toolkits to aid in the responsible adoption of AI.

In contrast to the EU AI Act, the UK's approach is more flexible and perhaps more proportionate, relying on regulators in specific sectors to develop compliance approaches, with central high-level objectives that can evolve as technology and risks change.

The UK government intends to adopt this framework quickly across relevant sectors and domains. UK sector specific regulators have already received feedback on implementing the principles during a public consultation that ran until June 2023, and we anticipate further updates from each of them in the coming months.

The Difficult Balance between Regulation and Innovation

The ultimate goal of these legislative efforts is to strike a delicate balance between the necessity of regulating the rapid development of technology, particularly regarding its impact on citizens' lives, and the imperative not to stifle innovation or burden smaller companies with overly strict laws.

Anticipating the level of success is challenging, if not impossible. Nevertheless, the scope for soft law, such as setting up an ad hoc committee at the European level, shows promise. Ultra-technical matters subject to rapid evolution require clear principles that stem from the value choices made by legislators. Moreover, such matters demand technical competence to understand what is being regulated at any given moment.

Organizations using AI across multiple jurisdictions will additionally face challenges in developing a consistent and sustainable global approach to AI governance and compliance due to the diverging regulatory standards. For instance, the UK approach may be seen as a baseline level of regulatory obligation with global relevance, while the EU approach may require higher compliance standards.

As exemplified by the recent Italian shutdown of ChatGPT (see "ChatGPT: A GDPR-Ready Path Forward?"), we have witnessed firsthand the complexities involved. The Italian data protection authority assumed a prominent role, and instead of contesting the suspension of the technology in court, the business chose to cooperate. As a result, the site was reopened to Italian users within approximately one month.

In line with Italy, various other data protection authorities are actively looking into ways to influence the development and design of AI systems. For instance, the Spanish AEPD has published audit guidance for data processing involving AI systems, while the French CNIL has created a department dedicated to AI with open self-evaluation resources for AI businesses. Additionally, the UK's Information Commissioner's Office (ICO) has developed an AI toolkit designed to provide practical support to organizations.

From Safety to Liability: The AI Act is Prodromic to an AI Specific Liability Regime

The EU AI Act is part of a three-pillar package proposed by the EU Commission to support AI in Europe. The other pillars are an amendment to the EU Product Liability Directive (PLD) and a new AI Liability Directive (AILD). While the AI Act focuses on safety and ex ante protection and prevention with respect to fundamental rights, the PLD and AILD address damages caused by AI systems. Non-compliance with the AI Act's requirements could also trigger, depending on the AI Act risk level of the AI system at issue, different forms and degrees of alleviation of the burden of proof under both the amended PLD, for no-fault product liability claims, and the AILD, for any other (fault-based) claim. The amended PLD and the AILD are less imminent than the AI Act: they have not yet been approved by the EU Parliament and, as directives, will require implementation at the national level. Yet the fact that they are coming is of immediate importance, as it gives businesses even more reason to follow, and possibly cooperate and partake in, the standard-setting process currently in full swing.

Conclusion

Businesses using AI must navigate evolving regulatory frameworks and strike a balance between compliance and innovation. They should assess the potential impact of the regulatory framework on their operations and consider whether existing governance measures address the proposed principles. Prompt action is necessary, as regulators worldwide have already started publishing extensive guidance on AI regulation.

Monitoring these developments and assessing the use of AI is key for compliance and risk management. This approach is crucial not only for regulatory compliance but also to mitigate litigation risks with contractual parties and complaints from individuals. Collaboration with regulators, transparent communication, and global harmonization are vital for successful AI governance. Proactive adaptation is essential as regulations continue to develop.


Follow this link:
The EU Artificial Intelligence Act: What's the Impact? - JD Supra

Artificial intelligence could mean more litigation for restaurant … – Restaurant Business Online

Could artificial intelligence land more restaurant companies in court? If they rely on a computer brain to handle recruitment, the answer is definitely yes, according to this week's episode of the Working Lunch podcast.

"The minute you use these tools, we're going to see a lot of activity from the EEOC, or Equal Employment Opportunity Commission," said this week's guest, Ed Egee, VP of government relations and workforce development for the National Retail Federation.

The issue, he told podcast co-hosts Joe Kefauver and Franklin Coley, is that trial lawyers are looking for a new gold mine, and this could be it.

He explained that using an artificial intelligence tool to sort through resumes can whittle down stacks of thousands to just the few that meet an employer's key criteria for candidates. A machine would ignore everything but those desired characteristics.

The problem, Egee continued, is that lawyers could argue the process is discriminatory per se, since other traits or characteristics might be ignored. Similarly, some applicants might be ruled out instantly if they're unskilled at drafting a resume, making them victims of discrimination against the uneducated or poorly literate.
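To make that concern concrete, here is a deliberately naive Python sketch of the screening pattern described in the episode. It is not any vendor's actual product, and the keyword list is invented for illustration: a filter like this keeps only resumes that mention every required term and silently discards everyone else, regardless of actual ability.

# Toy illustration of keyword-based resume screening; not a real product.
# REQUIRED_KEYWORDS is an invented example of an employer's "key criteria".

from typing import Dict, List

REQUIRED_KEYWORDS = ["servsafe", "point of sale", "inventory"]


def passes_screen(resume_text: str) -> bool:
    """A resume passes only if it contains every required keyword."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)


def screen(resumes: Dict[str, str]) -> List[str]:
    """Return the names of candidates whose resumes survive the filter."""
    return [name for name, text in resumes.items() if passes_screen(text)]


if __name__ == "__main__":
    applicants = {
        "Candidate A": "ServSafe certified; point of sale and inventory experience.",
        "Candidate B": "Ten years running a busy kitchen; never wrote a formal resume.",
    }
    # Candidate B may be fully qualified but is dropped on wording alone,
    # which is the kind of disparate impact the episode says lawyers could target.
    print(screen(applicants))  # prints ['Candidate A']

Keeping a human reviewer in the loop, as suggested below, is one way to catch qualified candidates that a filter like this would discard.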

The likely way to avert discrimination suits, Egee said, would be involving humans in the screening function at some stage.

To learn more about this little-discussed risk associated with embracing artificial intelligence, download this week's episode of Working Lunch from wherever you get your podcasts.


More here:
Artificial intelligence could mean more litigation for restaurant ... - Restaurant Business Online

VLOG: Is artificial intelligence for optic disc photos ready for the … – Ophthalmology Times

Video Transcript

Editor's note: This transcript has been edited for clarity.

Hello and welcome to yet another edition of the NeuroOp Guru. I'm here with my good friend Drew Carey from Johns Hopkins and the Wilmer Eye Institute. Hi Drew.

Hi Andy, happy to be here.

And today we're going to be talking about the question "Is artificial intelligence for optic disc photos ready for the clinic and the ER?" So Drew, maybe you could just give us a little background on why is this even a question?

Well, I think for a long time, we've come to realize that colleagues without ophthalmologic training, ER doctors, primary care doctors, neurologists, are not so good at fundoscopy, looking at the back of the eye, and specifically the optic nerve. You know, they don't do it through dilated pupils like ophthalmologists do, and they don't do it a lot. So it's not a skill set that they've kept up, if they ever really refined it in medical school. But there are some conditions where it is really important to look in the back of the eye and at the optic nerve, especially if you have a patient coming in with headaches and vision changes.

We want to know: is the optic nerve swollen? You know, could this be ischemic optic neuropathy, an emergency like giant cell arteritis, or papilledema, where they could have some kind of burgeoning intracranial CNS process going on? And we can't get an ophthalmologist in every emergency room in America, but it would be feasible to put a camera there. And then the question is, well, who's going to look at the picture? It should be somebody who knows what an optic nerve is supposed to look like. Or could it be an artificial intelligence that's been trained? And so I think that was the major motivation for this project: trying to improve the diagnostic value of fundoscopy in conditions where it would be desired.

And so this AI was trained on thousands of photos that they just loaded in there and taught it what it's supposed to look for.

Yeah, so there's been, you know, a lot of work in AI and subtypes of AI, including deep learning systems and machine learning. And it takes thousands and thousands of images that have been carefully combed through and labeled with what we call ground truth, where we know exactly what each picture represents, to train the system.

Kind of like a resident has to see thousands of cases during their training in order to, you know, develop good clinical intuition and an understanding of what's going on. So for this, this group, the BONSAI consortium, based out of Singapore, asked for pictures from neuro-ophthalmologists all across the world to try and develop a diverse training set, with patients where they knew what the diagnosis was and what that optic nerve was showing. And that's what they trained it on.

And so maybe you could just walk us through these results of the BONSAI and you can see that it was already 168 times faster, but let's see if it's better. We know it can be faster. But is it better? Just maybe you could walk us through A and B here in terms of error rate?

Yeah, absolutely. So what they did is they took 800 new photos that the machine had never seen before, which is really important. You don't want to ask the artificial intelligence to answer a question that it already knows the answer to. And so they showed those to BONSAI, and then they showed them to 30 different clinicians: six were general ophthalmologists, six were optometrists, six neurologists, six internal medicine doctors, and six emergency medicine doctors.

And they asked them to classify these optic nerve photos as normal, papilledema, or other. And they split the doctors into two different groups. They said these are folks with ophthalmic expertise, the ophthalmologists and optometrists, and the other folks, the neurologists, internal medicine, and emergency medicine. And so in A, they show the error rate for the doctors looking at one photo of one eye, so they didn't get the benefit of two eyes. The error rate was about 25% for doctors with ophthalmic expertise, and for doctors without ophthalmic expertise it was close to 45%.

And the deep learning system was about 16%, compared to what we knew the actual photo was. We know that the machine is really good; you know, that's what it was trained to do. And then in B they broke it down. They said these are the ophthalmologists, optometrists, neurologists, internists, and the emergency medicine doctors. And the ophthalmologists and optometrists were both very similar at about 25%, which is what we saw when they were together. And then the neurologists were, you know, not quite as good, running around 38%, while the internal medicine doctors, and I don't remember the last time my eye was looked at in a primary care office visit, were at 43%.

And then the emergency medicine doctors were about 45%. And this is, they didn't even have to look inside the eye; this is a good quality fundus photo that we were able to just give the doctor and say, you know, this is what the optic nerve looks like. And again, you know, the deep learning system was running around 16% for, you know, all the pictures. So that's the comparison, which I think is really good. And we know that, right, it's a machine, it doesn't have to stop and think about it. And you know, go through, okay, what's this blood vessel doing? What's that blood vessel doing? It just looks at it and runs it through the algorithm, and it takes about 25 seconds for it to look at all 800 photos.

Versus 70 minutes for the doctors, which I think 70 minutes to look at 800 photos is still pretty good. So that's what we found out. So the BONSAI had significantly higher accuracies in 100% of the papilledema cases, 87% of the normal cases, and 93% of the other cases, compared to the clinicians. So it's really good. I don't think it's ready to replace doctors, you know, neuro-ophthalmologists, because it's not perfect. And there's a lot of other clinical information.

We all know how important that history is for neuro-ophthalmology that it can't do. But it could really help to risk stratify. You know, this is somebody that really needs to see neuro ophthalmology or get an ophthalmologist in here in person to look at the patient, or say no, this is normal. Or this is somebody who needs to proceed to neuroimaging, lumbar puncture, you know, even if we can't get an ophthalmologist in here.

So do you think it's more like a decision support right now, like helps you make a decision? Or do you think it's not even that?

I think that's where it would be, you know, if this could be clinically implemented into emergency rooms, neurologist office. You know, the patient comes in and every patient with a headache, they get their blood pressure check to make sure it's not hypertensive emergency. They should get a photo of their optic nerve to make sure it's not elevated intracranial pressure or hypertensive emergency.

So and then, you know, the doctor can look at it. And the other thing that we know about the AI, compared to a doctor, is it's not just a yes or no, it also gives probabilities. It'll say I'm 100% certain this is normal. Or it'll say I'm 100% certain this is papilledema. Or it might say, this is probably papilledema, but I'm only 65% certain.

You say okay, well, let's get some more data. Let's get an ophthalmologist in here to look at both eyes and ask some important questions like, do you have headaches? Do you have whooshing sounds in your ears? Are you having transient visual obscurations when you're bending over or coughing?

Well, so maybe the answer is stay tuned to this channel. But it certainly sounds like the machine is faster. And maybe even better than the doctors. The question is, is it cheaper?

Well, like you know, a lot of emergency rooms don't have an ophthalmologist on call. And if you're asking how much it is going to cost to pay somebody to cover call, he's not going to do it for free. And, you know, we could bill for photos, so that might be revenue generating as opposed to a revenue loss. I think the big question is regulatory.

You know, I think in the United States, we have one, that I'm aware of, FDA approved AI system, which is for retinal screening for diabetic retinopathy. They're looking at it for implementing into neuroimaging for CT scans to help to triage, this is a CT scan, we need the neuroradiologist to look at right now or put this one at the end of the pile to finish by the end of their shift. Yeah, cost is a big question. And it still has to go through FDA approval and then you know, it's still wrong 16% of the time, who's liable when it's wrong? You know, what's, what's the safety mechanism for the patient?

But compared to the safety mechanism without it, you know, either nobody's looking or somebody's looking who's going to be wrong, like, half the time. I'd say, you know, if I was in the emergency room, have the machine take a picture and tell me how I'm doing.

Well, Drew, as always a pleasure to chat with you. And that concludes yet another edition of the NeuroOp Guru. We'll see you guys next time.

Visit link:
VLOG: Is artificial intelligence for optic disc photos ready for the ... - Ophthalmology Times

What is Artificial Intelligence (AI) Governance? Why Is It Important? – Techopedia

What is AI governance?

Artificial intelligence (AI) governance is about establishing a legal framework for ensuring the safe and responsible development of AI systems.

In the AI governance debate, society, regulators, and industry leaders are looking to implement controls to guide the development of AI solutions, from ChatGPT to other machine learning-driven solutions, to mitigate social, economic, or ethical risks that could harm society as a whole.

Risks associated with AI include societal and economic disruption, bias, misinformation, data leakage, intellectual property theft, unemployment due to automation, or even weaponization in the form of automated cyberattacks.

Ultimately, the end goal of AI governance is to encourage the development of safe, trustworthy, and responsible AI, defining acceptable use cases, risk management frameworks, privacy mechanisms, accuracy, and, where possible, impartiality.

AI governance and regulation are important for understanding and controlling the level of risk presented by AI development and adoption. Eventually, it will also help to develop a consensus on the level of acceptable risk for the use of machine learning technologies in society and the enterprise.

However, governing the development of AI is very difficult because not only is there no centralized regulation or risk management framework for developers or adopters to refer to, but it is also challenging to assess risk when this changes depending on the context the system is used within.

Looking at ChatGPT as an example, enterprises not only have to acknowledge that hallucinations can spread bias, inaccuracies, and misinformation, but they also have to be aware that user prompts may effectively be disclosed to OpenAI. They also need to consider the impact that AI-generated phishing emails will have on their cybersecurity.

More broadly, regulators, developers and industry leaders need to consider how to reduce the inaccuracies or misinformation presented by large language models (LLMs), as this information could potentially have the ability to influence public opinion and politics.

At the same time, regulators are attempting to strike a balance between mitigating risk without stifling innovation among smaller AI vendors.

Before regulators and industry leaders can have a more comprehensive perspective of AI-related risks, they first need more transparency over the decision-making processes of automated systems.

For instance, the better the industry understands how an AI platform comes to a decision after processing a dataset, the easier it is to identify whether that decision is ethical and whether the vendor's processing activities respect user privacy and comply with data protection regulations such as the General Data Protection Regulation (GDPR).

The more transparent AI development is, the better risks can be understood and mitigated. As Brad Smith, vice chair and president of Microsoft, explained in a blog post in May 2023: "When we at Microsoft adopted our six ethical principles for AI in 2018, we noted that one principle was the bedrock for everything else: accountability."

"This is the fundamental need: to ensure that machines remain subject to effective oversight by people, and the people who design and operate machines remain accountable to everyone else."

Without transparency over how AI systems process data, there is no way to assess whether they are developed with a concerted effort to remain impartial or if they are simply developed with the values and biases of their creators.

On 26 January 2023, the U.S. National Institute of Standards and Technology (NIST) released its AI risk management framework, a voluntary set of recommendations and guidelines designed to measure and manage AI risk.

NIST's standard is one of the first comprehensive risk management frameworks to enter the AI governance debate, and it looks to promote the development of trustworthy AI.

Under this framework, NIST defines risks as anything with the potential to threaten individuals' civil liberties, whether that threat emerges from the nature of AI systems themselves or from how a user interacts with them. Crucially, NIST highlights that organizations and regulators need to be aware of the different contexts in which AI can be used to fully understand risk.

NIST also highlights four core functions that organizations can use to start controlling AI risks: Govern, Map, Measure, and Manage.

It is important to note that NIST's framework has many critics because it is voluntary: there is no regulatory obligation for organizations to develop AI responsibly at this stage.

One of the main barriers to AI governance at the moment is the black box development approach of AI leaders like Microsoft, Anthropic, and Google. Typically, these vendors will not disclose how their proprietary models work and make decisions in an attempt to maintain a competitive advantage.

While a black box development approach allows AI vendors to protect their intellectual property, it leaves users and regulators in the dark about the type of data and processing activities their AI solutions use to come to decisions or predictions.

Although other vendors in the industry, like Meta, are looking to move away from black box development to an open-source and transparent approach with LLMs like Llama 2, the opaqueness of many vendors makes it difficult to understand the level of accuracy or bias presented by these solutions.

AI governance is critical to guiding the development of the technology in the future and implementing guardrails to ensure that it has mainly positive outcomes for society as a whole.

Building a legal framework for measuring and controlling AI risk can help users and organizations to experiment with AI freely while looking to mitigate any adverse effects or disruption.

See the original post:
What is Artificial Intelligence (AI) Governance? Why Is It Important? - Techopedia

Study Predicts Increased Student Use of Artificial Intelligence in the … – Fagen wasanni

A recent study conducted by Junior Achievement USA has indicated that student reliance on artificial intelligence (AI) is expected to rise in the upcoming school year. The survey revealed that 44% of students expressed their likelihood of using AI to complete assignments. Moreover, 48% of students reported knowing someone who has utilized AI to complete tasks on their behalf.

Despite the increasing usage of AI, the majority of teenagers still consider it to be a form of cheating. Out of the students surveyed, 60% believed that using AI in their schoolwork is dishonest, while 62% viewed it as just another tool to aid them in completing their assignments.

Educators are now tasked with ensuring that students are utilizing AI as a supplementary tool rather than relying solely on it to complete their work. Many school districts have implemented policies to prevent misuse of AI, including the use of software that can detect artificially generated student work.

Some students, like Luke Nathan from All Saints Episcopal School, have faced consequences for using AI in their academic endeavors. Nathan admitted to being caught multiple times, emphasizing that the payoff is not worth the risk. He also expressed concerns about the rapid advancements of AI and its implications in an educational context.

With AI's rapid growth and its potential to analyze and interpret complex data, there is a sense of awe and caution among students. Nathan mentioned watching AI assist in stock investments and witnessing its tremendous success, highlighting its remarkable power.

As more students express their intent to utilize AI in their academic pursuits, it is crucial for educators to strike a balance between leveraging the benefits of this technology while ensuring an honest and ethical learning environment.

More here:
Study Predicts Increased Student Use of Artificial Intelligence in the ... - Fagen wasanni

The Impact and Risks of Artificial Intelligence – Fagen wasanni

Artificial intelligence (AI) has become a hot topic in recent times, raising questions about its nature and implications. Machine learning, which involves learning from massive amounts of data, is at the heart of AI. According to Eric Chown, a computer science professor at Bowdoin College, connecting numerous seemingly unintelligent components can result in something intelligent.

Chown, an expert with a Ph.D. in artificial intelligence, emphasizes that AI is already influencing our lives in ways we may not even realize. It affects the content we see on social media platforms, the recommendations we receive on streaming services like Netflix, and even the news stories we encounter on Facebook.

While discussing the risks associated with AI, Chown highlights the importance of not allowing computers, smartphones, and social media to have excessive control. He cautions against blindly accepting AI's decisions and urges individuals to critically evaluate its outputs. Accountability is crucial, and he believes that people need to be aware of the decision-making processes employed by AI.

This issue is particularly relevant in the context of journalism and the dissemination of information. Chown emphasizes the need for individuals to fact-check and verify the news they come across, especially in an era where information spreads rapidly and misinformation abounds. He suggests that society should prioritize teaching skills related to discerning reliable sources and distinguishing between fact and fiction.

Chown also draws attention to the flood of AI-generated content on the internet. He warns that relying on AI's own writings to train future AI programs could hinder progress and pose challenges for improvement.

Recognizing the demand for trustworthy news sources becomes crucial as AI-generated content continues to proliferate. However, this requires individuals to possess the necessary intelligence to discern reliable information from misleading or inaccurate content. By fostering a sense of accountability and critical thinking, society can better navigate the era of AI and ensure the responsible use of technology in our everyday lives.

See original here:
The Impact and Risks of Artificial Intelligence - Fagen wasanni