Category Archives: Artificial Intelligence
Integrated Intelligence: Human Uses of, Strategies on, and Rules for … – Newlines Institute
Executive Summary
Humans have entered an age of artificial intelligence or, rather, of integrated intelligence. Already becoming more familiar with some forms of artificial intelligence in their daily lives, they'll inevitably embrace new technologies and techniques in everything from workplace productivity systems to drug design, manufacturing defect detection, and autonomous weapons. Given tiered societies and the complexity of consequences, American and other leaders must avoid trapping themselves in poor policies and practices. Rather than reacting counterproductively, they must strive for the sweet spot between important and urgent, innovative and responsible, private and public. Because they won't soon be able to resolve substantial uncertainty regarding how strongly or how rapidly people will experience the effects of artificial intelligence, American and other policymakers must get curious, be active, and prepare for a range of potential outcomes. They must work on all fronts, from domestic legislation and international coordination to enterprise policies and personal practices, while accepting that they can't control the future.
In this special report, the Future Frontiers team at New Lines Institute considers and proposes human uses of, strategies on, and rules for artificial intelligence in the 21st century. To do so, we summarize how humans have mythologized, theorized, and made machines since antiquity; explain how scientists and engineers have developed contemporary artificial intelligence during the industrial age, especially after World War II; provide an overview of artificial intelligence's complex consequences in the age of adoption; offer ideas on how American and other leaders may create strategies, policies, and laws on the technology; and consider whether and how people in every segment of society may adopt standards and practices in the coming age of integrated intelligence.
$3 Million Grant Awarded to Develop Artificial Intelligence to Help … – Lupus Foundation of America
Recently, University of Houston researchers were awarded $3 million by the National Institute of Diabetes and Digestive and Kidney Diseases to develop an artificial intelligence (AI) system that will read and classify kidney biopsy results to more accurately diagnose lupus nephritis (LN, lupus-related kidney disease).
Diagnosing LN can be challenging. It typically requires a kidney biopsy, a painful and invasive procedure in which a small piece of kidney tissue is collected and examined for signs of inflammation and damage. A pathologist then reads the biopsy, but interpretations of the results often differ depending on who reads them. This research project aims to automate the classification of biopsy samples to improve diagnostic accuracy. According to the researchers, using AI to train a neural network to read and classify the biopsies will lead to higher accuracy and translate to better treatment of LN.
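To make the idea concrete, here is a minimal, hypothetical sketch of the kind of system described: fine-tuning a pretrained convolutional network to classify biopsy image patches. The dataset layout, class count and hyperparameters are illustrative assumptions, not details from the grant.

```python
# Hypothetical sketch: fine-tune a pretrained CNN to classify kidney
# biopsy image patches into lupus nephritis classes. Paths, the class
# count and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # assumption: one label per ISN/RPS lupus nephritis class

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumption: biopsy patches stored in one folder per class label.
train_data = datasets.ImageFolder("biopsies/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=16, shuffle=True)

# Start from pretrained weights and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice, a clinical-grade system would also need validation against pathologist consensus labels, which is precisely the inter-reader variability the project aims to address.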
The use of AI is novel in lupus research. Its ability to detect and select patterns can make the technology useful for classifying disease, which could revolutionize the diagnosis, treatment, and management of LN. Continue to follow the Lupus Foundation of America for developments stemming from this grant, as well as other news on lupus treatments and clinical trials.
The Class Action Weekly Wire Episode 31: Artificial Intelligence … – Duane Morris
Duane Morris Takeaway: This week's episode of the Class Action Weekly Wire features Duane Morris partner Jerry Maatman and special counsel Brandon Spurlock discussing the Senate Banking Committee's hearing this week on protecting consumers in the financial sector from the risks of artificial intelligence, along with their analysis of the potential implications for the regulatory environment and the class action space as AI continues to be used in workplace and commercial operations.
Episode Transcript
Jerry Maatman: Welcome, loyal blog readers and listeners, to our Friday weekly podcast series. I'm joined by my colleague Brandon Spurlock today, and we're going to be focusing on artificial intelligence and the fact that that issue has been foremost in the minds of legislators in Washington, D.C. Brandon, welcome to our weekly podcast.
Brandon Spurlock: Thanks, Jerry. Always happy to be here.
Jerry: Brandon, there was quite a lot of activity at the Senate Banking Committee this week with respect to artificial intelligence. It involved consumers and the protection of consumers. To me, AI is everywhere in the news, in terms of how it impacts the workplace and how it impacts consumers. What's your takeaway from what occurred in Washington, D.C. this week?
Brandon: Yeah, Jerry, this topic is exploding everywhere, and the changes in every sector are fast and furious as AI advances. The hearing was led by the committee's chairman, Democratic Senator Sherrod Brown of Ohio. Brown opened the hearing by highlighting positive aspects of technology for society in the financial world, things like ATMs providing quick access to money and smartphone apps for online banking and bill paying, but he also explained that automation has led to many of the financial crises that we've seen in the past two decades. Brown stressed that any AI use in the financial sector should be utilized to make the economy better for consumers, and that there should be significant safeguards in place to ensure that it does so.
His Republican counterpart, Senator Mike Rounds of South Dakota, who was filling in for the committee's ranking member, also stressed the risks of AI but took a different stance on the issue of regulation. He stated that there should be regulations regarding transparency and explainability in decision-making, especially where credit is involved, but that Congress should take a pro-innovation stance so the U.S. can attract talent, and that halting the progress of AI in the financial sector could put the U.S. at a competitive disadvantage.
Jerry: It struck me that here is a great example of technology accelerating faster than the law, with the law trying to catch up and government regulators thinking about the void that exists in the regulatory system. I know that the Senate committee and Senator Brown focused on fraud and antitrust concerns, but the overlay was also the fear that artificial intelligence incorporates bias, and that use of artificial intelligence could have an adverse impact on protected minority groups. What's your takeaway in terms of what we're going to see in the future in this particular area?
Brandon: Well, that's spot on, Jerry. Brown highlighted that several AI tools companies in the financial sector already use have been shown to have ingrained discriminatory biases against Black and Latino American borrowers. Specifically, banks use algorithms and machine learning models in consumer lending that can determine a borrower's creditworthiness, but this often automates and supercharges the biases that end up excluding minorities.
Jerry: I know that the Consumer Financial Protection Bureau is dabbling in this area, also focusing on regulations. But it seems to me that this is an area that the plaintiffs' class action bar is following. And my sense is that we're going to see a tipping point soon where private plaintiff lawsuits are brought over these issues, with allegations that the use of AI implicated antitrust or fraud concerns, or discrimination, either in the employment arena in the workplace, or with the extension of credit or loans. What's your takeaway on class action risks in this area?
Brandon: Well, you know, there was a committee witness attending the hearing, Daniel Gorfine. He's the founder and CEO of advisory firm Gattaca Horizons, and he's a former chief innovation officer with the CFTC. He noted the risks of AI but stated that speculative fear, or fear of future harm, should not broadly block development of AI in financial services.
Another witness, University of Michigan computer science and engineering professor Michael Wellman, urged that public and open knowledge on what practices can create risk will help better prepare financial systems for AI and inspire market rules and systems that remain resilient to AI's inevitable impacts.
So with all this said, Jerry, there will probably be no shortage of class action lawsuits that are filed, and I think as we see how those class actions progress, we'll also see how they impact the regulatory environment. I think both are going to have an impact on one another.
Jerry: Brandon, you're a thought leader in this area, and we'll be closely following artificial intelligence and its implications in litigation and government regulation, and in terms of what it means to companies in the private sector. Sincerely appreciate you lending your expertise today to our podcast, and thanks so much for joining us.
Brandon: Thanks for having me, Jerry.
Artificial Intelligence tools shed light on millions of proteins – Science Daily
A research team at the University of Basel and the SIB Swiss Institute of Bioinformatics uncovered a treasure trove of uncharacterised proteins. Embracing the recent deep learning revolution, they discovered hundreds of new protein families and even a novel predicted protein fold. The study has now been published in Nature.
In the past years, AlphaFold has revolutionised protein science. This Artificial Intelligence (AI) tool was trained on protein data collected by life scientists for over 50 years, and is able to predict the 3D shape of proteins with high accuracy. Its success prompted the modelling of an astounding 215 million proteins last year, providing insights into the shapes of almost any protein. This is particularly interesting for proteins that have not been studied experimentally, a complex and time-consuming process.
"There are now many sources of protein information, enclosing valuable insights into how proteins evolve and work" says Joana Pereira, the leader of the study. Nevertheless, research has long been faced with a data jungle. The research team led by Professor Torsten Schwede, group leader at the Biozentrum, University of Basel, and the Swiss Institute of Bioinformatics (SIB), has now succeeded in decrypting some of the concealed information.
A bird's eye view reveals new protein families and folds
The researchers constructed an interactive network of 53 million proteins with high-quality AlphaFold structures. "This network serves as a valuable source for theoretically predicting unknown protein families and their functions on a large scale," underlines Dr. Janani Durairaj, the first author. The team was able to identify 290 new protein families and one new protein fold that resembles the shape of a flower.
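As a conceptual illustration of this kind of approach (not the study's actual pipeline), proteins can be treated as nodes in a graph, connected when their structures are sufficiently similar; connected components then suggest candidate families. The names, scores and threshold below are invented for the sketch.

```python
import networkx as nx

# Invented pairwise structural-similarity scores; the real network was
# built from 53 million AlphaFold models, not a handful of toy entries.
similarities = [
    ("protA", "protB", 0.92),
    ("protB", "protC", 0.88),
    ("protD", "protE", 0.75),
    ("protE", "protF", 0.31),  # too dissimilar: no edge added below
]

THRESHOLD = 0.5  # assumption: minimum similarity to link two proteins

G = nx.Graph()
for a, b, score in similarities:
    G.add_nodes_from([a, b])
    if score >= THRESHOLD:
        G.add_edge(a, b, weight=score)

# Each connected component is a candidate "protein family".
for family in nx.connected_components(G):
    print(sorted(family))  # three components: ABC, DE, and F alone
```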
Building on the expertise of the Schwede group in developing and maintaining the leading software SWISS-MODEL, they made the network available as an interactive web resource, termed the "Protein Universe Atlas."
AI as a valuable tool in research
The team has employed Deep Learning-based tools for finding novelties in this network, paving the way to innovations in life sciences, from basic to applied research. "Understanding the structure and function of proteins is typically one of the first steps to develop a new drug, or modify their functions by protein engineering, for example," says Pereira. The work was supported by a 'kickstarter' grant from SIB to encourage the adoption of AI in life science resources. It underscores the transformative potential of Deep Learning and intelligent algorithms in research.
With the Protein Universe Atlas, scientists can now learn more about proteins relevant to their research. "We hope this resource will help not only researchers and biocurators but also students and teachers by providing a new platform for learning about protein diversity, from structure, to function, to evolution," says Janani Durairaj.
Generative Artificial Intelligence (AI): Canadian Government … – JD Supra
The Canadian government continues to take note of and react to the widespread use of generative artificial intelligence (AI). Generative AI is a type of AI that generates output that can include text, images or other materials, based on material and information that the user inputs (e.g., ChatGPT, Dall-E 2 and Midjourney). In recent developments, the Canadian government has: (1) opened consultation on a proposed Code of Practice (the Code) and provided a proposed framework for the Code,[1] and (2) published a Guide on the use of Generative AI for federal institutions on September 6, 2023 (the Guide).[2]
More generally, as discussed below, as Canadian companies continue to adopt generative AI solutions, they may take note of the initial framework set out for the Code, as well as the information in the Guide, in order to minimize risk and ensure compliance with future AI legislation. A summary of the key points of the proposed Code and Guide is provided below.
The Code is intended for developers, deployers and operators of generative AI systems to avoid harmful impacts of their AI systems and to prepare for, and transition smoothly into, future compliance with the Artificial Intelligence and Data Act (AIDA),[3] should the legislation receive royal assent.
In particular, the Government has stated that it is committed to developing a code of practice, which would be implemented on a voluntary basis by Canadian firms ahead of the coming into force of AIDA.[4] For a detailed look into what future AI regulation may look like in Canada, please refer to our blog, "Artificial Intelligence: A Companion Document Offers a New Roadmap for Future AI Regulation in Canada."
In the process of developing the Code, the Canadian government has set out a framework for the Code, and has now opened consultation on this framework. To that end, the government is requesting comments on the following elements of the proposed framework:
In the proposed framework for the Code, developers and deployers would be asked to identify ways that their system may attract malicious use (e.g., impersonate real individuals) and take steps to prevent such use from occurring.
Additionally, developers, deployers and operators would be asked to identify the ways that their system may attract harmful or inappropriate use (e.g., use of a large language model for medical or legal advice) and, again, take steps to prevent such inappropriate use from occurring.
To this end, the Code would suggest that developers assess and curate datasets to avoid low-quality data and non-representative datasets or biases. Further, developers, deployers and operators would be advised to implement measures to assess and mitigate the risk of biased output (e.g., fine-tuning).
Accordingly, a future Code would recommend that developers and deployers provide a reliable and freely available method to detect content generated by the AI system (e.g., watermarking), as well as provide a meaningful explanation of the process used to develop the system (e.g., provenance of training data, as well as measures taken to identify and address risks).
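To illustrate what a watermark-based detection method can look like, here is a toy sketch in the spirit of published statistical watermarking schemes for language models, in which the generator prefers tokens from a pseudorandom "green list" seeded by the preceding token, and a detector counts how often that preference shows up. The vocabulary, scoring rule and threshold are assumptions for illustration; the Code does not prescribe any particular scheme.

```python
import hashlib
import random

# Toy vocabulary; a real language model has tens of thousands of tokens.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str) -> set:
    """Pseudorandom half of the vocabulary, derived from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def green_fraction(tokens: list) -> float:
    """Share of tokens drawn from their predecessor's green list."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / (len(tokens) - 1)

# Ordinary text lands near 0.5 by chance; a generator that favours green
# tokens pushes the fraction far higher, so a simple (illustrative)
# detector flags text whose fraction exceeds, say, 0.9.
```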
Additionally, operators would be asked to ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.
A future Code would potentially recommend that deployers and operators of generative AI systems provide human oversight in the deployment and operations of their system. Further, developers, deployers and operators would be asked to implement mechanisms to allow adverse impacts to be identified and reported after the system is made available.
In this vein, a future Code would recommend that developers use a wide variety of testing methods across a spectrum of tasks and contexts (e.g., adversarial testing) to measure performance and identify vulnerabilities. As well, developers, deployers and operators would be asked to employ appropriate cybersecurity measures to prevent or identify adversarial attacks on the system (e.g., data poisoning).
Developers, deployers and operators of generative AI systems may therefore ensure that multiple lines of defence are in place to secure the safety of their system, such as by ensuring that both internal and external (independent) audits of their system are undertaken before and after the system is put into operation, and by developing policies, procedures and training to ensure that roles and responsibilities are clearly defined and that staff are familiar with their duties and the organization's risk management practices.
Accordingly, as the framework for the Code evolves through the consultative process, it is expected that it will ultimately provide a helpful guide for Canadian companies involved in the development, deployment and operation of generative AI systems as they prepare for the coming-into-force of AIDA.
The Guide is another example of the Canadian government accounting for the use of generative AI. The Guide provides guidance to federal institutions and their employees on their use of generative AI tools, including identifying challenges and concerns relating to its use, putting forward principles for using it responsibly, and offering policy considerations and best practices.
While the Guide is intended for federal institutions, the issues it addresses may have more universal application to the use of generative AI systems, broadly. Accordingly, organizations may consider referring to the Guide as a guiding template, while developing their own internal AI policies for use of generative AI.
In more detail, the Guide identifies challenges and concerns with the use of generative AI, including the generation of inaccurate or incorrect content (known as "hallucinations") and/or the amplification of biases. More generally, the government notes that generative AI may pose "risks to the integrity and security of federal institutions."[8]
To mitigate these challenges and risks, the Guide recommends that federal institutions adopt the "FASTER" approach: Fair, Accountable, Secure, Transparent, Educated and Relevant.
Organizations may take heed of the FASTER approach as a potential guiding framework to the development of their own policies on the use of generative AI.
The Guide notes various other highlights and proposes a number of best practices as well.
In view of the foregoing, Canadian companies exploring the use of generative AI may take note of the FASTER principles set out by the Guide, as well as the various best practices proposed.
Taken together, the Code and the Guide provide helpful guidance for organizations who wish to be proactive as they develop their AI policies and ensure they are compliant with AIDA should it receive royal assent.
[1] Government of Canada, Canadian Guardrails for Generative AI – Code of Practice, last modified 16 August 2023 ["Consultation Announcement"].
[2] Government of Canada, Guide on the use of Generative AI, last modified 6 September 2023 ["The Guide"].
[3] Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, 1st Sess, 44th Parl, 2021 (second reading completed by the House of Commons on 24 April 2023).
[4] Consultation Announcement.
[5] Consultation Announcement.
[6] Consultation Announcement.
[7] Consultation Announcement.
Using Artificial Intelligence To Advance Development In Africa – Africa.com
By John Njogu
Africa is on the cusp of a technological renaissance, and at the heart of this transformation lies the ever-expanding realm of artificial intelligence (AI). As the continent grapples with both longstanding challenges and emerging opportunities, AI is a potent force that could reshape its future in profound ways. From bolstering healthcare delivery in remote villages to revolutionizing agriculture and leapfrogging infrastructural limitations, Africa's journey with AI is not just a story of innovation but a testament to the continent's resilience and determination to bridge the digital divide. In this era of AI-driven progress, the potential for Africa is limitless, provided we navigate the terrain of ethical, socio-economic, and policy considerations with unwavering commitment and foresight.
In the era of AI, also known as the Fourth Industrial Revolution, countries worldwide, particularly in North America, Europe and Asia, are investing significantly in leveraging AI's potential for socio-economic growth. These countries are releasing AI policy frameworks correspondingly, while Africa lags in comprehensive AI policy formulation and in effective utilization of AI for its own development.
The state of AI adoption in Africa
Numerous obstacles hinder Africa's embrace of AI, as reported by Abejide in Responsible AI in Africa. These include basic challenges such as inadequate sanitation, food insecurity, limited internet access and poor education systems. According to a 2022 demographic report on internet usage released by Statista, internet usage is expanding in Africa, with an estimated 570 million users. However, technological uptake varies within the continent. Nations including Nigeria, Egypt and South Africa lead in smartphone adoption, but the continent's overall internet penetration is just 43%, well below the global average of 67%.
In the agricultural sector, most Africans still depend on subsistence farming for their livelihoods. Climate change, however, has drastically affected farm outputs, leading many farmers to turn to other sources of income that seem more lucrative. Health care in many African countries, especially in rural areas, is not digitalized. Many local clinics and hospitals still use paper for orders and records, an indication of how far we are from a universally digitized healthcare system.
Despite these challenges, the silver lining is that technology has the capacity to rapidly transform challenges into opportunities. For example, in Sierra Leone, various stakeholders like World Vision Sierra Leone have partnered with the local government to invest heavily in digitizing the health system through the Ministry of Health. In Mali, the company Robots Mali employs language processing to create educational content for schoolchildren by teaching fundamental concepts in Bambara, a widely spoken local language. This initiative addresses the challenge of poor early education performance among young learners who are taught in a language that is foreign to them.
The data dilemma: privacy, exploitation and surveillance
The success of AI relies heavily on the availability of robust data to train models. Private companies are collecting massive amounts of data from individuals, some without their knowledge. Data privacy policies are generally impenetrable to the average person, too long and technical to bother with, and difficult to revisit for verification.
The exploitation of African data, and the general absence of data sovereignty, underscores the urgent necessity of improving data privacy and transparency across the continent. At the extreme, people are manipulated into surrendering biometric data, as in Kenya, where a recent World Coin cryptocurrency campaign had a multitude of Kenyans queuing for hours to surrender their iris biometric data for a meager $49 inducement.
This incident reflects a recurring narrative in African nations, where personal data is amassed by private entities without sufficient informed consent from the public and without due consideration for data privacy and transparency. It took a groundswell of concern among Kenyan citizens about the utilization and storage of their data for the government to halt World Coin activities. The episode serves as a potent reminder of the broader issue that needs solving before AI can flourish in Africa: a lack of control over personal data and the imperative to safeguard data privacy and transparency in Africa and beyond.
AI policy initiatives and impact on African development
Yet the embrace of AI holds the potential to help African economies grow exponentially. Many African countries have realized this and have started to develop policies that position themselves as AI leaders. The Science for Africa Foundation's Science Policy Engagement for African Research (SPEAR) programme is seeking to address the AI policy gaps across sectors in Africa, on the belief that the transformative potential of AI can accelerate achievement of the UN's Sustainable Development Goals and improve economies across the continent.
The SFA Foundation's approach emphasizes the importance of diverse, equitable, inclusive, adaptable and stakeholder-owned policies. By holding regional convenings to engage stakeholders in identifying country- and regional-level policy needs, and by encouraging dialogue and collaboration, the Foundation aims to formulate effective policies that are evidence-based and aligned with development goals and local contexts. Two of these workshops have already been held, in Southern Africa and in West Africa, with more to follow targeting the three remaining regions. At the end of the convenings, the SFA Foundation will generate a report on the status of AI in global health in Africa.
Seizing the AI opportunity for a prosperous Africa
AI is not just a distant aspiration for Africa; it is already at work, transforming lives in Sierra Leone and Mali. AI holds immense potential to address the issues Africa is grappling with, ranging from internet access disparities and climate-induced agricultural woes to the critical need for healthcare digitization. However, the journey ahead demands a collective effort.
To harness AI's full potential, Africa must prioritize data privacy and transparency and safeguard its data resources from exploitation. It's crucial that African nations continue to develop equitable, stakeholder-owned policies through initiatives like the SFA Foundation's SPEAR programme. The time is ripe to invest in education and nurture local talent to effectively manage data. In doing so, we can ensure that AI doesn't exacerbate existing inequalities but instead drives us toward a safer, more sustainable and more equitable African future. Let us seize this opportunity to propel Africa's development, bridging the inequity gap and fostering prosperity through the transformative power of artificial intelligence.
$4.5 million for three FRQS Dual Chairs in Artificial Intelligence and … – McGill Newsroom
The FRQS Dual AI Chairs Program supports research collaborations across disciplines in pursuit of the significant potential of AI to address some of humanity's greatest health challenges.
The rapid development and deployment of artificial intelligence demands that we connect our best and brightest minds and work to train the next generation of research leaders. In June, the Fonds de recherche du Québec – Santé (FRQS) announced $4.5 million for three Dual Chairs in Artificial Intelligence and Health/Digital Health and Life Sciences, all three of which were awarded to teams co-directed by McGill researchers. The program brings together researchers with complementary expertise in AI, data sciences and life sciences to address issues and challenges impacting the health of Canadians and the efficiency and effectiveness of the healthcare system. With the investment from this and a previous call in 2021, the program will facilitate simultaneous research training for more than 60 students and postdoctoral fellows in the fields of AI and life sciences.
Each chair will receive $1.5 million, distributed over three years. The Dual AI Chairs are supported in part by the ministère de l'Économie, de l'Innovation et de l'Énergie. As of July 1st, the programs are actively recruiting trainees.
"With this significant support from the Fonds de recherche du Québec – Santé (FRQS), an emerging generation of researchers will develop the skills and expertise they need to design the health solutions of the future, to make medicine safer, and to advance treatment for some of the most devastating diseases and disorders," said Martha Crago, Vice-Principal, Research and Innovation. "The fact that McGill researchers are co-directing all three FRQS Dual AI Chairs is truly impressive, and a testament to the expansive expertise and collaborative spirit of our AI, data sciences, and life sciences research communities," she added.
Professor of Neurology and Neurosurgery and Director of the Centre for Research in Neuroscience (RI-MUHC), Keith Murai, and McGill Professor of Computer Science, Kaleem Siddiqi, will co-direct the Dual AI Chair, "Cracking the nanoscopic structural code of the brain: Artificial intelligence and computer vision approaches for brain health," which promises to advance understanding of Alzheimer's and other neurodegenerative diseases.
McGill Associate Professor of Medical Physics, John Kildea, and Associate Professor in the Department of Computer Engineering and Software Engineering at Polytechnique Montréal, Amal Zouaq, will co-direct the Dual AI Chair, "Smart data for smart cancer care," a research program that combines expertise in natural language processing, semantic web technologies, and patient-centered data to create knowledge bases in oncology. With the goals of reducing risk and making cancer treatment safer and more effective, Kildea and Zouaq are collaborating to build an AI solution that will combine, consolidate, and exploit unstructured health data.
Mathieu Blanchette, Associate Professor in McGill's School of Computer Science, will co-direct the Dual AI Chair, "Développement d'approches en intelligence artificielle pour élucider les codes de régulation des ARN et exploiter leur potentiel thérapeutique" (developing artificial intelligence approaches to elucidate RNA regulatory codes and exploit their therapeutic potential), with Éric Lécuyer of the Montreal Clinical Research Institute (IRCM). This program aims to tap into the potential of AI to facilitate discoveries in RNA biology and therapeutics.
Learn more about the three FRQS Dual AI Chairs
Councilwoman talks about the future of artificial intelligence in NYC – Spectrum News NY1
As city schools embrace the use of emerging artificial intelligence technology, local lawmakers are looking to understand the benefits and the risks involved.
The City Council held a hearing Wednesday to get some answers from education officials on how AI tools are being used in schools, as well as how third-party vendors are vetted.
Education officials testified that AI tools are important to students' education and career training.
City Councilwoman Jennifer Gutiérrez, chair of the council's committee on technology, joined NY1 political anchor Errol Louis on Inside City Hall to discuss more.
"We want to ensure that our schools, obviously, are completely connected and that they have access, that our young people have access. That they can start thinking about careers in this industry as early as first, second and third grade," said Gutiérrez.
Her district includes the Brooklyn neighborhoods of Williamsburg and Bushwick, as well as Ridgewood in Queens.
PLANNING AHEAD: What can happen when the law meets artificial intelligence – The Mercury
JANET COLLITON
It seems sometimes that everywhere you go and in every news medium you consult, a major subject of interest is Artificial Intelligence, otherwise known as AI. What AI is and what it means for the future has been the subject of television interviews such as the one on the popular television program 60 Minutes between interviewer Scott Pelley and Google CEO Sundar Pichai (July 9, 2023).
AI has also inspired legal writings, such as the articles appearing in the July/August 2023 edition of The Pennsylvania Lawyer, a publication of the Pennsylvania Bar Association. For better or for worse, AI has been described as impacting everything from the way we work to the way we write, think and organize data.
Artificial Intelligence has been described as the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.
Another simple description is the science of making machines that can think like humans. It can do things that are considered smart. AI technology can process large amounts of data in ways humans cannot. The goal for AI is to be able to do things such as recognize patterns, make decisions, and judge like humans. As early as 2005 and 2006, chess programs based on AI were able to win decisive victories against human international chess champions. In 2023, CodeX, the Stanford Center for Legal Informatics, and the legal technology company Casetext announced what they called a watershed moment: research collaborators had deployed GPT-4, the latest-generation large language model, to take and pass the Uniform Bar Exam. GPT-4 didn't just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior large language models' scores but also the average score of real-life bar exam takers, scoring in the 90th percentile.
Legal use of Artificial Intelligence obviously goes well beyond competition between machines and students in passing the bar exam. How it can be used is a subject of ongoing debate. It is pointed out that AI itself does not think or feel the way we do. It takes massive amounts of data, organizes it and arrives at conclusions. It can even make up answers and deceive, which is a subject of great concern.
In the previously cited Pennsylvania Bar Association magazine, The Pennsylvania Lawyer, two articles, including the cover article "A Cautionary Tale of AI as a Research Tool" and another, "The Not-So-Quiet Revolution: AI and the Practice of Law," explore AI, GPT and the actual and potential effects of this revolution.
In "In re Estate of Bupp: A Cautionary Tale," the author describes his adventures as an associate attorney who was tasked by a partner with researching a statute of limitations issue regarding an accounting. The associate decided to use GPT (an AI system) to find the answer and shortly came across the case of Elwood P. Bupp, who had filed a petition to be appointed guardian for Florence P. Zook, an elderly woman. The case described a hearing where Bupp was removed as guardian and cited later appellate decisions. The only problem was that Bupp never existed. Neither did Zook, the hearing dates or the decisions described. The story was completely made up by the AI. This was the cautionary tale.
The second article, "The Not-So-Quiet Revolution: AI and the Practice of Law," gave a more nuanced view of AI and its possible practical uses. The technology can sort massive amounts of data (the kind frequently produced in discovery) and locate and organize information at a speed unknown to humans. The author also suggested it might help some individuals without access to legal services handle some matters on their own. Always, I would note, there would be the Bupp concern in mind regarding accuracy.
This was not the end of my learning about AI and the law. Last week I attended the National Elder Law Forum in Chicago where the lead speaker took us through some further positives and negatives. There is still much to be learned.
Janet Colliton Esq. is a Certified Elder Law Attorney (CELA) by the National Elder Law Foundation and limits her practice to elder law, retirement, life care, special needs, and estate planning and administration with offices at 790 East Market St., Ste. 250, West Chester, 610-436-6674, colliton@collitonlaw.com. She is a member of the National Academy of Elder Law Attorneys and Pennsylvania Association of Elder Law Attorneys and, with Jeffrey Jones, CSA, co-founder of Life Transition Services LLC, a service for families with long term care needs.
See original here:
PLANNING AHEAD: What can happen when the law meets artificial intelligence - The Mercury
Artificial intelligence at universities: a pressing issue University … – University Affairs
The overwhelming rise of text generators raises the need for reflection and guidelines to ensure their ethical use in an academic setting.
Just a year ago, the debate around artificial intelligence (AI) was largely theoretical. According to Caroline Quesnel, president of the Fédération nationale des enseignantes et enseignants du Québec (FNEEQ-CSN) and literature instructor at Collège Jean-de-Brébeuf, the start of the 2023 winter semester marked a turning point as ChatGPT became a focal point in classrooms. Other forms of generative AI are also available to the public, such as QuillBot (text writing and correction), DeepL (translation) and UPDF (PDF summarization).
Martine Peters, a professor of educational science at the Université du Québec en Outaouais, surveyed 900 students and found that 22 per cent were already using ChatGPT (sometimes, often or always) to do their assignments. "And that was in February!" she noted. It is an alarming statistic, particularly as neither faculty nor universities were prepared to deal with the new technology. Trying to ban AI now would be futile, so what can universities do to ensure its ethical use?
Dr. Peters is convinced that AI can be used for educational purposes. It can help a person understand an article by summarizing, translating or serving as a starting point for further research. In her opinion, outside of language courses (which specifically assess language skills), it could also be used to correct a text or improve syntax, much like grammar software or writing services that some students have relied upon for years.
However, plagiarism remains a major concern for academics. And for the moment, there is no effective tool for detecting the use of AI. In fact, OpenAI, the company behind ChatGPT, abandoned its detection software this summer for lack of reliable results. "This is a rat race we're bound to lose," argued Dr. Quesnel. Should professors return to pen-and-paper tests and classroom discussions? Satisfactory solutions have yet to be found, but as Dr. Quesnel added, it's clear that AI creates tension, especially considering the massive pressures in academia. "Right now, we're spending a lot of energy looking at the benefits of AI instead of its pitfalls."
Beyond plagiarism, AI tools raise all kinds of issues (bias, no guarantee of accuracy, etc.) that the academic community needs to better understand. ChatGPT confidently spouts nonsense and makes up references; it's not very good at solving problems in philosophy or advanced physics. "You can't use it with your eyes closed," warned Bruno Poellhuber, a professor in the department of psychopedagogy and andragogy at the Université de Montréal.
More training is needed to help professors and students understand both the potential and the drawbacks of these technologies. "You have to know and understand the beast," Dr. Poellhuber added.
Dr. Peters agreed. "For years, we didn't teach how to do proper web searches. If we want our students to use AI ethically, we have to show them how, and right now nobody seems to be taking that on," she said.
Universities are responsible for training their instructors, who can then pass this knowledge on to students. "Students need to know when it's appropriate to use AI," explained Mélanie Rembert, ethics advisor at the Commission de l'éthique en science et en technologie (CEST).
The Université de Montréal and the Pôle montréalais d'enseignement supérieur en intelligence artificielle (PIA) organized a day of reflection and information for the academic community (professors, university management, etc.) in May. "The aim was to demystify the possibilities of generative AI and discuss its risks and challenges," Dr. Poellhuber explained.
This event followed an initial activity organized by the Quebec Ministère de l'Enseignement supérieur and IVADO, which gave rise to a joint Conseil supérieur de l'éducation (CSE) and CEST committee. The committee is currently conducting discussions, consultations and analysis among a wide range of experts on the use of generative AI in higher education. "Our two organizations saw the need for more documentation, reflection and analysis around this issue," said Ms. Rembert, who coordinates the expert committee's work. Briefs were solicited from higher education institutions and from student and faculty organizations. The report, scheduled to be released in late fall, will be available online.
Given the scale of the disruption, faculty members could also benefit from the experience of others and the support of a community of practice. That's the idea behind LiteratIA, a sharing group co-founded by Sandrine Prom Tep, associate professor in the management sciences school at the Université du Québec à Montréal. "It's all very well to talk about theory and risks, but teachers want tangible tools. They want to know what to do," she explained. Instead of letting themselves be outpaced by students, who are going to use AI anyway, teachers should adopt a strategy of transparency and sharing. "If we don't get on board, students will be calling the shots."
Universities and government alike will have to take a close look at the situation and set concrete, practical and enforceable guidelines. "We can't dawdle: AI is already in classrooms," said Dr. Quesnel, adding that faculty are currently shouldering a burden that should be shared by teaching institutions and the Ministère de l'Enseignement supérieur. "We need tools that teachers can rely on."
So far, very few universities have issued guidelines, and those that exist are often vague and difficult to apply. "There isn't much in terms of procedures, rules or policies, or tools and resources for applying them. Basically, teachers have to decide whether or not to allow AI, and make their own rules," Dr. Prom Tep added. Institutions will need to define clear policies for permissible and impermissible use, including but not limited to plagiarism (for example, how to use AI for correcting assignments, how to cite ChatGPT, etc.).
Rolling out policies and legislation can take time. "It's like when the web became prominent: legislation had to play catch-up," noted Dr. Prom Tep. The Observatoire international sur les impacts sociétaux de l'IA et du numérique (OBVIA), funded by the Fonds de recherche du Québec, is expected to make recommendations to the government, as is the joint expert committee. "But is that enough? Do we need broader consultations?" questioned Dr. Prom Tep, who would like to see permanent working groups set up. In her opinion, to avoid each institution reinventing the wheel, these reflections will have to be collective and shared, and neutral places for discussion will have to be created.