Category Archives: Artificial Intelligence
Artificial Intelligence Added as Honor Code Violation The Oberlin … – The Oberlin Review
On May 17, Oberlin changed the school's Honor Code Charter to include the use of artificial intelligence as a punishable offense under the cheating section of the Code. The Honor Code Charter is reviewed by the Honor Committee every three years.
The amended Charter prohibits the use of artificial intelligence software or other related programs to create or assist with assignments on behalf of a student unless otherwise specified by the faculty member and/or the Office of Disability & Access.
The decision comes in the wake of questions surrounding the threat to academic integrity posed by generative AI chatbots, such as OpenAI's ChatGPT.
AI cases could have been pursued under the old Charter, Associate Dean of Students Thom Julian said, and the new clause simply acts as a clarification rather than a change in policy.
The school felt it necessary to add to the Honor Code, Julian said. "We started to see issues come up within the classroom last year, and [at] a lot of our peer institutions, I saw that they were also having some similar issues. We just wanted to be able to provide really clear guidance around it, not just for faculty, but for students, so everyone has clear expectations within the classroom."
The Student Honor Committee and liaison made most of the edits, according to College second-year Kash Radocha, a member and panelist of the SHC. The proposed changes were then reviewed by peer institutions, and a legal review was conducted. After it attained approval from the SHC, it was also approved by the Faculty Honor Committee and General Faculty Council. The new changes came into effect for the fall 2023 semester after going through the Student Senate and the General Faculty.
The addition of AI is only one of the changes made to the Charter. Another revision allows both claimants and respondents to appeal decisions, unlike the earlier system in which only respondents could. Additionally, if a student is found not guilty by the SHC, the faculty member is advised to grade the assignment on its merits and note the reported violation. The College has also increased the maximum number of SHC seats from 15 to 20 and changed the process for removing a member.
"The amendment of the Charter was a highly collaborative process," Radocha said. "The process for amending the Honor Code Charter is not a light one, and multiple checks and balances are in place to ensure the changes are valid and widely accepted."
Professors have also had to grapple with the consequences of their students' knowledge of AI. Assistant Professor of Philosophy Amy Berg, who teaches a class on Ethics and Technology, said that when she taught the class in the spring of 2023, her students were already familiar with large language models, such as ChatGPT. However, since ChatGPT had only been released a few months prior to that class's start, she had not been able to add much to the curriculum.
"[T]he academic or philosophical or ethical work on ChatGPT just has not caught up to its use," she said. "So, I know when I teach the class next time, I'll have to spend a lot more time on AI and, specifically, on whatever forms of AI are current at the time."
Some members of the faculty have begun to add ChatGPT to their curriculums in creative ways to allow students to understand what its capabilities are. One example is Assistant Professor of Politics Joshua Freedman, who spoke about a ChatGPT assignment he gave to students in spring 2023.
"I thought that for both my own sake and the students' that we should use it to figure out what it's capable of," Freedman said. "And so, I had students ask ChatGPT a question of relevance to the course and then, in a series of follow-up questions, I had them dig deeper and deeper, keep pushing the AI to give them the best possible answer. To see how well does this large language model answer the questions that we're trying to answer in this course."
Professor Freedman said that he likes the idea of a default ban while giving faculty the power to allow the use of AI for certain assignments. While Professor Berg has not yet changed the structure of her assignments to include ChatGPT, she does think that the way classes are conducted will change.
"I would expect that many professors, maybe me included, will move to oral assignments, in-class assignments, more in-class writing, less done out of class, because we're concerned that, for various reasons, people will take shortcuts," Professor Berg said. "I think, also, some professors will look for ways to integrate ChatGPT into the writing or thinking process, and there are good reasons to do that, too."
According to Radocha, the addition to the Honor Charter allows the school to better plan for the future of AI.
"It is a precautionary measure for us to include it within the Charter now, so that by the time we review it again in 2026, we can amend the current AI clause based on how we have experienced it via cases in that timespan of three years," Radocha said.
CFPB Issues Guidance on Credit Denials by Lenders Using Artificial … – Consumer Financial Protection Bureau
WASHINGTON, D.C. – Today, the Consumer Financial Protection Bureau (CFPB) issued guidance about certain legal requirements that lenders must adhere to when using artificial intelligence and other complex models. The guidance describes how lenders must use specific and accurate reasons when taking adverse actions against consumers. This means that creditors cannot simply use CFPB sample adverse action forms and checklists if they do not reflect the actual reason for the denial of credit or a change of credit conditions. This requirement is especially important with the growth of advanced algorithms and personal consumer data in credit underwriting. Explaining the reasons for adverse actions helps improve consumers' chances for future credit and protects consumers from illegal discrimination.
"Technology marketed as artificial intelligence is expanding the data used for lending decisions, and also growing the list of potential reasons for why credit is denied," said CFPB Director Rohit Chopra. "Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence."
In today's marketplace, creditors are increasingly using complex algorithms, marketed as artificial intelligence, and other predictive decision-making technologies in their underwriting models. Creditors often feed these complex algorithms with large datasets, sometimes including data that may be harvested from consumer surveillance. As a result, a consumer may be denied credit for reasons they may not consider particularly relevant to their finances. Despite the potentially expansive list of reasons for adverse credit actions, some creditors may inappropriately rely on a checklist of reasons provided in CFPB sample forms. However, the Equal Credit Opportunity Act does not allow creditors to simply conduct check-the-box exercises when delivering notices of adverse action if doing so fails to accurately inform consumers why adverse actions were taken.
In fact, the CFPB confirmed in a circular from last year that the Equal Credit Opportunity Act requires creditors to explain the specific reasons for taking adverse actions. This requirement remains even if those companies use complex algorithms and black-box credit models that make it difficult to identify those reasons. Today's guidance expands on last year's circular by explaining that sample adverse action checklists should not be considered exhaustive, nor do they automatically cover a creditor's legal requirements.
Specifically, today's guidance explains that even for adverse decisions made by complex algorithms, creditors must provide accurate and specific reasons. Generally, creditors cannot state the reasons for adverse actions by pointing to a broad bucket. For instance, if a creditor decides to lower the limit on a consumer's credit line based on behavioral spending data, the explanation would likely need to provide more details about the specific negative behaviors that led to the reduction beyond a general reason like "purchasing history."
Creditors that simply select the closest factors from the checklist of sample reasons are not in compliance with the law if those reasons do not sufficiently reflect the actual reason for the action taken. Creditors must disclose the specific reasons, even if consumers may be surprised, upset, or angered to learn their credit applications were being graded on data that may not intuitively relate to their finances.
In addition to today's and last year's circulars, the CFPB has issued an advisory opinion that consumer financial protection law requires lenders to provide adverse action notices to borrowers when changes are made to their existing credit.
The CFPB has made the intersection of fair lending and technology a priority. For instance, as the demand for digital, algorithmic scoring of prospective tenants has increased among corporate landlords, the CFPB reminded landlords that prospective tenants must receive adverse action notices when denied housing. The CFPB also has joined with other federal agencies to issue a proposed rule on automated valuation models, and is actively working to ensure that black-box models do not lead to acts of digital redlining in the mortgage market.
Read Consumer Financial Protection Circular 2023-03, "Adverse action notification requirements and the proper use of the CFPB's sample forms provided in Regulation B."
Consumers can submit complaints about financial products and services by visiting the CFPB's website or by calling (855) 411-CFPB (2372).
Employees who believe their companies have violated federal consumer financial protection laws are encouraged to send information about what they know to whistleblower@cfpb.gov. Workers in technical fields, including those who design, develop, and implement artificial intelligence, may also report potential misconduct to the CFPB. To learn more, visit the CFPB's website.
The Consumer Financial Protection Bureau is a 21st century agency that implements and enforces Federal consumer financial law and ensures that markets for consumer financial products are fair, transparent, and competitive. For more information, visit consumerfinance.gov.
NYU and KAIST Launch Major New Initiative on Artificial Intelligence … – New York University
NYU President Linda G. Mills and Korea Advanced Institute of Science and Technology (KAIST) President Kwang Hyung Lee were joined by Sung Bae Jun, president of the Institute of Information & Communications Technology Planning & Evaluation, and Joon Hee Joh, president of the Korea Software Industry Association, in signing an agreement to collaborate on a major Artificial Intelligence (AI) and digital technologies research effort.
Senior public officials, including the President of the Republic of Korea, Yoon Suk Yeol; Korea's Minister of Science and Information and Communications Technology, Jong-Ho Lee; the Director of the US National Science Foundation, Sethuraman Panchanathan; NYC Deputy Mayor for Housing, Economic Development, and Workforce Maria Torres-Springer; and Turing Award-winning AI scientist and NYU faculty member Yann LeCun, convened at NYU's Greenwich Village campus to mark the new partnership and launch a Digital Vision Forum with leading thinkers on AI and digital governance from around the world. Senator Charles Schumer participated in the proceedings via video. The event also marked the anniversary of the first Digital Vision Forum, which was held precisely a year ago at NYU to initiate the partnership between NYU, the Republic of Korea, and KAIST, an event that also featured remarks by President Yoon.
"Today's historic event positions NYU, New York City, and Korea at the forefront of the global science and tech ecosystem of the future," NYU President Mills said. "We are honored to bring together leaders in government, academia, and industry to commemorate a vital and historic partnership that will propel scholarship and advancements in technology. We are thrilled by this partnership, which exemplifies both NYU's commitment to global learning and research as well as our role in fueling the growth of New York City's tech, science, and innovation sector."
Senator Schumer said, "I want to commend President Yoon and my friend, NYU President Linda Mills, on today's announcement of a historic joint research program between NYU and the South Korean government. The partnership is a partnership made in heaven: NYU, one of the nation's leading research institutions, and South Korea, one of America's strongest allies and partners, and also a leader in research and science, collaborating on one of the most important issues of our time, artificial intelligence."
NSF Director Panchanathan said, "As our two presidents affirmed at the State Visit in April, the U.S. and the Republic of Korea have a truly global alliance that champions democratic principles, enriches economic cooperation, and empowers technological advances. NSF shares in President Yoon's conviction that human values are important in the development of new technology. Values including openness and transparency, and the creation of AI tools that are responsible and ethical, without bias, and protect the security and privacy of our people."
The research effort, the ROK Institutions-NYU AI and Digital Partnership, aims to conduct world-class research in AI and digital technologies. The partnership is expected to be headquartered at NYU.
Today's event marks the expansion of NYU's previously announced partnership and strengthens the University's links to Korea and its institutions. The event included a wide-ranging panel discussion about AI and digital governance by prominent scholars in the field. The panel was moderated by Professor Matthew Liao, director of the Center for Bioethics at NYU's School of Global Public Health; the panelists included:
Professor Kyung-hyun Cho, Deputy Director for NYU Center for Data Science & Courant Institute
Professor Luciano Floridi, Founding Director of the Digital Ethics Center, Yale University
Professor Urs Gasser, Rector of the Hochschule für Politik, Technical University of Munich
Professor Shannon Vallor, Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence, University of Edinburgh
Professor Stefaan Verhulst, Co-Founder & Director of GovLab's Data Program, NYU Tandon School of Engineering, and
Professor Jong Chul Ye, Director of Promotion Council for Digital Health, KAIST
For NYU's new president, Linda G. Mills, this is the third major global agreement she has signed this month. She earlier signed a research partnership agreement with IIT Kanpur in India (an agreement cited by US President Joe Biden and Indian Prime Minister Narendra Modi in their joint statement) and renewed a partnership agreement between NYU's Shanghai campus and East China Normal University. Building on NYU's unrivaled global presence, strength, and character is expected to be a major priority of her administration.
About NYU
Founded in 1831, NYU is one of the world's foremost research universities (with more than $1 billion per year in research expenditures) and is a member of the selective Association of American Universities. NYU has degree-granting university campuses in New York, Abu Dhabi, and Shanghai; has 12 other global academic sites, including London, Paris, Florence, Tel Aviv, Buenos Aires, and Accra; and both sends more students to study abroad and educates more international students than any other U.S. college or university. Through its numerous schools and colleges, NYU is a leader in conducting research and providing education in the arts and sciences, law, medicine, engineering, business, dentistry, education, nursing, the cinematic and performing arts, music and studio arts, public administration, social work, and professional studies, among other areas.
About KAIST
Since KAIST was established in 1971, KAIST and its alumni have been the gateway to advanced science and technology, innovation, and entrepreneurship and have made a significant contribution to creating the dynamic economy of today's Korea. KAIST has now emerged as one of the most innovative universities; it ranked 1st among the Most Innovative Universities in Asia from 2016 to 2018 and 11th in the World's Most Innovative Universities in 2018 by Thomson Reuters. KAIST was named as one of the Global 100 Innovators in 2021 by Clarivate, the only university listed. QS ranked KAIST the 20th-best university in engineering and technology in 2022, and the Nature Index's Young University Ranking placed KAIST 4th in the world. KAIST continues to spearhead innovation and lead the advance of science and technology in Korea and beyond, and aims to contribute to the development of new dynamic engines of growth and innovation through collaboration with NYU to foster more future-oriented, creative global talents, young researchers, and entrepreneurs in the creative environment of New York City.
IBM Commits To Training Two Million in Artificial Intelligence – Facility Executive Magazine
To help close the global artificial intelligence (AI) skills gap, IBM announced a commitment to train two million learners in AI by the end of 2026, with a focus on underrepresented communities.
To achieve this goal at a global scale, IBM is expanding AI education collaborations with universities globally, collaborating with partners to deliver AI training to adult learners, and launching new generative AI coursework through IBM SkillsBuild. This will expand upon IBM's existing programs and career-building platforms to offer enhanced access to AI education and in-demand technical roles.
According to a recent global study conducted by the IBM Institute for Business Value, surveyed executives estimate that implementing AI and automation will require 40% of their workforce to reskill over the next three years, mostly those in entry-level positions. This further reinforces that generative AI is creating a demand for new roles and skills.
IBM is collaborating with universities at a global level to build capacity around AI, leveraging IBM's network of experts. University faculty will have access to IBM-led training such as lectures and immersive skilling experiences, including certificates upon completion. Also, IBM will provide courseware for faculty to use in the classroom, including self-directed AI learning paths. In addition to faculty training, IBM will offer students flexible and adaptable resources, including free, online courses on generative AI and Red Hat open source technologies.
Through IBM SkillsBuild, learners across the world can benefit from AI education developed by IBM experts to provide the latest in cutting edge technology developments. IBM SkillsBuild already offers free coursework in AI fundamentals, chatbots, and crucial topics such as AI ethics. The new generative AI roadmap includes coursework and enhanced features.
These courses are all completely free and available to learners around the world. At course completion, participants will be able to earn IBM-branded digital credentials that are recognized by potential employers.
This new effort builds on IBM's existing commitment to skill 30 million people by 2030, and is intended to address the urgent needs facing today's workforce. Since 2021, over 7 million learners have enrolled in IBM courses. Worldwide, the skills gap presents a major obstacle to the successful application of AI and digitalization, across industries and beyond technology experts, a challenge that requires a comprehensive, global response. IBM's legacy of investing in the future of work includes making free online learning widely available, with clear pathways to employment, and a focus on historically underrepresented communities in tech, where the skills gap is wider.
When Artificial Intelligence Gets It Wrong – Innocence Project
Porcha Woodruff was eight months pregnant when she was arrested for carjacking. The Detroit police used facial recognition technology to run an image of the carjacking suspect through a mugshot database, and Ms. Woodruff's photo was among those returned.
Ms. Woodruff, an aesthetician and nursing student who was preparing her two daughters for school, was shocked when officers told her that she was being arrested for a crime she did not commit. She was questioned over the course of 11 hours at the Detroit Detention Center.
A month later, the prosecutor dismissed the case against her based on insufficient evidence.
Ms. Woodruff's ordeal demonstrates the very real risk that cutting-edge artificial intelligence-based technology, like the facial recognition software at issue in her case, presents to innocent people, especially when such technology is neither rigorously tested nor regulated before it is deployed.
Time and again, facial recognition technology gets it wrong, as it did in Ms. Woodruff's case. Although its accuracy has improved over recent years, this technology still relies heavily on vast quantities of information that it is incapable of assessing for reliability. And, in many cases, that information is biased.
In 2016, Georgetown University's Center on Privacy & Technology noted that at least 26 states allow police officers to run, or request to have run, facial recognition searches against their driver's license and ID databases. Based on this figure, the center estimated that one in two American adults has their image stored in a law enforcement facial recognition network. Furthermore, given the disproportionate rate at which African Americans are subject to arrest, the center found that facial recognition systems that rely on mug shot databases are likely to include an equally disproportionate number of African Americans.
More disturbingly, facial recognition software is significantly less reliable for Black and Asian people, who, according to a study by the National Institute of Standards and Technology, were 10 to 100 times more likely to be misidentified than white people. The institute, along with other independent studies, found that the systems' algorithms struggled to distinguish between facial structures and darker skin tones.
The use of such biased technology has had real-world consequences for innocent people throughout the country. To date, six people that we know of have reported being falsely accused of a crime following a facial recognition match; all six were Black. Three of those who were falsely accused in Detroit have filed lawsuits, one of which urges the city to gather more evidence in cases involving facial recognition searches and to end the facial-recognition-to-lineup pipeline.
Former Detroit Police Chief James Craig acknowledged that if the city's officers were to use facial recognition by itself, it would yield misidentifications 96% of the time.
Even when an AI-powered technology is properly tested, the risks of a wrongful arrest and wrongful conviction remain and are exacerbated by these new tools.
That's because when AI identifies a suspect, it can create a powerful, unconscious bias against the technology-identified person, one that hardens the focus of an investigation and steers it away from other suspects.
Indeed, such technology-induced tunnel vision has already had damaging ramifications.
For example, in 2021, Michael Williams was jailed in Chicago for the first-degree murder of Safarian Herring based on a ShotSpotter alert that police received. Although ShotSpotter purports to triangulate a gunshot's location through an AI algorithm and a network of microphones, an investigation by the Associated Press found that the system is deeply statistically unreliable because it can frequently miss live gunfire or mistake other sounds for gunshots. Still, based on the alert and a noiseless security video that showed a car driving through an intersection, Mr. Williams was arrested and jailed for nearly a year, even though police and prosecutors never established a motive explaining his alleged involvement, had no witnesses to the murder, and found no physical evidence tying him to the crime. According to a federal lawsuit later filed by Mr. Williams, investigators also ignored other leads, including reports that another person had previously attempted to shoot Mr. Herring. Mr. Williams spent nearly a year in jail before the case against him was dismissed.
Cases like Ms. Woodruff's and Mr. Williams' highlight the dangers of law enforcement's overreliance on AI technology, including an unfounded belief that such technology is a fair and objective processor of data.
Absent comprehensive testing or oversight, the introduction of additional AI-driven technology will only increase the risk of wrongful conviction and may displace the effective policing strategies, such as community engagement and relationship-building, that we know can reduce wrongful arrests.
We enter this fall with a number of significant victories under our belt, including 7 exonerations since the start of the year. Through the cases of people like Rosa Jimenez and Leonard Mack, we've leveraged significant advances in DNA technology and other sciences to free innocent people from prison.
We are committed to countering the harmful effects of emerging technologies, advocating for research on AIs reliability and validity, and urging consideration of the ethical, legal, social and racial justice implications of its use.
We support a moratorium on the use of facial recognition technology in the criminal legal system until such time as research establishes its validity and impacted communities are given the opportunity to weigh in on the scope of its implementation.
We are pushing for more transparency around so-called "black box" technologies, whose inner workings are hidden from users.
We believe that any law enforcement reliance on AI technology in a criminal case must be immediately disclosed to the defense and subjected to rigorous adversarial testing in the courtroom.
Building on President Biden's executive order directing the National Academy of Sciences to study certain AI-based technologies that can lead to wrongful convictions, we are also collaborating with various partners to collect the necessary data to enact reforms.
And, finally, we encourage Congress to make explicit the ways in which it will regulate investigative technologies to protect personal data.
It is only through these efforts that we can protect innocent people from further risk of wrongful conviction in today's digital age.
With gratitude,
Christina Swarns, Executive Director, Innocence Project
Can Artificial Intelligence Make the Travel Industry Sustainable? – Impakter
Sustainability is no longer a passing trend; it's a global imperative. The travel industry, in particular, is experiencing a surge in demand for sustainable options. According to the Booking.com 2023 Sustainable Travel Report, three-quarters of global consumers now seek more environmentally friendly travel choices, marking an 8 percent increase from the previous year.
Moreover, nearly half of travelers are willing to pay extra to reduce their carbon footprint while journeying. Despite these growing eco-friendly aspirations, many travelers struggle to find sustainable options, with just over half believing that such choices are limited, and 44 percent uncertain about where to locate them.
Today, hotels spend an estimated $8 billion annually on sustainability management, primarily due to cumbersome, fragmented, and predominantly manual processes. These antiquated data collection methods, which rely on tools such as email, Excel, and online surveys, pervade the sustainability data ecosystem. This extends from how hotels transmit their information to third-party green certification bodies and various sales channels to how they gather data from diverse sources like suppliers or various operators within hotel chains.
The result? Hotels are struggling to meet sustainability targets, and global carbon emissions from the hotel sector are projected to rise. According to the Sustainable Hospitality Alliance, hotels must reduce their carbon emissions by 66 percent per room by 2030 and by 90 percent by 2050 to avert further environmental damage. However, without reliable, easily interpretable data, stakeholders lack the means to make significant, positive changes. They cannot benchmark progress, set goals, or effectively communicate successes to customers. Manual data collection, processing, and analysis demand human resources that many hotels lack, and even if available, the staff might not possess the required training.
The business case for technology-backed sustainability management is compelling. Hotels with eco-credentials attract four times more guests compared to those lacking sustainability certifications. Taking into account guests willingness to pay more for eco-friendly accommodations, it is estimated that poor sustainability management costs the industry $21 billion, including $13 billion in missed revenue.
What does an effective tech-supported sustainability management system look like? First and foremost, it should serve as a central hub for the collection of all sustainability data. This platform should be intuitive, scalable, and equipped with AI capabilities. It should cater to staff at the property level as well as higher-ups responsible for sustainability oversight across the organization.
Secondly, it should seamlessly integrate with third parties such as regulatory bodies and sales channels through plug-and-play APIs, facilitating the smooth transmission of sustainability data. Platforms like Booking.com, for example, enable consumers searching for hotels to filter accommodations based on their sustainability score, from level 1 to 3+. Specific actions taken by a hotel to earn its sustainability score, such as the elimination of single-use plastics and the installation of electric car charging stations, as well as any certifications earned (e.g., Green Key), should be readily accessible. A hotel that automatically communicates its sustainability data to sales channels, as opposed to sporadic data dumps, can leverage sustainability metrics as a key differentiator and showcase progress over time.
Taking it a step further, an ideal platform should automate sustainability data collection by integrating with tools like smart meters. This approach reduces the risk of human error and provides real-time, transparent insights into energy consumption and waste management. Armed with accurate, real-time data, an AI-enabled platform can offer automatic sustainability improvement suggestions across various parameters, enhancing operations, reducing water consumption, and meeting certification requirements. This, in turn, frees up valuable human resources for more strategic tasks.
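To make the data flow described above concrete, the sketch below shows how a hotel-side platform might turn smart-meter readings into a coarse sustainability level (on the level 1 to 3+ scale used by channel filters) and push the result to a sales channel. This is a minimal illustration under stated assumptions: the endpoint URL, field names, and scoring thresholds are hypothetical, not any real vendor's or channel's API.

```python
# Minimal sketch: derive a coarse sustainability level from smart-meter metrics
# and push it to a hypothetical sales-channel endpoint. Thresholds, field names,
# and the URL are illustrative assumptions only.
from dataclasses import dataclass
import json
import urllib.request


@dataclass
class MeterReading:
    kwh_per_occupied_room: float   # energy intensity from a smart meter
    litres_water_per_guest: float  # water intensity
    single_use_plastics: bool      # property-level practice flag


def sustainability_level(r: MeterReading) -> str:
    """Map raw metrics to a coarse level; thresholds are made up for the example."""
    points = 0
    if r.kwh_per_occupied_room < 20:
        points += 1
    if r.litres_water_per_guest < 150:
        points += 1
    if not r.single_use_plastics:
        points += 1
    return {0: "1", 1: "1", 2: "2", 3: "3+"}[points]


def push_to_channel(hotel_id: str, reading: MeterReading, endpoint: str) -> None:
    """POST the latest metrics and derived level to a (hypothetical) channel API."""
    payload = {
        "hotel_id": hotel_id,
        "metrics": {
            "kwh_per_occupied_room": reading.kwh_per_occupied_room,
            "litres_water_per_guest": reading.litres_water_per_guest,
            "single_use_plastics": reading.single_use_plastics,
        },
        "sustainability_level": sustainability_level(reading),
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("channel responded with status", resp.status)


if __name__ == "__main__":
    reading = MeterReading(kwh_per_occupied_room=17.5,
                           litres_water_per_guest=130.0,
                           single_use_plastics=False)
    print("derived level:", sustainability_level(reading))
    # push_to_channel("hotel-123", reading, "https://channel.example/api/sustainability")
```

The point of automating this hand-off, rather than relying on sporadic manual data dumps, is that the score a channel displays stays current as the underlying meter data changes.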
Recognizing that in-house development of such technology and expertise is often unsustainable, especially in light of labor shortages, many hotel brands are seeking partners with easy-to-implement solutions to streamline sustainability reporting and enhance efficiency. However, it's crucial for brands to choose partners with proven experience in sustainability management within the hospitality sector. A partner specializing in the hotel industry is uniquely positioned to understand the industry's nuances and challenges, enabling them to create tailored solutions that address pain points. This approach empowers brands to implement more ambitious, eco-friendly policies that save money, boost revenue, and enhance their competitive advantage.
In conclusion, sustainability management and Environmental, Social, and Governance (ESG) reporting have evolved beyond mere compliance requirements for companies. They have become essential drivers of business success. As both investors and consumers increasingly demand transparency and accountability from companies, sustainability has become a critical component of overall business strategy. By adopting a proactive stance, companies can leverage sustainability and ESG reporting to gain a competitive edge in the marketplace.
Editor's Note: The opinions expressed here by the authors are their own, not those of Impakter.com.
Artificial Intelligence: Key Business and Legal Issues to Consider – Sidley Austin LLP
The rapid growth of artificial intelligence (AI) development and adoption, particularly generative AI and machine learning applications, has captured the attention of business leaders, academics, investors, and regulators worldwide. AI is also requiring companies to confront an evolving host of questions across different areas of law, including privacy, cybersecurity, commercial and intellectual property transactions, intellectual property ownership and rights, products liability, labor and employment, insurance, consumer protection, corporate governance, national security, ethics, government policy, and regulation.
Below, we outline questions that companies and their boards should consider as they navigate this ever-evolving technological innovation. Many of these questions are industry-agnostic, but all companies must also address challenges specific to the industry and regulatory environment in which they operate.
Sidley has a multi-disciplinary AI industry team focused on providing our clients with practical and actionable guidance on the wide range of regulatory, transactional, and litigation issues companies face in evaluating, leveraging, and mitigating risk from AI.
To discuss the business and legal implications for your company, please contact one of the individuals below or one of the dedicated Sidley lawyers with whom you work.
Updated: September 19, 2023
In light of the evolving situation, we are reviewing and frequently updating information provided in the PDF.
Dartmouth to Host Conference on Artificial Intelligence – Dartmouth News
An inaugural Dartmouth AI Conference, to be held on Sept. 29, will honor the institution's legacy as the birthplace of artificial intelligence while also discussing the rapid advancements and challenges permeating the current AI landscape.
Spearheaded by the Tuck School of Business and the Tuck Center for Digital Strategies, the conference will convene industry stalwarts from diverse sectors including banking, health care, technology, venture capital, and consulting.
Patrick Wheeler, executive director of the Tuck Center for Digital Strategies, emphasized the timeliness and relevance of the discussions slated for the conference. "AI application is evolving rapidly across both academia and the business sector. Dartmouth stands at the crossroads of this evolution, fostering AI developments that are technically sound, ethically responsible, and practically beneficial for society," Wheeler says.
The one-day conference, to be held at Tuck's Raether McLaughlin Atrium, offers a rich platform for students, faculty, and staff to interact with leaders and experts steering the current innovations in the field. Alumni can participate virtually, ensuring the Dartmouth community worldwide can engage in the event.
An impressive roster of speakers will be featured at the event.
A central theme of the conference will be the responsible and ethical creation and utilization of AI. Dartmouth, with its rich interdisciplinary tradition, is uniquely positioned to lead discussions that meld deep technical expertise with a liberal arts approach to the ethical dimensions inherent in AI development.
"This event is a great opportunity to synthesize and showcase all the innovations in this exciting and dynamic field happening in different pockets around campus to audiences within and outside Dartmouth," says LaMar Bunts, chief transformation officer.
Dartmouth has a long history with AI. The 1956 Dartmouth Summer Research Project on Artificial Intelligence is widely seen as the foundational event that kickstarted research in artificial intelligence.
Seeing is Believing: The Role of Artificial Intelligence in … – MD Magazine
The phrase "artificial intelligence" is everywhere in the public consciousness both a buzzword promising a brighter tomorrow and a curse looming over our collective heads.
The public debate concerning the ethics and validity of artificial intelligence (AI) is likely to rage on for decades, but the evolving role of AI in ophthalmology suggests its vast potential to enhance clinical care and patient outcomes. AI imaging technologies may better track ophthalmic disease development, while AI chatbots could inform the next generation of medical leaders.
However, regardless of this excitement, ophthalmologists will face significant challenges. Given the rapidly expanding scope of artificial intelligence, the specialty will need to overcome ethical concerns and issues with interpretation to safeguard patient outcomes.
"We already know, just like any other technology, the first and second generations are not going to be widely used, and they're going to have to work out their kinks," said Jonathan Jonisch, MD, partner, Vitreoretinal Consultants of New York. "I think we're a way away from using machine learning to guide our treatments, but I think the value of artificial intelligence imaging is here already and will just continue taking steps to move forward."
The role of AI may take the form of an added tool in the growing armamentarium of clinicians, beginning with imaging technologies. A variety of deep and machine learning models have been deployed across the specialty, being worked and reworked to improve imaging and better detect ophthalmic disease and disease progression.
"When you have AI algorithms that have been trained to look at imaging, and perhaps using biomarkers that we may not see with the naked eye, there's a potential for AI to allow for decision support that may be even better than what can be done by humans," said Rajeev Muni, MD, MSc, a vitreoretinal surgeon in the department of ophthalmology at St. Michael's Hospital and Unity Health Toronto.1
Data from these studies have indicated the benefit of AI in tracking disease developments. An analysis of real-world data in China found an AI-based fundus screening system had the ability to detect 5 prevalent ocular conditions, with a particularly favorable efficacy for diabetic retinopathy, retinal vein occlusion (RVO), and pathological myopia.
Investigators described the potential benefits of the clinical application of AI including its ease of use and limited need for resources, particularly for fundus screening, and ability to collect epidemiological data.2
Multiple studies presented at the 83rd Scientific Sessions of the American Diabetes Association (ADA) focused on the translation of AI systems to detect diabetic eye diseases. This included the real-world deployment of an autonomous AI system at Johns Hopkins School of Medicine being linked to improved testing adherence for diabetic eye diseases across primary care clinics.
In particular, the deployment improved access and equity for those traditionally disadvantaged in medical care. Investigators suggested the use of AI to overcome historic disparities will not only benefit ophthalmology, but medicine as a whole.3
Another analysis found machine learning models allowed for the accurate and feasible identification of the progression of diabetic retinopathy. The AI model predicted approximately 91% of the ultra-widefield images with the correct labels, often indicating greater disease progression than human graders. These algorithms, as a result, may further refine patient risk and introduce personalized screening intervals.4
According to Jonisch, machine learning may allow for the analysis of nuanced features on images and better prediction of the disease progression. With this knowledge, ophthalmologists could determine the most beneficial therapy for patients, as well as better determine the risk of failure or chance of success in a relevant clinical trial.
"Artificial intelligence and machine learning do a really good job of taking many more data points than we could analyze as a human at one time," Jonisch said. "I would envision a time where machine learning can help us predict disease progression better."
Racial bias in imaging, however, could be a residual concern for ophthalmologists. Race, although sociologically a social construct, is a phenotypic feature that can affect image-based classification performance.
An AI system has the capability to be deployed at a greater scale than an individual clinician. Thus, the potential harm from these biases may be increased, particularly when introduced in demographics different from those on which the system trained.
A diagnostic study conducted at Oregon Health & Science University found AI imaging could infer self-reported race from retinal fundus images and vessel maps previously believed to not contain information relevant to race, something human graders cannot do.5
As the use of AI grows in medicine, clinicians and researchers may need to place focus on strategies to mitigate AI biases, from the data collection stage to the evaluation and post-authorization deployment stage.
Recent analyses indicate the role of artificial intelligence chatbots could soon be extended from a fun novelty to a test preparation tool in ophthalmology.
New data suggest the increasing benefit of the popular AI chatbot, ChatGPT, for preparation for ophthalmology board certification. The investigative team from the University of Toronto found that in July 2023, ChatGPT 4.0 correctly answered 84% of multiple-choice practice questions taken from OphthoQuestions, a common practice resource for board examinations.6
Based on previous findings from the investigative team, the chatbot only answered 46% of multiple-choice questions correctly in January and improved slightly to 58% in February.7
"We can see almost in real time how this AI chatbot has evolved in terms of its ophthalmic knowledge, and the gains in the performance of the chatbot we're seeing in virtually every subspecialty area of ophthalmology, from cornea to glaucoma and retina," said Marko M. Popovic, MD, MPH, a resident physician in the department of ophthalmology and vision sciences at the University of Toronto, and one of the study investigators.1
While there remains work to be done, Popovic suggested the dramatic advances in the capability of ChatGPT in preparing for board certification in a short period of time lend credence to its future potential.
Another analysis suggested large language models provide appropriate ophthalmic advice to patient questions. Investigators from Stanford University noted, in particular, that the generated AI answers did not significantly differ from an ophthalmologist regarding incorrect information or the likelihood of harm.8
However, there are notable limitations to a chatbot's benefit, stemming from its capability for "hallucinations," when a large language model responds with incorrect information or facts that aren't based on real data.9
In a study from the New England Eye Center, a large language model-based platform provided largely inaccurate information on questions regarding vitreoretinal diseases, including age-related macular degeneration (AMD), diabetic retinopathy, and retinal vein occlusion, with inconsistencies on repeat inquiries. Exactly half (50.0%) of answers were materially different, even after no functional changes were made to the platform between the first and second question submissions.10
Popovic suggested that inaccurate or incomplete responses could lead to suboptimal care and issues down the line, particularly if young physicians rely on answers from ChatGPT in the beginning stages of their careers.
"I think the bottom line here is at the end of the day, in the diagnosis and treatment of patients, the AI chatbot cannot be held accountable for what it provides," he said.1 "And that's particularly challenging in the situation where you ask ChatGPT what the symptoms of X condition are, and it provides 10 symptoms and only 9 of them are correct."
As disease patterns and treatment responses may be recognized much faster with the use of AI tools, these tools could mark a forward leap for the field. Jay Duker, MD, the president and chief executive officer of Eyepoint Pharmaceuticals, believes relying on AI for certain abilities may not be the worst thing for the specialty.
"I've been saying for a while to young residents that in 10 years, we're not going to be diagnosticians," Duker said.11 "The optical coherence tomography is going to tell you what the patient has, and everyone says 'Oh, that won't be fun anymore.' It will, because now we're going to concentrate on the patient, instead of concentrating on what they have, and we're going to connect with them at a more personal level."
Still, these specialists indicate more data is required for validation and the specialty should be cautious when implementing artificial intelligence into full-time clinical care. There is also a creeping, and understandable, fear of a future where AI replaces human intuition with ones and zeroes.
But it may be important to remember what these machines can and cannot do. In conjunction with a specialists expertise, an AI system could improve patient outcomes without sacrificing the Hippocratic oath to do no harm.
"I think when these technologies are initially rolled out, we don't need to fully trust it," Jonisch said. "It can be used in addition to our current therapy, not instead of. That is how a lot of areas of medicine incorporate newer technologies, you don't initially fully rely on them. You do it in conjunction with what you're already doing."
References
Exposed in the face of artificial intelligence – EL PAÍS USA
Whenever I go into doomsday mode (becoming a kind of technological Cassandra), I think about a mansplaining memo that reveals a truth that's unknown to me, a woman. It expresses itself with forceful words, reproaching me: "The truth is that it's not a knife that kills: a man kills," it reads. Who could defend themselves against these words? Who dares contradict the engineers of progress?
What this memo tends to forget is that, if the instrument in question doesn't have a sharp end and if it wasn't available in the stores of any neighborhood, town or district, it wouldn't be suitable to kill anyone at any time. Before the authors of the memo sneeringly tell me that we're not going to ban knives or erect gates around the countryside, I would like to bring up the following reflection (which probably won't interest you, nor change your mind).
The argument that "guns don't kill people, people kill people" is used repeatedly by the NRA (along with its extreme interpretation of the Second Amendment) to avoid the imposition of any type of gun control. In order to continue making money, this organization is capable of blaming the country's mental health problem (which, by the way, they're not willing to spend a penny on) rather than recognizing that the only function of a weapon is to injure or kill. A gun isn't useful when you're trying to cut a steak or open a box. It is only capable of causing thousands of deaths: 31,059 in the United States so far in 2023, according to the Gun Violence Archive, a website that counts American firearm deaths in real time.
In Europe, since we do put gates up around the countryside (that's how paranoid we are), the possession and use of firearms is strongly limited. This is because we're aware, precisely, that they're instruments meant for killing. What's more, in Spain, according to the national laws that regulate weapons, an individual can only possess or carry knives that are less than four inches long and have just a single edge. Automatic and double-edged knives are prohibited; no citizen can possess knives, machetes, and other bladed weapons that are duly classified as weapons by the competent authorities.
Thanks to the cultural evolution of our laws, we've been able to prevent many people from dying, simply by limiting the availability of tools that have the capacity to kill. No one thinks of limiting the number of people capable of killing as a solution to the problem, because if that were to occur, nobody would be left.
This same form of thinking should be applied to technology. There are both civilian and military uses and classifications for different types of tools, just as with pharmaceutical drugs: some can only be used in healthcare settings, under the prescription and control of a doctor, while some are over-the-counter. Meanwhile, certain aspects of medicine are subject to even stricter international prohibitions, such as the cloning of humans. When we're able to analyze risks, we're able to limit and manage them through regulations.
And then there's data, social media and internet technology, things that have been utilized by many people since birth. Generations have grown up and matured around technological instruments, not recognizing any danger in them. After all, who would have frowned upon the evolution of the personal computer or microchips, which allowed man to step on the Moon? Nobody, of course. Technology is neutral, cold, dispassionate and, therefore, beneficial. At least, that's what the tech giants would have you believe. That is, the same men who are asking tech writer Douglas Rushkoff about how they can protect themselves from their own robots.
Many people have been enriched by making certain online tools available to eight-year-old boys, which teach them that sexual violence is a normal way of interacting with girls. They have created apps that allow 11-year-old kids to take pictures every 30 seconds and share them with billions of people. They've even created free babysitting services, in the form of the iPads that parents hand their toddlers.
The tech leaders are the ones to blame when, thanks to the democratization of AI, apps are used to turn innocent photos of young girls into child pornography, via digitally-generated nudes. They have given consumers total access to technologies that should never have left highly-controlled environments, technologies that shouldn't be operated by just anyone.
I could hide a snack in a nuclear briefcase and blame my dog for the extinction of humanity after he accidentally pushes a button while trying to get a hold of it. I could blame him, if I were a psychopathic billionaire, but since I'm merely a lawyer, what I'll do instead is not leave anything lethal within my dog's reach. I'll work with his basic impulses, instead of blaming him for them. This is a reminder to the tech people who put out that memo: guns kill, and AI shouldn't be accessible to teenagers raised on YouPorn.