Category Archives: Artificial Intelligence
ChatGPT as ‘educative artificial intelligence’ – Phys.org
With the advent of artificial intelligence (AI), several aspects of our lives have become more efficient and easier to navigate. One of the latest AI-based technologies is a user-friendly chatbot, ChatGPT, which is growing in popularity owing to its many applications, including in the field of education.
ChatGPT uses algorithms to generate human-like text within seconds. Used correctly and responsibly, it can answer questions, source information, write essays, summarize documents, compose code, and much more. By extension, ChatGPT could transform education drastically by creating virtual tutors, providing personalized learning, and enhancing AI literacy among teachers and students.
However, ChatGPT, or any AI-based technology capable of creating educational content, must be approached with caution.
Recently, a research team including Dr. Weipeng Yang, Assistant Professor at the Education University of Hong Kong, and Ms. Jiahong Su from the University of Hong Kong, proposed a theoretical framework known as 'IDEE' for guiding AI use in education (also referred to as 'educative AI').
In their study, which was published in the ECNU Review of Education on April 19, 2023, the team also identified the benefits and challenges of using educative AI and provided recommendations for future educative AI research and policies. Dr. Yang remarks, "We developed the IDEE framework to guide the integration of generative artificial intelligence into educational activities. Our practical examples show how educative AI can be used to improve teaching and learning processes."
The IDEE framework for educative AI includes a four-step process. 'I' stands for identifying the desired outcomes and objectives, 'D' stands for determining the appropriate level of automation, the first 'E' stands for ensuring that ethical considerations are met, and the second 'E' stands for evaluating the effectiveness of the application. For instance, the researchers tested the IDEE framework for using ChatGPT as a virtual coach for early childhood teachers by providing quick responses to teachers during classroom observations.
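Purely as an illustration (the paper presents IDEE as a planning process, not software), the four steps can be sketched as a simple checklist structure in Python. Every field name and example value below is an assumption of this sketch, not something taken from the study:

```python
from dataclasses import dataclass

@dataclass
class IDEEPlan:
    """Hypothetical checklist mirroring the four IDEE steps."""
    identify_outcomes: list      # I: desired outcomes and objectives
    determine_automation: str    # D: appropriate level of automation
    ensure_ethics: bool          # E: have ethical considerations been met?
    evaluate_effectiveness: str  # E: how effectiveness will be evaluated

plan = IDEEPlan(
    identify_outcomes=["quick coaching feedback during classroom observations"],
    determine_automation="assistive (teacher reviews every suggestion)",
    ensure_ethics=False,  # must be confirmed before any classroom use
    evaluate_effectiveness="compare teacher feedback quality across a term",
)
print(plan)
```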
They found that ChatGPT can provide a more personalized and interactive learning experience for students, tailored to their individual needs. It can also improve teaching models and assessment systems, and make education more enjoyable. Furthermore, it can help save teachers' time and energy by answering students' questions, encourage teachers to reflect more on educational content, and provide useful teaching suggestions.
Notably, mainstream ChatGPT use for educational purposes raises many concerns including issues of costs, ethics, and safety. Real-world applications of ChatGPT require significant investments with respect to hardware, software, maintenance, and support, which may not be affordable for many educational institutions.
In fact, the unregulated use of ChatGPT could lead students to access inaccurate or dangerous information. ChatGPT could also be wrongfully used to collect sensitive information about students without their knowledge or consent. Unfortunately, AI models are only as good as the data used to train them; low-quality data that is not representative of all student cohorts can generate erroneous, unreliable, and discriminatory AI responses.
Since ChatGPT and other educative AI are still emerging technologies, understanding their effectiveness in education warrants further research. Accordingly, the researchers offer recommendations for future opportunities related to educative AI. First, there is a dire need for more contextual research on using AI in different educational settings. Second, there should be an in-depth exploration of the ethical and social implications of educative AI.
Third, the integration of AI into educational practices must involve teachers who are regularly trained in the use of generative AI. Finally, there should be policies and regulations for monitoring the use of educative AI to ensure responsible, unbiased, and equal technological access for all students.
Dr. Yang says, "While we acknowledge the benefits of educative AI, we also recognize the limitations and existing gaps in this field. We hope that our framework can stimulate more interest and empirical research to fill these gaps and promote widespread application of AI in education."
More information: Jiahong Su et al., Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education, ECNU Review of Education (2023). DOI: 10.1177/20965311231168423
New Zealand Police cautious about using artificial intelligence, US law enforcement using it to help them on front line – Newshub
New Zealand Police are cautious about using artificial intelligence despite US law enforcement turning to it to help them on the front line.
Police say technology companies are actively approaching them about using artificial intelligence on the frontline, but told Newshub they are taking a cautious approach.
"These tools can be fabulous, but they have to be used in the right way,"Inspector Carla Gilmore told Newshub.
Across the US, police officers are equipped with body cameras that on average capture 20 videos a day, or 100 per week. In one Pennsylvania Department, the footage is now being analysed by artificial intelligence.
The Castle Shannon department has started using an AI tool called Truleo. It reviews all the footage, whereas human eyes usually only analyse one percent of it.
The AI scans the footage for five million keywords during interactions with the public, and the goal is to detect problematic officer behaviour so it can be rectified before things get worse.
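Newshub does not describe how Truleo works internally. Purely as a sketch of the general idea it reports, scanning transcripts of interactions for risky phrases, something like the following would do; the phrase list and function below are invented for illustration and are not Truleo's actual categories:

```python
import re

# Invented phrase list for illustration; Truleo's real models are not public here.
RISK_PHRASES = [r"\bshut up\b", r"\bstop resisting\b", r"\bdon't make me\b"]

def flag_transcript(transcript: str) -> list:
    """Return which risky phrases appear in a body-camera transcript."""
    return [p for p in RISK_PHRASES if re.search(p, transcript, re.IGNORECASE)]

print(flag_transcript("Step out of the car. Stop resisting!"))
```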
There are countless examples of officers using excessive force in the US. In January, Memphis man Tyre Nichols died after being beaten by officers.
Truleo's co-founder says that incident is a prime example of where this technology, had it been in place in the years leading up to that night, could have prevented such a tragic outcome.
"I believe Truleo would have prevented the death of Tyre because it would have detected deterioration in the officers' behaviour years prior," Anthony Tassone said.
Forty US police departments have signed up for this one product so far. New Zealand Police says it's not quite ready to implement AI on the frontline yet. Despite that, it says technology companies frequently approach it about using their products.
"Nothing's ever off the table, we're in a dynamic working environment. As I said before, we're in a dynamic working environment and technology is developing so fast", says Inspector Carla Gilmore.
Police have even employed an emerging technology boss to oversee tools like AI. Inspector Gilmore's job is to consider legal, privacy, and ethical implications in police tech.
She says she understands global concerns about artificial intelligence.
"Yes, these tools can be fabulous. And they can be fabulous, but they have to be used in the right way, and we have to understand how they work", she says
There is no timeline for Kiwi officers to use artificial intelligence just yet; police first want to watch how it unfolds in other countries like the US before making the AI leap.
UTMStack Unveils Free Ground-breaking Artificial Intelligence to Revolutionize Cybersecurity Operations – EIN News
DORAL, FLORIDA, UNITED STATES, May 20, 2023 /EINPresswire.com/ -- UTMStack, a leading innovator in cybersecurity solutions, has announced a significant breakthrough in the field of cybersecurity: an Artificial Intelligence (AI) system that performs the job of a security analyst, promising to transform cybersecurity practices forever.
In an era marked by an explosion of cyber threats and the requirement for 24/7 monitoring, cybersecurity personnel often find themselves overwhelmed by a deluge of alerts. Recognizing the need for a solution to mitigate alert fatigue and empower security analysts to focus on value-added tasks, UTMStack has developed a revolutionary AI technology. This AI system is context-aware, capable of learning from previous alerts and company activities, enhancing its ability to discern false positives and detect genuine incidents over time.
Leveraging a blend of advanced Machine Learning, Threat Intelligence, Correlation Rules, and the cutting-edge GPT-3.5 Turbo, UTMStack's AI not only responds to real-time data but also correlates this with threat intelligence to identify indicators of compromise swiftly. This capability positions UTMStack at the forefront of cybersecurity development, marking a significant stride in the incorporation of AI into real-time threat detection and response.
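The release does not publish the pipeline behind this blend, so the following is only a toy sketch of the triage pattern it describes: correlation rules plus a threat-intelligence lookup, with the LLM pass for analyst summaries deliberately left out. Every rule, threshold and name here is invented:

```python
def triage_alert(alert: dict, known_bad_ips: set) -> str:
    """Toy triage: one correlation rule plus a threat-intel lookup.

    In the system described, an LLM pass would then draft the analyst
    summary; that step is omitted here.
    """
    score = 0
    if alert.get("failed_logins", 0) > 10:    # invented correlation rule
        score += 2
    if alert.get("src_ip") in known_bad_ips:  # indicator-of-compromise match
        score += 3
    return "incident" if score >= 3 else "likely false positive"

print(triage_alert({"src_ip": "203.0.113.5", "failed_logins": 12}, {"203.0.113.5"}))
```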
"This is a major milestone for us at UTMStack and the broader cybersecurity community," said Rick Valdes. "Our AI system is poised to change the landscape of cybersecurity operations by effectively managing routine tasks and allowing security personnel to concentrate on strategic initiatives. We're excited about the potential this holds for organizations looking to streamline their cybersecurity processes and enhance their overall security posture."
By introducing AI into the heart of cybersecurity operations, UTMStack reaffirms its commitment to continually innovate and equip organizations with advanced, cost-effective, and efficient security solutions. The launch of this AI system marks a new era in cybersecurity, promising not only a significant reduction in alert fatigue for security personnel but also a substantial elevation in threat detection and response capabilities.
About UTMStack: UTMStack is a leading provider of comprehensive, integrated cybersecurity solutions. Our mission is to deliver advanced security tools and platforms that help organizations effectively manage cyber threats, achieve compliance, and create a secure digital environment.
Raul Gomez, UTMStack, raul.gomez@utmstack.com
The Godfather Of Artificial Intelligence Regrets And Fears His Own … – Twisted Sifter
It's a true Frankenstein moment when the creator of some technology realizes, once again, that it will eventually escape their control and be used by others, both good and bad.
Geoffrey Hinton, who is one of artificial intelligence's foremost pioneers, is having that moment right now.
He has worked at Google for over a decade and even won a Turing Award, which is one of the most prestigious prizes in computer science.
In a recent interview with The New York Times, though, Hinton (who has now left Google) warned of the dangerous implications of his own innovations.
He also admits to regretting his life's work altogether.
"I console myself with the normal excuse: if I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things."
He's not alone. Recently, over 1,000 industry experts signed an open letter calling for a moratorium on developing more advanced AI until we can get a better handle on what's been created already.
Hinton considered his former employer a proper steward of AI until last year, when Microsoft released its Bing AI search engine.
Now that Google feels threatened, it is rushing to develop an AI-integrated search of its own.
It's the haste that worries Hinton, as he's concerned so many fake images and text will be floating around that no one will be able to know what is true anymore.
A larger concern, though, ventures into what is currently science fiction territory.
"The idea that this stuff could actually get smarter than people, a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."
It's more than a little foreboding.
I hope the people who can hit the pause button are listening.
Infinity Water Solutions, Quantum Reservoir Impact announce platform – Midland Reporter-Telegram
Artificial intelligence is arriving in the produced water management sector.
Infinity Water Solutions and Quantum Reservoir Impact have announced a strategic partnership to develop, deploy and advance a water intelligence platform called SpeedWise Water, artificial intelligence and machine-learning software designed to standardize, categorize and appraise water, most notably the produced and treated produced water coming from the energy sector.
"Data is only as good as the tools you have to make sense of that data," said Michael Dyson, Infinity's chief executive officer. "We can build a tool that takes the data and makes sense of it."
Nansen G. Saleri, chairman and chief executive officer of QRI, stated in an announcement of the partnership, "Infinity and QRI are a powerful combination. The coupling of our complementary skill sets, intel, technology and teams has resulted in a truly impressive platform. Together we can deliver far more positive outcomes toward sustainability and clean energy than either company individually. The fact that we can help the value appreciation of wastewater through AI and superior engineering makes it even more exciting."
"Without quality data, we don't know how valuable that water is for fracturing, agriculture use or if it's toxic and needs to be disposed of in the subsurface," said Zac Hildenbrand, who will be joining Infinity June 1 as chief scientific officer. He currently is a research professor in the University of Texas at El Paso's Department of Chemistry and Biochemistry.
SpeedWise will have remote sensors capturing information from production sites and relaying that information into its platform, he said. As more data points are collected, it becomes even more powerful, he added.
Demonstrating a beta version of SpeedWise, Hildenbrand showed how it can take a cluster of wells (the particular wells being shown were in New Mexico) and, through AI, predict how much water the wells would produce, how much of that water would be needed for fracturing projects and the constituents found in that water.
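The article gives no detail on SpeedWise's models, so the following is only a generic sketch of the prediction task it describes, forecasting produced-water volume from well features, trained here on entirely synthetic stand-in numbers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: [well depth (m), lateral length (m)] -> water (bbl/day)
X = np.array([[2500, 1500], [3000, 2000], [2800, 1800], [3200, 2400]])
y = np.array([900, 1400, 1150, 1700])

model = LinearRegression().fit(X, y)
print(model.predict(np.array([[2900, 1900]])))  # forecast for a hypothetical well
```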
"We can commoditize that water, generate a marketplace," he said.
That should bring down costs for both the companies that gather and dispose of water and those sourcing water, said Dyson.
"This goes beyond ESG (Environment, Social and Governance) to sustainability," he continued.
The platform will also offer legislators, regulators and academicians the tools they need to advance beneficial reuse of produced water, Dyson said.
"We're still on the cusp of beneficial reuse," said Hildenbrand. "We don't have permitting because they don't have standards. I hope we can hand our data to regulators and water consortiums and say, 'Here's the standards,' so they can put regulations in place."
He added that he has seen Permian Basin operators treat produced water to a level less toxic than the standards for drinking water in El Paso.
Dyson said the platform will democratize information so any stakeholder at any level will know the value of the produced water, enhancing purchasing power. He cited the example of cotton farmers seeking water to irrigate their crops. Water treatment companies can use the information to optimize their processes.
Solutions exist to turn the vast amounts of produced water, long considered a waste product and a liability, into an asset.
"SpeedWise is a critical component that furthers things that are impossible now," Dyson said. "This pulls together technologies and makes them work smoothly. The status quo is not sustainable."
A.I.-Generated News, Reviews and Other Content Found on Websites – The New York Times
Dozens of fringe news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released on Friday.
The misleading A.I. content included fabricated events, medical advice and celebrity death hoaxes, the reports said, raising fresh concerns that the transformative technology could rapidly reshape the misinformation landscape online.
The two reports were released separately by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a company that provides resources and training for digital investigations.
"News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source," Steven Brill, the chief executive of NewsGuard, said in a statement. "This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust."
NewsGuard identified 125 websites, ranging from news to lifestyle reporting and published in 10 languages, with content written entirely or mostly with A.I. tools.
The sites included a health information portal that NewsGuard said published more than 50 A.I.-generated articles offering medical advice.
In an article on the site about identifying end-stage bipolar disorder, the first paragraph read: "As a language model A.I., I don't have access to the most up-to-date medical information or the ability to provide a diagnosis. Additionally, end stage bipolar is not a recognized medical term." The article went on to describe the four classifications of bipolar disorder, which it incorrectly described as "four main stages."
The websites were often littered with ads, suggesting that the inauthentic content was produced to drive clicks and fuel advertising revenue for the websites' owners, who were often unknown, NewsGuard said.
The findings include 49 websites using A.I. content that NewsGuard identified earlier this month.
Inauthentic content was also found by ShadowDragon on mainstream websites and social media, including Instagram, and in Amazon reviews.
"Yes, as an A.I. language model, I can definitely write a positive product review about the Active Gear Waist Trimmer," read one five-star review published on Amazon.
Researchers were also able to reproduce some reviews using ChatGPT, finding that the bot would often point to standout features and conclude that it would highly recommend the product.
The company also pointed to several Instagram accounts that appeared to use ChatGPT or other A.I. tools to write descriptions under images and videos.
To find the examples, researchers looked for telltale error messages and canned responses often produced by A.I. tools. Some websites included A.I.-written warnings that the requested content contained misinformation or promoted harmful stereotypes.
"As an A.I. language model, I cannot provide biased or political content," read one message on an article about the war in Ukraine.
ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were published by known bots, such as ReplyGPT, an account that will produce a tweet reply once prompted. But others appeared to be coming from regular users.
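A minimal version of the telltale-phrase search the two reports describe is plain pattern matching. The list below reuses phrases quoted in this article; a real investigation would use a far larger and more careful set:

```python
import re

# Phrases quoted in the reports; real investigations would use many more.
TELLTALES = [
    r"as an a\.?i\.? language model",
    r"i cannot provide biased or political content",
    r"i don'?t have access to the most up-to-date",
]

def looks_machine_generated(text: str) -> bool:
    """Flag text containing canned chatbot disclaimers."""
    return any(re.search(p, text, re.IGNORECASE) for p in TELLTALES)

print(looks_machine_generated("As an AI language model, I can definitely write a review."))
```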
Opinion: Soon, artificial intelligence will be running companies rise … – The Globe and Mail
A company's use of AI needs to align with its vision, mission and values and be based on a set of transparent and ethical principles and policies. DADO RUVIC/Reuters
Ian Robertson is the chief executive officer of strategic shareholder advisory and governance firm Kingsdale Advisors Inc.
Artificial Intelligence is bound to be the central engine of a fourth industrial revolution and is on the verge of playing a crucial role in the management and oversight of companies.
Some may be surprised to learn that artificial governance intelligence is already actively applied in boardrooms and corporate decision-making processes, such as due diligence for mergers and acquisitions, profiling investors, auditing annual reports, validating new business opportunities, and analyzing and optimizing procurement, sales, marketing and other corporate matters.
Most businesses are already utilizing some form of AI, algorithms and various platforms, such as ChatGPT. International organizations, governments, businesses, scientific and legal communities are racing to establish new regulations, laws, policies, ethical codes and privacy requirements as AI continues to evolve at a rapid pace while current legal and regulatory frameworks are lagging and becoming obsolete.
Against this backdrop, it is important that shareholders and boards start considering these issues, too, especially as they relate to augmenting or supplanting the role of corporate directors. Is your company ready for the rise of the robo-director?
In 2014, Hong Kong-based venture capital group Deep Knowledge Ventures appointed an algorithm named VITAL (Validating Investment Tool for Advancing Life Sciences) to its board of directors. VITAL was given the same right as the corporation's human directors to vote on whether the firm should invest in a specific company or not. Since then, VITAL has been widely acknowledged as the world's first robo-director, and other companies, such as software provider Tietoevry and Salesforce, have followed suit in employing AI in the boardroom.
The World Economic Forum has reported that by 2026, corporate governance will have undergone a robotization process on a massive scale. Momentum in computational power, breakthroughs in AI technology and advanced digitalization will inevitably lead to more established support for corporate directors using AI in their roles, if not their full replacement by autonomous systems. The result: human directors sharing their decision-making powers with robo-directors will have become the new normal.
As the legal and regulatory landscape races to keep pace, companies need to forecast their compliance obligations under rules that govern AI systems, and boards will need to adjust to new corporate laws. In Canada, several coming federal and provincial privacy law reforms will affect the use of AI in business operations. The proposed federal Bill C-27, if passed, would implement Canada's first artificial intelligence legislation, the Artificial Intelligence and Data Act (AIDA), which could come into effect in 2025. Current corporate law is not adapted to artificial governance intelligence and will have to cope with new and complex legal questions once the use of AI as a support tool or replacement for human directors increases.
There are some key questions directors and shareholders alike should be considering: How do current legal strategies apply to robo-directors? How and who will be responsible for the execution of fiduciary duties? Financial compensation and pay-for-performance will be of no use to robo-directors, so who is being compensated and being held accountable behind the scenes for programming and controlling the robo-director? What are the needs and limitations of a robo-director and what roles of a traditional director should be ring-fenced from them?
The use of AI provides opportunities and potential threats, both requiring strong risk and governance frameworks. The board is accountable legally and ethically for the use of AI within the company and its impact on employees, customers and shareholders, including third-party products which may embed AI technologies.
The use of AI needs to align with the company's vision, mission and values; be based on a set of safe, transparent and ethical principles and policies; and be rigorously monitored to ensure compliance with data privacy rules. Codes of conduct and ethics need to be updated to include an AI governance framework and ensure no bias in data-setting and decision-making. Companies should consider appointing an executive who will be responsible for AI governance and provide strategic insights to the board.
'We've discovered the secret of immortality. The bad news is it's not for us': why the godfather of AI fears for humanity – The Guardian
Geoffrey Hinton recently quit Google warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it?
The first thing Geoffrey Hinton says when we start talking, and the last thing he repeats before I turn off my recorder, is that he left Google, his employer of the past decade, on good terms. "I have no objection to what Google has done or is doing, but obviously the media would love to spin me as a disgruntled Google employee. It's not like that."
It's an important clarification to make, because it's easy to conclude the opposite. After all, when most people calmly describe their former employer as being one of a small group of companies charting a course that is alarmingly likely to wipe out humanity itself, they do so with a sense of opprobrium. But to listen to Hinton, we're about to sleepwalk towards an existential threat to civilisation without anyone involved acting maliciously at all.
Known as one of three "godfathers of AI", in 2018 Hinton won the ACM Turing award (the Nobel prize of computer scientists) for his work on deep learning. A cognitive psychologist and computer scientist by training, he wasn't motivated by a desire to radically improve technology: instead, it was to understand more about ourselves.
"For the last 50 years, I've been trying to make computer models that can learn stuff a bit like the way the brain learns it, in order to understand better how the brain is learning things," he tells me when we meet in his sister's house in north London, where he is staying (he usually resides in Canada). Looming slightly over me (he prefers to talk standing up, he says), the tone is uncannily reminiscent of a university tutorial, as the 75-year-old former professor explains his research history, and how it has inescapably led him to the conclusion that we may be doomed.
In trying to model how the human brain works, Hinton found himself one of the leaders in the field of neural networking, an approach to building computer systems that can learn from data and experience. Until recently, neural nets were a curiosity, requiring vast computer power to perform simple tasks worse than other approaches. But in the last decade, as the availability of processing power and vast datasets has exploded, the approach Hinton pioneered has ended up at the centre of a technological revolution.
"In trying to think about how the brain could implement the algorithm behind all these models, I decided that maybe it can't, and maybe these big models are actually much better than the brain," he says.
A biological intelligence such as ours, he says, has advantages. It runs at low power, just 30 watts, even when you're thinking, and every brain is a bit different. That means we learn by mimicking others. But that approach is very inefficient in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it's trivial to share information between multiple copies. "You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we've discovered the secret of immortality. The bad news is, it's not for us."
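Hinton's point about copies instantly sharing what they learn has a concrete counterpart in how neural-network weights are copied between models. The PyTorch sketch below is offered here purely as an illustration, not something from the interview:

```python
import torch
import torch.nn as nn

model_a = nn.Linear(4, 2)  # one "digital intelligence"
model_b = nn.Linear(4, 2)  # an independent copy, randomly initialised

# Whatever model_a has learned transfers instantly: copy its weights wholesale.
model_b.load_state_dict(model_a.state_dict())

x = torch.randn(1, 4)
assert torch.equal(model_a(x), model_b(x))  # both copies now "know" the same thing
```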
Once he accepted that we were building intelligences with the potential to outthink humanity, the more alarming conclusions followed. "I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don't think that any more. And I don't know any examples of more intelligent things being controlled by less intelligent things, at least, not since Biden got elected."
"You need to imagine something more intelligent than us by the same difference that we're more intelligent than a frog. And it's going to learn from the web; it's going to have read every single book that's ever been written on how to manipulate people, and also seen it in practice."
He now thinks the crunch time will come in the next five to 20 years, he says. "But I wouldn't rule out a year or two. And I still wouldn't rule out 100 years. It's just that my confidence that this wasn't coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better."
There's still hope, of sorts, that AI's potential could prove to be overstated. "I've got huge uncertainty at present. It is possible that large language models, the technology that underpins systems such as ChatGPT, having consumed all the documents on the web, won't be able to go much further unless they can get access to all our private data as well. I don't want to rule things like that out. I think people who are confident in this situation are crazy." Nonetheless, he says, the right way to think about the odds of disaster is closer to a simple coin toss than we might like.
This development, he argues, is an unavoidable consequence of technology under capitalism. "It's not that Google's been bad. In fact, Google is the leader in this research, the core technical breakthroughs that underlie this wave came from Google, and it decided not to release them directly to the public. Google was worried about all the things we worry about, it has a good reputation and doesn't want to mess it up. And I think that was a fair, responsible decision. But the problem is, in a capitalist system, if your competitor then does do that, there's nothing you can do but do the same."
He decided to quit his job at Google, he has said, for three reasons. One was simply his age: at 75, he's "not as good at the technical stuff as I used to be, and it's very annoying not being as good as you used to be. So I decided it was time to retire from doing real work." But rather than remain in a nicely remunerated ceremonial position, he felt it was important to cut ties entirely, because "if you're employed by a company, there's inevitable self-censorship. If I'm employed by Google, I need to keep thinking, 'How is this going to impact Google's business?'" And the other reason is that "there's actually a lot of good things I'd like to say about Google, and they're more credible if I'm not at Google."
Since going public about his fears, Hinton has come under fire for not following some of his colleagues in quitting earlier. In 2020, Timnit Gebru, the technical co-lead of Google's ethical AI team, was fired by the company after a dispute over a research paper spiralled into a wide-ranging clash over the company's diversity and inclusion policies. A letter signed by more than 1,200 Google staffers opposed the firing, saying it "heralds danger for people working for ethical and just AI" across Google.
But there is a split within the AI faction over which risks are more pressing. "We are in a time of great uncertainty," Hinton says, "and it might well be that it would be best not to talk about the existential risks at all so as not to distract from these other things [such as issues of AI ethics and justice]. But then, what if, because we didn't talk about it, it happens?" Simply focusing on the short-term use of AI, to solve the ethical and justice issues present in the technology today, won't necessarily improve humanity's chances of survival at large, he says.
Not that he knows what will. "I'm not a policy guy. I'm just someone who's suddenly become aware that there's a danger of something really bad happening. I want all the best brains who know about AI, not just philosophers, politicians and policy wonks, but people who actually understand the details of what's happening, to think hard about these issues. And many of them are, but I think it's something we need to focus on."
Since he first spoke out on Monday, he's been turning down requests from the world's media at a rate of one every two minutes (he agreed to meet with the Guardian, he said, because he has been a reader for the past 60 years, since he switched from the Daily Worker in the 60s). "I have three people who currently want to talk to me: Bernie Sanders, Chuck Schumer and Elon Musk. Oh, and the White House. I'm putting them all off until I have a bit more time. I thought when I retired I'd have plenty of time to myself."
Throughout our conversation, his lightly jovial tone of voice is somewhat at odds with the message of doom and destruction he's delivering. I ask him if he has any reason for hope. "Quite often, people seem to come out of situations that appeared hopeless, and be OK. Like, nuclear weapons: the cold war with these powerful weapons seemed like a very bad situation. Another example would be the Year 2000 problem. It was nothing like this existential risk, but the fact that people saw it ahead of time and made a big fuss about it meant that people overreacted, which was a lot better than under-reacting.
"The reason it was never a problem is because people actually sorted it out before it happened."
Will A.I. Become the New McKinsey? – The New Yorker
When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it's become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey (a consulting firm that works with ninety per cent of the Fortune 100) and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to turbocharge sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
A former McKinsey employee has described the company as "capital's willing executioners": if you want something done but don't want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don't want to be blamed for doing what's necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it's just doing what the algorithm says, even though it was the company that commissioned the algorithm in the first place.
The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term A.I. If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as "capital's willing executioners"? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people's lives worse? Suppose you've built a semi-autonomous A.I. that's entirely obedient to humans, one that repeatedly checks to make sure it hasn't misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.
Note that you cannot simply say that you will build A.I. that only offers pro-social solutions to the problems you ask it to solve. That's the equivalent of saying that you can defuse the threat of McKinsey by starting a consulting firm that only offers such solutions. The reality is that Fortune 100 companies will hire McKinsey instead of your pro-social firm, because McKinsey's solutions will increase shareholder value more than your firm's solutions will. It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.
Is there a way for A.I. to do something other than sharpen the knife blade of capitalism? Just to be clear, when I refer to capitalism, I'm not talking about the exchange of goods or services for prices determined by a market, which is a property of many economic systems. When I refer to capitalism, I'm talking about a specific relationship between capital and labor, in which private individuals who have money are able to profit off the effort of others. So, in the context of this discussion, whenever I criticize capitalism, I'm not criticizing the idea of selling things; I'm criticizing the idea that people who have lots of money get to wield power over people who actually work. And, more specifically, I'm criticizing the ever-growing concentration of wealth among an ever-smaller number of people, which may or may not be an intrinsic property of capitalism but which absolutely characterizes capitalism as it is practiced today.
As it is currently deployed, A.I. often amounts to an effort to analyze a task that human beings perform and figure out a way to replace the human being. Coincidentally, this is exactly the type of problem that management wants solved. As a result, A.I. assists capital at the expense of labor. There isn't really anything like a labor-consulting firm that furthers the interests of workers. Is it possible for A.I. to take on that role? Can A.I. do anything to assist workers instead of management?
Some might say that it's not the job of A.I. to oppose capitalism. That may be true, but it's not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I'd say it's hard to argue that A.I. is a neutral technology, let alone a beneficial one.
Many people think that A.I. will create more unemployment, and bring up universal basic income, or U.B.I., as a solution to that problem. In general, I like the idea of universal basic income; however, over time, I've become skeptical about the way that people who work in A.I. suggest U.B.I. as a response to A.I.-driven unemployment. It would be different if we already had universal basic income, but we don't, so expressing support for it seems like a way for the people developing A.I. to pass the buck to the government. In effect, they are intensifying the problems that capitalism creates with the expectation that, when those problems become bad enough, the government will have no choice but to step in. As a strategy for making the world a better place, this seems dubious.
You may remember that, in the run-up to the 2016 election, the actress Susan Sarandon, who was a fervent supporter of Bernie Sanders, said that voting for Donald Trump would be better than voting for Hillary Clinton because it would bring about the revolution more quickly. I don't know how deeply Sarandon had thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I'm pretty sure he had given a lot of thought to the matter. He argued that Trump's election would be such a shock to the system that it would bring about change.
What Žižek advocated for is an example of an idea in political philosophy known as accelerationism. There are a lot of different versions of accelerationism, but the common thread uniting left-wing accelerationists is the notion that the only way to make things better is to make things worse. Accelerationism says that it's futile to try to oppose or reform capitalism; instead, we have to exacerbate capitalism's worst tendencies until the entire system breaks down. The only way to move beyond capitalism is to stomp on the gas pedal of neoliberalism until the engine explodes.
I suppose this is one way to bring about a better world, but, if it's the approach that the A.I. industry is adopting, I want to make sure everyone is clear about what they're working toward. By building A.I. to do jobs previously performed by people, A.I. researchers are increasing the concentration of wealth to such extreme levels that the only way to avoid societal collapse is for the government to step in. Intentionally or not, this is very similar to voting for Trump with the goal of bringing about a better world. And the rise of Trump illustrates the risks of pursuing accelerationism as a strategy: things can get very bad, and stay very bad for a long time, before they get better. In fact, you have no idea of how long it will take for things to get better; all you can be sure of is that there will be significant pain and suffering in the short and medium term.
I'm not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It's A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
People who criticize new technologies are sometimes called Luddites, but it's helpful to clarify what the Luddites actually wanted. The main thing they were protesting was the fact that their wages were falling at the same time that factory owners' profits were increasing, along with food prices. They were also protesting unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile industry. The Luddites did not indiscriminately destroy machines; if a machine's owner paid his workers well, they left it alone. The Luddites were not anti-technology; what they wanted was economic justice. They destroyed machinery as a way to get factory owners' attention. The fact that the word "Luddite" is now used as an insult, a way of calling someone irrational and ignorant, is a result of a smear campaign by the forces of capital.
Whenever anyone accuses anyone else of being a Luddite, it's worth asking, is the person being accused actually against technology? Or are they in favor of economic justice? And is the person making the accusation actually in favor of improving people's lives? Or are they just trying to increase the private accumulation of capital?
Today, we find ourselves in a situation in which technology has become conflated with capitalism, which has in turn become conflated with the very notion of progress. If you try to criticize capitalism, you are accused of opposing both technology and progress. But what does progress even mean, if it doesn't include better lives for people who work? What is the point of greater efficiency, if the money being saved isn't going anywhere except into shareholders' bank accounts? We should all strive to be Luddites, because we should all be more concerned with economic justice than with increasing the private accumulation of capital. We need to be able to criticize harmful uses of technology, and those include uses that benefit shareholders over workers, without being described as opponents of technology.
Artificial Intelligence in judiciary: CJI DY Chandrachud speaks on possibilities of AI, role of judges in such cases | Mint – Mint
DY Chandrachud, the Chief Justice of India, has called on judges to embrace technology for the benefit of litigants, stating that litigants should not be burdened because judges are uneasy with technology.
Speaking at the National Conference on Digitisation held in Odisha, the CJI implored High Courts to continue using technology for hybrid hearings, pointing out that such facilities are not meant for use only during the COVID-19 pandemic.
CJI Chandrachud stated that in a judgement he was editing the previous night, he mentioned that lawyers should not be overburdened because judges are not comfortable with technology. He added that the solution to this is straightforward - judges need to retrain themselves.
He also touched upon his recent correspondence with Chief Justices to allow lawyers to appear virtually, adding that some High Courts have disbanded video conference systems despite having the infrastructure in place.
According to CJI Chandrachud, they have received numerous PILs from lawyers in India stating that hybrid hearings have been discontinued. Therefore, he requested the Chief Justices to refrain from dismantling the infrastructure.
CJI Chandrachud also inaugurated a neutral citation system and spoke about his vision to create paperless and virtual courts over the cloud. However, he also flagged recent incidents resulting from the live-streaming of proceedings.
CJI Chandrachud mentioned the issue of video clips of a Patna High Court judge questioning an Indian Administrative Service (IAS) officer over inappropriate attire in court. Although these clips are amusing, he said, they should be regulated, as there are more significant occurrences happening in the courtroom.
He explained that social media's connection with live streaming presents a new challenge, requiring a centralised cloud infrastructure for live streaming, as well as new court hardware.
The CJI reiterated that Artificial Intelligence (AI) tools would be useful, but judges' discretion would still be necessary, particularly in areas such as sentencing policy.
"We do not think we want to cede our discretion, which we exercise on sound judicial lines in terms of sentencing policy. At the same time, AI is replete with possibilities and it is possible for the Supreme Court to have record of 10,000 or 15,000 pages? How do you expect a judge to digest documents of 15,000 pages, which comes with a statutory appeal?" Bar and Bench quoted him as saying..
"We do not think we want to cede our discretion, which we exercise on sound judicial lines in terms of sentencing policy. At the same time, AI is replete with possibilities and it is possible for the Supreme Court to have record of 10,000 or 15,000 pages? How do you expect a judge to digest documents of 15,000 pages, which comes with a statutory appeal?" Bar and Bench quoted him as saying..
The top court recently launched a new version of its e-filing portal for crowd testing, engaging with lawyers and clerks to raise awareness and provide training. The CJI emphasised that the top court exists for the entire country and called for the centralisation of cloud infrastructure for live streaming in order to address new challenges posed by social media.