Category Archives: Artificial Intelligence
Pentagon faces future with lethal AI weapons on the battlefield – NBC Chicago
Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces missions and helped Ukraine in its war against Russia. It tracks soldiers' fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.
Now, the Pentagon is intent on fielding multiple thousands of relatively inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative, dubbed Replicator, seeks "to galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap, and many," Deputy Secretary of Defense Kathleen Hicks said in August.
While its funding is uncertain and details vague, Replicator is expected to accelerate hard decisions on what AI tech is mature and trustworthy enough to deploy - including on weaponized systems.
There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will have fully autonomous lethal weapons within the next few years. And though officials insist humans will always be in control, experts say advances in data-processing speed and machine-to-machine communications will inevitably relegate people to supervisory roles.
That's especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, and neither China, Russia, Iran, India nor Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.
It's unclear if the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as required by a 2012 directive. A Pentagon spokeswoman would not say.
Replicator highlights immense technological and personnel challenges for Pentagon procurement and development as the AI revolution promises to transform how wars are fought.
"The Department of Defense is struggling to adopt the AI developments from the last machine-learning breakthrough, said Gregory Allen, a former top Pentagon AI official now at the Center for Strategic and International Studies think tank.
The Pentagon's portfolio boasts more than 800 AI-related unclassified projects, much of it still in testing. Typically, machine-learning and neural networks are helping humans gain insights and create efficiencies.
"The AI that we've got in the Department of Defense right now is heavily leveraged and augments people," said Missy Cummings, director of George Mason University's robotics center and a former Navy fighter pilot. "There's no AI running around on its own. People are using it to try to understand the fog of war better."
One domain where AI-assisted tools are tracking potential threats is space, the latest frontier in military competition.
China envisions using AI, including on satellites, to "make decisions on who is and isn't an adversary," U.S. Space Force chief technology and innovation officer Lisa Costa told an online conference this month.
The U.S. aims to keep pace.
An operational prototype called Machina used by Space Force keeps tabs autonomously on more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.
Machina's algorithms marshal telescope sensors. Computer vision and large language models tell them what objects to track. And AI choreographs, drawing instantly on astrodynamics and physics datasets, Col. Wallace "Rhet" Turnbull of Space Systems Command told a conference in August.
Another AI project at Space Force analyzes radar data to detect imminent adversary missile launches, he said.
Elsewhere, AI's predictive powers help the Air Force keep its fleet aloft, anticipating the maintenance needs of more than 2,600 aircraft including B-1 bombers and Blackhawk helicopters.
"Machine-learning models identify possible failures dozens of hours before they happen," said Tom Siebel, CEO of Silicon Valley-based C3 AI, which has the contract. C3's tech also models the trajectories of missiles for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
Among health-related efforts is a pilot project tracking the fitness of the Army's entire Third Infantry Division -- more than 13,000 soldiers. "Predictive modeling and AI help reduce injuries and increase performance," said Maj. Matt Visser.
In Ukraine, AI provided by the Pentagon and its NATO allies helps thwart Russian aggression.
NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon's pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.
Maven began in 2017 as an effort to process video from drones in the Middle East spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda and now aggregates and analyzes a wide array of sensor- and human-derived data.
AI has also helped the U.S.-created Security Assistance Group-Ukraine help organize logistics for military assistance from a coalition of 40 countries, Pentagon officials say.
To survive on the battlefield these days, military units must be small, mostly invisible and move quickly because exponentially growing networks of sensors let anyone see anywhere on the globe at any moment, then-Joint Chiefs chairman Gen. Mark Milley observed in a June speech. "And what you can see, you can shoot."
To more quickly connect combatants, the Pentagon has prioritized the development of intertwined battle networks called Joint All-Domain Command and Control to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and fraught with bureaucracy.
Christian Brose, a former Senate Armed Services Committee staff director now at the defense tech firm Anduril, is among military reform advocates who nevertheless believe they "may be winning here to a certain extent."
"The argument may be less about whether this is the right thing to do, and increasingly more about how do we actually do it -- and on the rapid timelines required," he said. Brose's 2020 book, The Kill Chain, argues for urgent retooling to match China in the race to develop smarter and cheaper networked weapons systems.
To that end, the U.S. military is hard at work on "human-machine teaming." Dozens of uncrewed air and sea vehicles currently keep tabs on Iranian activity. U.S. Marines and Special Forces also use Anduril's autonomous Ghost mini-copter, sensor towers and counter-drone tech to protect American forces.
Industry advances in computer vision have been essential. Shield AI lets drones operate without GPS, communications or even remote pilots. It's the key to its Nova, a quadcopter, which U.S. special operations units have used in conflict areas to scout buildings.
On the horizon: The Air Force's "loyal wingman" program intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.
The loyal wingman timeline doesn't quite mesh with Replicator's, which many consider overly ambitious. The Pentagon's vagueness on Replicator, meantime, may partly be intended to keep rivals guessing, though planners may also still be feeling their way on feature and mission goals, said Paul Scharre, a military AI expert and author of Four Battlegrounds.
Anduril and Shield AI, each backed by hundreds of millions in venture capital funding, are among companies vying for contracts.
Nathan Michael, chief technology officer at Shield AI, estimates the company will have an autonomous swarm of at least three uncrewed aircraft ready within a year using its V-BAT aerial drone. The U.S. military currently uses the V-BAT -- without an AI mind -- on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units, the company says.
It will take some time before larger swarms can be reliably fielded, Michael said. "Everything is crawl, walk, run -- unless you're setting yourself up for failure."
The only weapons systems that Shanahan, the inaugural Pentagon AI chief, currently trusts to operate autonomously are wholly defensive, like Phalanx anti-missile systems on ships. He worries less about autonomous weapons making decisions on their own than about systems that don't work as advertised or kill noncombatants or friendly forces.
The department's current chief digital and AI officer Craig Martell is determined not to let that happen.
"Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it's deployable -- and will always take the responsibility," said Martell, who previously headed machine-learning at LinkedIn and Lyft. "That will never not be the case."
As to when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. For example, Martell trusts his car's adaptive cruise control but not the tech that's supposed to keep it from changing lanes. "As the responsible agent, I would not deploy that except in very constrained situations," he said. "Now extrapolate that to the military."
Martell's office is evaluating potential generative AI use cases -- it has a special task force for that -- but focuses more on testing and evaluating AI in development.
One urgent challenge, says Jane Pinelis, chief AI engineer at Johns Hopkins University's Applied Physics Lab and former chief of AI assurance in Martell's office, is recruiting and retaining the talent needed to test AI tech. The Pentagon can't compete on salaries. Computer science PhDs with AI-related skills can earn more than the military's top-ranking generals and admirals.
Testing and evaluation standards are also immature, a recent National Academy of Sciences report on Air Force AI highlighted.
Might that mean the U.S. one day fielding, under duress, autonomous weapons that don't fully pass muster?
"We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible," said Pinelis. "I think if we're less than ready and it's time to take action, somebody is going to be forced to make a decision."
Breda O’Brien: Artificial intelligence is only as awful as the humans … – The Irish Times
Sam Altman, in the news for his dizzyingly fast transition from being fired to being reinstated as chief executive of OpenAI, likes to tell people that he shares a birthday with the father of the atomic bomb, J Robert Oppenheimer.
Altman, one of the founders of OpenAI which developed ChatGPT, believes that work on artificial intelligence resembles the famous Manhattan Project, which gathered the best minds to beat Germany in the race to produce nuclear weapons.
It would seem to be an unfortunate analogy, but Altman believes that by foreseeing the potential for disaster we can work to avoid it and benefit human beings instead.
The recent debacle demonstrates how unlikely it is that this optimistic vision will prevail. OpenAI started as a non-profit in 2015 but soon ran into funding difficulties.
A for-profit subsidiary was initiated in 2019 under the scrutiny of the non-profit board, which was to ensure that safe artificial intelligence is developed and benefits all of humanity. This was to take precedence over any obligation to create a profit.
Loosely speaking, the board had more doomsayers, those who worry that AI has the potential to be dangerous to the extent of wiping out all of humanity, while Altman is more of an accelerationist, who believes that the potential benefits far outweigh the risks.
What happened when the board no longer had faith in Altman because he was "not consistently candid in his communications with the board"? Altman jumped ship to Microsoft, followed by Greg Brockman, another founder, and the majority of OpenAI employees threatened to do likewise. Yes, Microsoft, which was last year criticised by a group of German data-protection regulators over its compliance with GDPR.
The pressure to reinstate Altman may not have been motivated purely by uncritical adoration, as staff and investors knew that firing him meant that a potential $86 billion deal to sell employee shares would probably not happen.
The board's first real attempt to rein Altman in failed miserably, in other words. The new board includes Larry Summers, former US treasury secretary and superstar economist, who has been the subject of a number of recent controversies, including over his connection to Jeffrey Epstein. When he was president of Harvard, Summers was forced to apologise for substantially understating the impact of socialisation and discrimination on the numbers of women employed in higher education in science and maths. He had suggested that it was mostly down to genetic factors rather than discrimination.
At a recent seminar in Bonnevaux, France, at the headquarters of the World Community of Christian Meditators, former Archbishop of Canterbury Rowan Williams addressed the question of how worried we should be about artificial intelligence. He made a valid point, echoed by people such as Jaron Lanier, computer scientist and virtual reality pioneer, that artificial intelligence is a misnomer for what we now have. He compared the kind of holistic learning that his two-year-old grandson demonstrates with the high-order data processing of large language models. His grandson is learning to navigate a complex landscape without bumping too much into things or people, to code and decode messages including metaphors and singing, all in a holistic way where it is difficult to disentangle the strands of what is going on. Unlike AI, his grandson is also capable of wonder.
While Archbishop Williams's distinction between human learning and machine learning is sound, the problem may not be the ways in which AI does not resemble us, or learn like us. We may need to fear AI most when it mirrors the worst aspects of our humanity without the leavening influence of our higher qualities.
Take "hallucinations", the polite term for when ChatGPT lies to you, such as falsely accusing a legal scholar of sexual harassment, as outlined in a Washington Post article this year. (To add insult to injury, it cited a non-existent Washington Post article as evidence of the non-existent harassment.) As yet, no one has succeeded in programming a large language model so that it does not hallucinate, partly for technical reasons and partly because these chatbots are scraping enormous amounts of information from the internet and reassembling it in plausible ways. As the early computer scientists used to say: garbage in, garbage out.
Human beings used the internet from the beginning to lie and spread disinformation. Human beings created the large language models that mimic humanity so effectively. We allow them to continue to develop even though OpenAI has not shared, for commercial reasons, how it designed and trained its model.
Talking about regulation, as Altman does with plausible earnestness, is meaningless if we do not understand what we are regulating. The real fears of potential mass destruction are brushed aside.
As cartoonist Walt Kelly had his character, Pogo, say in an Earth Day poster, "We have met the enemy and he is us." Our inability to cry halt or even pause shows our worst qualities: greed, naive belief in inevitable progress, and the inability to think with future generations in mind. We should perhaps focus less on the terrors of AI, and more on the astonishing hubris of those who have created and unleashed them.
Live chat: A new writing course for the age of artificial intelligence – Yale News
How is academia dealing with the influence of AI on student writing? Just ask ChatGPT, and it'll deliver a list of 10 ways in which the rapidly expanding technology is creating both opportunities and challenges for faculty everywhere.
On the one hand, for example, while there are ethical concerns about AI compromising students' academic integrity, there is also growing awareness of the ways in which AI tools might actually support students in their research and writing.
Students in "Writing Essays with AI," a new English seminar taught by Yale's Ben Glaser, are exploring the many ways in which the expanding number of AI tools are influencing written expression, and how they might help or harm their own development as writers.
"We talk about how large language models are already and will continue to be quite transformative," Glaser said, "not just of college writing but of communication in general."
An associate professor of English in Yale's Faculty of Arts and Sciences, Glaser sat down with Yale News to talk about the need for AI literacy, ChatGPT's love of lists, and how the generative chatbot helped him write the course syllabus.
Ben Glaser: It's more the former. None of the final written work for the class is written with ChatGPT or any other large language model or chatbot, although we talk about using AI research tools like Elicit and other things in the research process. Some of the small assignments directly call for students to engage with ChatGPT, get outputs, and then reflect on it. And in that process, they learn how to correctly cite ChatGPT.
The Poorvu Center for Teaching and Learning has a pretty useful page with AI guidelines. As part of this class, we read that website and talked about whether those guidelines seem to match students own experience of usage and what their friends are doing.
Glaser: I don't get the sense that they are confused about it in my class because we talk about it all the time. These are students who simultaneously want to understand the technology better, maybe go into that field, and they also want to learn how to write. They don't think they're going to learn how to write by using those AI tools better. But they want to think about it.
That's a very optimistic take, but I think that Yale makes that possible through the resources it has for writing help, and students are often directed to those resources. If you're in a class where the writing has many stages -- drafting, revision -- it's hard to imagine where ChatGPT is going to give you anything good, partly because you're going to have to revise it so much.
That said, it's a totally different world if you're in high school or a large university without those resources. And then of course there are situations that have always led to plagiarism, where you're strung out at the last minute and you copy something from Google.
Glaser: First of all, it's a really interesting thing to study. That's not what you're asking -- you're asking what it can do or where it belongs in a writing process. But when you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. There's no understanding behind the model. It's based on statistical probabilities -- it's guessing which word comes next. It sometimes does so in a way that speeds things along.
If you say, give me some points and counterpoints in, say, AI use in second-language learning, it might spit out 10 good things and 10 bad things. It loves to give lists. And there's a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.
Glaser: I don't love the word brainstorming, but I think there is a moment where you have a blank page, and you think you have a topic, and the process of refining that involves research. ChatGPT's not the most wonderful research tool, but it sure is an easy one.
I asked it to write the syllabus for this course initially. What it did was it helped me locate some researchers that I didn't know, it gave me some ideas for units. And then I had to write the whole thing over again, of course. But that was somewhat helpful.
Glaser: It can be. I think that's a limited and effective use of it in many contexts.
One of my favorite class days was when we went to the library and had a library session. It's an insanely amazing resource at Yale. Students have personal librarians, if they want them. Also, Yale pays for these massive databases that are curating stuff for the students. The students quickly saw that these resources are probably going to make things go smoother long-term if they know how to use them.
So it's not a simple "AI tool bad, Yale resource good." You might start with the quickly accessible AI tool, and then go to a librarian, and say, like, here's a different version of this. And then you're inside the research process.
Glaser: One thing that some writers have done is, if you interact with it long enough, and give it new prompts and develop its outputs, you can get something pretty cool. At that point you've done just as much work, and you've done a different kind of creative or intellectual project. And I'm all for that. If everything's cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you're just doing something wild and interesting.
Glaser: I'm glad that I could offer a class that students who are coming from computer science and STEM disciplines, but also want to learn how to write, could be excited about. AI-generated language, that's the new medium of language. The Web is full of it. Part of making students critical consumers and readers is learning to think about AI language as not totally separate from human language, but as this medium, this soup if you want, that we're floating around in.
Bill Gates says using AI could lead to 3-day work week – Fox Business
Bill Gates is weighing in on the potential of artificial intelligence (AI) and how it could allow humans to work just three days a week.
"If you zoom out, the purpose of life is not just to do jobs," the Microsoft co-founder said Monday on an episode of Trevor Noah's "What Now? With Trevor Noah" podcast. "So if you eventually get a society where you only have to work three days a week or something, thats probably OK if the machines can make all the food and the stuff and we dont have to work as hard."
"The demand for labor to do good things is still there if you match the skills to it, and then if you ever get beyond that, then, OK, you have a lot of leisure time and will have to figure out what to do with it," Gates said.
Gates also acknowledged that job displacement happens with new technologies.
"If they come slow enough, theyre generational," he said. The billionaire gave an example of fewer farmers in this generation compared to prior ones.
"So if it proceeds at a reasonable pace, and the government helps those people who have to learn new things, then its all good," he told Noah. "Its the aging society, its OK because the software makes things more productive."
Gates argued earlier this year that AI could provide major benefits to productivity, health care and education. He has also more recently talked about the potential of AI-powered personal assistants called "agents" that eventually "will be able to help with virtually any activity and any area of life" online.
In March, while touting AI, Gates also called for establishing "rules of the road" so "any downsides of artificial intelligence are far outweighed by its benefits."
The potential future impact of AI on jobs and workflows has come up more as companies increasingly move to embrace the technology.
In April, the World Economic Forum found that nearly three-quarters of the companies it surveyed around the world indicated they would likely adopt AI. Half of the surveyed companies said they expected AI to create job growth, while 25% thought it would lead to job cuts.
LinkedIn recently reported that it experienced a surge in the number of job advertisements referencing AI compared with November of last year.
Artificial General Intelligence will make you feel special, very special – Deccan Herald
Roger Marshall is a computer scientist, a newly minted Luddite and a cynic
Thrilled that Artificial General Intelligence (AGI) will make your life even more efficient? Wait till you find out what else is in the works. If you feel you count for nothing in this world, you are mistaken. AGI will make you feel special, very special. AGI takes education seriously -- because it is always learning new things about you.
If you have transited through any of the major airports in the US, Western Europe or Japan, no doubt you will have noticed that, except for the travellers and the security people, hardly anyone else is around. The shops, restaurants and airline check-in counters, the few that are still in existence, don't have any staff to speak of. They have all gone self-service.
If you want to get a bite to eat, there is no paper menu for you to look at nor are there any wait staff to take your order. You need a smartphone to satisfy your thirst or your hunger. You point your phone at the QR code prominently displayed at each table, place your order electronically, and sullenly glare at your phone while waiting for it to be delivered by the sole hapless soul from the kitchen. Good luck finding someone to complain to if an incorrect or badly prepared item was delivered. Robotic service, but no robots. Not yet. Even graveyards are livelier than modern airports or restaurants.
In a few short months, you will be surprised to learn that the price that you will be charged for anything you buy -- food, clothing, travel tickets, taxi ride, etc., -- is dependent on who you are and how much you can afford. This is profiling at its finest, no personal detail too small to ignore. Items sold in stores will no longer have fixed prices attached to them. They can be instantaneously changed by AGI programmes to suit the customer. Or the occasion. Remember congestion pricing?
Advanced systems such as ChatGPT-4 are based on large language models. Other large AI models in the fields of physics, chemistry, economics, medicine, and climate (several such models already exist in rudimentary form) will be deployed in the near-future. Predictions are that when these models interact in a neural network learning environment, a vast treasure trove of new knowledge will be created that can ultimately prove beneficial to humanity. All in the belief that the private sector is much better than the public sector in all manner of things. However, no one engaged in AI research has come up with a satisfactory explanation for how exactly a system such as AGI would work and, more importantly, why the output of the system should be trusted.
In his New York Times essay of June 30, 2023, titled "The True Threat of Artificial Intelligence," author Evgeny Morozov takes issue with Silicon Valley IT stalwarts and their fawning admirers in scientific and academic circles who are cheerleading ongoing efforts to promote AGI. Morozov argues that for-profit corporations' solutionism approach to what ails public spaces and organisations is always market-based and that the privatisation of public enterprises and laissez-faire economic policies of the 1980s, which are still in vogue, serve to explain why AGI is a harbinger of bad things waiting to happen.
If you live in the US, you simply cannot avoid interacting with AI systems (intelligent agents) that are already in place. These software agents evaluate your applications for jobs, college admissions, loans, etc., and pronounce judgement. If any problems are encountered, you are prevented from contacting a human to resolve the situation.
Make no mistake, the handful of companies calling for a new Manhattan Project to develop AGI will end up controlling the world, and nation-states will become a thing of the past. These for-profit companies would love to get rid of any restrictions placed on their operations, privatise all services provided by any nation-state (including the US) to its citizens, and gain control of State-owned entities. In short, no more public schools, community hospitals, fire and police departments, public utilities (water, sewer, electricity, gas, transportation), etc. Why, even the military, what with mercenary soldiers (human and/or robot) lurking in the background.
The after-effects of the original Manhattan Project, which produced the A-bomb, are still with us and are not distant memories. Just ask the survivors or their descendants in Hiroshima, Nagasaki, Three Mile Island, Chernobyl, and Fukushima.
The IT crowd is not schooled in the arts, humanities or social sciences. But most of all, history.
Mind the Machines: The Urgent Need to Regulate Artificial … – Brown Political Review
Over the past decade, society has witnessed the rising presence of artificial intelligence (AI) technology. At this point, AI has garnered an expected compound annual growth rate of 37.3 percent from 2023 to 2030. While it has helped to enhance productivity and decision-making processes, the possibility of its malicious use is rising, prompting much unease. These concerns are particularly relevant to Americans as, in the next five years, it is expected that the United States' AI market will grow to a value of $223.40 billion. As the United States is undoubtedly in the competition to become the world's AI hub, we must ask: How successful are we in regulating its use?
With America's two major political rivals, China and Russia, viewing AI as the new global arms race, it is imperative now more than ever that the US consider this question. Up until March 3, 2023, no bill whose primary aim was to protect against or thwart the development of AI's potentially dangerous aspects had been proposed. This lack of regulation can be primarily attributed to the deep lack of familiarity with AI in American governing bodies. The only member of Congress who holds any academic background in the field is Rep. Jay Obernolte (R-CA), who received a master's degree in Artificial Intelligence from the University of California, Los Angeles. In fact, Congressman Obernolte notes that it is "surprising how much time [he] spend[s] explaining to [his] colleagues that the chief dangers of AI will not come from evil robots with lasers coming out of their eyes."
The AI sector is subject to constant change, growth, and ultimately, evolution. It goes without saying that the threats posed by AI also behave similarly. How can the United States be expected to be up to date on and regulate this technology if the people making and improving laws and regulations are not equipped with the knowledge necessary to understand the capacity of AI?
While it is commendable that some members of Congress, like Rep. Don Beyer (D-VA), who is currently pursuing a master's degree in AI from George Mason University, aim to develop their understanding of AI and apply it to their legislative work, the fact remains that it is unrealistic to expect all of Congress to do the same.
"The future of regulating AI is rooted in technical understanding in order to ensure that the solutions are sustainable and feasible."
Currently, the United States displays a certain degree of divisiveness between its technological and legislative institutions. This is especially evident in the purpose of two separate organizations: the Cybersecurity and Infrastructure Security Agency (CISA) and Congress. While CISA is grounded in employing technological solutions to combat cybersecurity risks, lawmakers primarily rely on their potential strengthening of existing policies to overcome technological threats. The two are distinguished in the sense that CISA's technocentric stance suggests that it believes in fighting technological threats with technology, while the law-making process employs a more anthropocentric viewpoint, holding human beings accountable for their use of technology.
The key difference? CISA is equipped with individuals who demonstrate a superior understanding of the technological field -- a crucial feature lacking in Congress. While lawmakers have taken steps to address their lack of AI knowledge by establishing an apolitical and nonpartisan Technology Policy Committee in the Senate, which regularly educates and informs Congress, the Administration, and the courts about significant developments in the computing field and how those developments affect public policy in the United States, it is not enough. The Technology Policy Committee is a step in the right direction; however, there is only so much that policymakers can do with information regarding technological developments they do not fully understand. Questions like "What are the implications and the scope of AI technological developments on current policies?" can only be answered when there is an understanding of both technology and policymaking.
To combat the threats of AI, it is critical that solutions are rooted in an understanding of technology's inner mechanisms. This can be accomplished through the implementation of a government body that serves as an intersection of technology and policymaking. I would like to refer to this as the Technocentric Coalition of Lawmakers (TCL). This would be a body of individuals that have an academic background in a technology-related field (Computer Science, Data Science, Artificial Intelligence, Computer Engineering, and so on) and an interest in policymaking thought processes. Leading by example is the European Union, which has pioneered efforts to protect its general public from the dangers of AI through the introduction of the Artificial Intelligence Liability Directive. This law seeks to establish rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI. Such examples demonstrate the potential for the regulation of AI globally. They elucidate that the future of regulating AI is rooted in technical understanding in order to ensure that the solutions are sustainable and feasible.
The solution I propose does not involve a singular law, but rather the establishment of a committee specifically dedicated to regulating AI as the field is one that is constantly evolving and growing. Therefore, AI and the threats associated with it demand a committee that is able to keep up with the technologys growth and evolution, which a written law on its own would not accomplish without constant amendments.
How would the logistics of the TCL work? Considering that democracy is rooted in America's foundational values, it is only fitting that the formation of the TCL would involve an election as opposed to an appointment. Eligible candidates must have acquired at least a college-level degree in a technological field, in addition to the qualifications required to be elected as a representative to the US Senate. It is important to note that in this manner, the TCL functions as a permanent subcommittee of the Senate, regardless of the party leading the country. In addition, it will have a representative from every state, ensuring that every state's AI-related needs and concerns have a platform where they can be heard and addressed. As such, the TCL seeks to regulate AI in a manner that promotes bipartisan and interstate collaboration, cooperation and compromise, fostering a responsible and secure use of AI nationwide.
The current institutions in the United States relevant to the technology space serve as an advisory board. The TCL does not seek to replace these bodies. Instead, it aims to distinguish itself from the existing bodies by remaining active at the heart of policymaking. This body would have the power and capacity to formulate informed laws that prevent AI from being used with malicious intent. It would serve as a symbol of growth and the evolution of the nation's legislative system into one that is equipped to keep pace with today's rapidly advancing tech space. With great power comes great responsibility, and it is time we harness the potential of Artificial Intelligence in a responsible manner.
Media ideology shapes public perception of artificial intelligence – Open Access Government
Virginia Tech Pamplin College of Business researchers Angela Yi, Shreyans Goenka, and Mario Pandelaere explored the impact of political ideology on AI coverage. Their study, Partisan Media Sentiment Toward Artificial Intelligence, reveals that liberal-leaning media and even some aspects of social media express more negativity toward AI than conservative counterparts.
Liberal media's scepticism toward AI is linked to heightened concerns about the technology exacerbating societal biases. The study identifies a focus on racial, gender, and income disparities by liberal outlets, contributing to their more negative portrayal of AI.
The researchers note a shift in media sentiment following George Floyd's death, with increased negativity towards AI. The incident triggered a national conversation about social biases, intensifying media concerns about AI's role in perpetuating societal inequalities.
Findings suggest that partisan media sentiment influences public opinion on AI, potentially impacting policymaking. Angela Yi highlights the power of media sentiment in shaping public perception. She calls for further exploration into how social media conversations about AI may evolve based on these partisan differences.
Based on a dataset of over 7,500 articles, the study refrains from prescribing an optimal stance but emphasises recognising and understanding these ideological differences in media discourse.
In conclusion, Virginia Tech's research illuminates the significant role political ideology plays in shaping public narratives around artificial intelligence. As media sentiment influences public opinion, the findings underscore the potential impact on policymaking. Acknowledging these ideological differences is crucial for fostering a nuanced understanding of AI and its societal implications.
The EU Artificial Intelligence Act, and what it may mean for … – Lexology
The European Parliament and the Council of the European Union are currently discussing the European Commission's proposed Artificial Intelligence Act. The objective of the act is to promote the uptake of artificial intelligence, while at the same time also addressing the risks associated with certain uses of the technology.
Whilst the proposed new law aims to impose the new rules without prejudice to the application of Union competition law, it is likely to have a significant impact on the procedural and investigative powers of competition agencies.
This briefing provides a brief introduction to the proposed law and considers the implications of its implementation for competition law in Ireland.
Introduction to the AI Act
As part of the European Union's Digital Strategy aimed at enabling Europe's digital transformation, and further to President von der Leyen's political commitment to put forward legislation for a coordinated European approach on the human and ethical implications of artificial intelligence (AI), the Artificial Intelligence Act (the AI Act) will be the world's first comprehensive AI law. If adopted, it will have direct effect in Ireland and the regulation will automatically become part of Irish law.
The AI Act aims to ensure the proper functioning of the single market by creating the conditions for the development and use of trustworthy AI systems in the Union. It proposes to achieve this by:
The European Commission draft proposes providing supervisory agencies with procedural powers that will have implications for national competition authorities across Europe, by indirectly extending the investigative powers of those authorities.
In order to enable designated national supervisory authorities to fulfil their obligations, the European Commission draft of the AI Act provides that they will have, inter alia:
full access to the training, validation and testing datasets used by the provider including through application programming interfaces (API) or other appropriate technical means and tools enabling remote access.
In addition, the European Commission proposes, in its draft of the AI Act, to provide surveillance authorities with the power, upon reasoned request, to obtain access to the source code of the AI system (where it is deemed necessary). The European Parliament has sought to limit these provisions, by replacing the power to request access to source code with a power to request access to training and trained models of the AI system, including its relevant source parameters, and requiring that all other reasonable ways to verify conformity be exhausted before such access is requested.
Article 63 of the European Commission draft AI Act requires that the national supervisory authorities without delay report to both the Commission and national competition authorities, any information identified in the course of market surveillance activities that may be of interest for the application of Union law on competition rules. Access to this type of data will be significant for competition authorities.
Implications for the enforcement of competition law
Competition agencies are usually only able to compel information from companies, by way of a formal information request, when they suspect that an infringement of competition law has occurred. A similar approach is adopted in the Digital Markets Act (DMA), which gives the Commission power to carry out inspections and request information where it suspects an infringement of the DMA. However, per the European Commission draft of the AI Act, the bar for obligating the sharing of information gathered by the national supervisory authorities with the competition agencies, appears to be much lower, and arises regardless of whether any infringements are suspected or alleged.
The draft AI Act provides the national supervisory authorities access to data for the process of carrying out the required conformity checks. It requires that they supply this data to the competition agencies when it may be of interest to them. There is no requirement that there be any suspicions of anti-competitive conduct. In a world where companies are increasingly utilising algorithms to make competitively strategic decisions, it is not difficult to imagine that there may be a lot of information available to the national supervisory authorities that would be of interest to their competition law enforcement colleagues.
Implications for business
As currently drafted, the obligations of the AI Act will apply on a sliding scale based on the potential risks posed by the intended use of the particular AI system:
Companies with high-risk AI systems that require conformity assessments must be prepared to make the required data available, knowing that it may be provided to their national competition agency and/or the European Commission. Whilst it is not yet clear when the AI Act will come into force, before it does, given the structural interlinking of competition concerns in the draft legislation, such companies should take steps to understand if, and to what degree, that could expose them to the risk of an investigation by the relevant competition authority, and where necessary, take pre-emptive steps to minimise that risk.
Also contributed to by Beverley Williamson
Artificial Intelligence Could Impact Black voting during 2024 Elections – MSR News Online
For much of the last century, segregationists and their anti-Black racist allies who were intent on ensuring that African Americans couldn't exercise the right to vote, erected an assortment of barriers to that end.
Segregationists used the courts, local and state laws, literacy tests, poll taxes, fraud, brute force, violence and intimidation by the Ku Klux Klan to impede and prevent Black people from exercising their Constitutional right.
In the 21st century, voter suppression has gone high-tech, with the same characters still plotting to control who votes, when and how. They are employing an assortment of methods, including artificial intelligence (AI). Concerns about misuse of AI in the electoral ecosystem are what brought Melanie Campbell and Damon T. Hewitt to testify before the U.S. Congress.
Campbell, president and CEO of the National Coalition on Black Civic Participation (NCBCP) and convener of the Black Women's Roundtable (BWR), spoke of the urgency around creating safeguards and federal legislation to protect against the technology's misuse as it relates to elections, democracy, and voter education, while fighting back against the increasing threats surrounding targeted misinformation and disinformation.
"AI has the potential to be a significant threat because of how rapidly it's moving," Campbell said. "There was Russian targeting of Black men with misinformation in 2020 to encourage them not to vote. It started in 2016."
Both civil rights leaders warned that misinformation driven by artificial intelligence may worsen considerably for African American voters leading up to the 2024 presidential election.
"What we have seen through our work demonstrates how racial justice, voting rights, and technology are inextricably linked," said Hewitt, president and executive director of the Lawyers' Committee for Civil Rights Under Law, during his testimony.
"Voters of color already face disproportionate barriers to the ballot box that make it more difficult and more costly for them to vote, without factoring in the large and growing cost of targeted mis- and disinformation on our communities."
Campbell and Hewitt said that during recent election cycles, African Americans have been specifically targeted by disinformation campaigns. The pair referred to a lawsuit, NCBCP vs. Wohl, filed by the Lawyers' Committee and involving NCBCP, which was a plaintiff, against two men who targeted Black voters in New York, Pennsylvania and Ohio with disinformation via robocalls in an effort to sway the outcome of the 2020 elections.
The goal was to discourage African Americans from voting by mail, lying that their personal information would be added to a public database used by law enforcement to execute warrants; to collect credit card debts; and by public health entities to force people to take mandatory vaccinations.
"These threats played upon systemic inequities likely to resonate with and intimidate Black Americans," Hewitt said. "The methods used for those deceptive robocalls in 2020 look primitive by 2023 standards."
Campbell concurred. She said AI would allow this type of weaponization to be more significant using texts, video and audio.
"AI increases the ability to do that in larger formats," she said. "You have open source where just about anyone who wants to can use AI for nefarious means. There is a lot of angst with those doing voting rights and elections work."
Campbell and Hewitt agree that the exploding capabilities of AI technology can drastically multiply the amount of harm to American democracy. Campbell adds that Google, Microsoft and Meta are the front-line companies that activists hope will step up and put guardrails in place before the 2024 elections are overwhelmed by AI-driven misinformation and disinformation.
"In malicious hands and absent strong regulation, AI can clone voices so that calls sound like trusted public figures, election officials, or even possibly friends and relatives," said Hewitt. "The technology could reach targeted individuals across platforms, following up the AI call with targeted online advertisements, fake bot accounts seeking to follow them on social media, customized emails or WhatsApp messages, and carefully tailored memes."
AI regulation should include transparency and explainability requirements so people are made aware of when, how, and why AI is being used to ensure that it is not used to grab data from those who have not given their consent. Voter information should not be tied to private information to target voters without safeguards.
The effort being led by the Lawyers' Committee and the NCBCP comes against the backdrop of similar alarm from the Biden administration, some lawmakers and AI experts who fear that AI will be weaponized to spread disinformation to heighten the distrust that significant numbers of Americans have towards the government and politicians.
President Joe Biden recently signed what's described as a sweeping executive order that focuses on algorithmic bias, preserving privacy and regulation on the safety of frontier AI models. The executive order also encourages open development of AI technologies, innovations in AI security and building tools to improve security.
Vice President Kamala Harris echoed others concerned about this issue who fear that malevolent actors misusing AI could upend democratic institutions and cause Americans confidence in democracy to plunge precipitously.
"When people around the world cannot discern fact from fiction because of a flood of AI-enabled disinformation and misinformation, I ask, is that not existential?" Harris said in a speech at the 2023 AI Safety Summit in London, England. Harris concluded, "We're going to do everything we can. This is one of the biggest concerns most people have."

Barrington Salmon is an NNPA contributing writer. This commentary was edited for length.
Artificial Intelligence Has Landed in the Watch Industry – PR Web
For a smarter approach to watch collecting, explore everywatch.com today
After more than 2 years of research and development, EveryWatch is being deployed to serve as the one-stop information and analysis platform for the watch market. The platform includes a historical watch sale database with actual sales prices of over 500,000 watches and aggregates new watches coming on to the market in real time from over 250 auction houses and 150 online marketplaces and dealers. The platform also includes a trove of tools to analyze trends and statistics to understand the value and trends of specific watches or a full collection. Users also have the ability to save detailed searches and receive notifications when watches that match the specific search characteristics are made available for sale anywhere in the world.
"I am excited to see EveryWatch completing the watch ecosystem as the largest watch database, providing accurate and reliable pricing information to the watch community. EveryWatch allows to make educated decisions about pre-owned watches quickly and with confidence." Chabi Nouri, Chairperson of the Board of EveryWatch and former CEO of Piaget.
Davide Parmigiani, Chairman of Monaco Legend Auction House and collector, commented, "EveryWatch is the epitome of efficiency. As auctioneers, we can now easily and quickly value watches and identify trends. And as a collector, I can now track the value of my full collection daily and also be notified when the watch I have been looking to buy for years comes on to the market."
The platform was created by Nacre Capital, a venture builder focused on building AI-based deep technology companies. The Nacre portfolio includes FDNA, providing solutions to detect rare diseases in children using AI, which are used in leading hospitals globally; Fairtility, transforming IVF using AI; Seed-X, an AI-based agtech company; and many others.
EveryWatch was conceived by leading tech investors and watch professionals, including Howard Morgan, Chairman of Nacre, co-founder of Renaissance Technologies and First Round Capital, early investor in Uber, LinkedIn, Square and other companies; Chabi Nouri, former CEO of Piaget; Alexander Friedman, collector, founder of AF. Luxury Consulting and the AF. Watch Report; and other leading collectors and tech leaders, to address the lack of reliable, verified information in the second-hand watch market. EveryWatch serves as the one-stop-shop information platform where anyone can easily find any second-hand watch currently available for sale and understand price trends and the real going market rate for any luxury watch ever created.
Media Contact
EveryWatch PR, EveryWatch, 39 3466079308, [emailprotected], https://everywatch.com/
SOURCE EveryWatch