Category Archives: AI

Foundations seek to advance AI for good and also protect the world from its threats – ABC News

While technology experts sound the alarm on the pace of artificial-intelligence development, philanthropists including long-established foundations and tech billionaires have been responding with an uptick in grants.

Much of the philanthropy is focused on what is known as technology for good or ethical AI, which explores how to solve or mitigate the harmful effects of artificial-intelligence systems. Some scientists believe AI can be used to predict climate disasters and discover new drugs to save lives. Others are warning that the large language models could soon upend white-collar professions, fuel misinformation, and threaten national security.

What philanthropy can do to influence the trajectory of AI is starting to emerge. Billionaires who earned their fortunes in technology are more likely to support projects and institutions that emphasize the positive outcomes of AI, while foundations not endowed with tech money have tended to focus more on AI's dangers.

For example, former Google CEO Eric Schmidt and his wife, Wendy Schmidt, have committed hundreds of millions of dollars to artificial-intelligence grantmaking programs housed at Schmidt Futures to accelerate the next global scientific revolution. In addition to committing $125 million to advance research into AI, last year the philanthropic venture announced a $148 million program to help postdoctoral fellows apply AI to science, technology, engineering, and mathematics.

Also in the AI enthusiast camp is the Patrick McGovern Foundation, named after the late billionaire who founded the International Data Group. It is one of a few philanthropies that have made artificial intelligence and data science an explicit grantmaking priority. In 2021, the foundation committed $40 million to help nonprofits use artificial intelligence and data to advance their work to protect the planet, foster economic prosperity, and ensure healthy communities, according to a news release from the foundation. McGovern also has an internal team of AI experts who work to help nonprofits use the technology to improve their programs.

"I am an incredible optimist about how these tools are going to improve our capacity to deliver on human welfare," says Vilas Dhar, president of the Patrick J. McGovern Foundation. "What I think philanthropy needs to do, and civil society writ large, is to make sure we realize that promise and opportunity to make sure these technologies don't merely become one more profit-making sector of our economy but rather are invested in furthering human equity."

Salesforce is also interested in helping nonprofits use AI. The software company announced last month that it will award $2 million to education, workforce, and climate organizations to advance the equitable and ethical use of trusted AI.

Billionaire entrepreneur and LinkedIn co-founder Reid Hoffman is another big donor who believes AI can improve humanity and has funded research centers at Stanford University and the University of Toronto to achieve that goal. He is betting AI can positively transform areas like health care (giving everyone a medical assistant) and education (giving everyone a tutor), he told the New York Times in May.

The enthusiasm for AI solutions among tech billionaires is not uniform, however. EBay founder Pierre Omidyar has taken a mixed approach through his Omidyar Network, which is making grants to nonprofits using the technology for scientific innovation as well as those trying to protect data privacy and advocate for regulation.

"One of the things that we're trying really hard to think about is how do you have good AI regulation that is both sensitive to the type of innovation that needs to happen in this space but also sensitive to the public accountability systems," says Anamitra Deb, managing director at the Omidyar Network.

Grantmakers that hold a more skeptical or negative perspective on AI are also not a uniform group; however, they tend to be foundations unaffiliated with the tech industry.

The Ford, MacArthur, and Rockefeller foundations number among several grantmakers funding nonprofits examining the harmful effects of AI.

For example, computer scientists Timnit Gebru and Joy Buolamwini, whose pivotal research on racial and gender bias in facial-recognition tools persuaded Amazon, IBM, and other companies to pull back on the technology in 2020, have received sizable grants from them and other big, established foundations.

Gebru launched the Distributed Artificial Intelligence Research Institute in 2021 to research AI's harmful effects on marginalized groups, free from Big Tech's pervasive influence. The institute raised $3.7 million in initial funding from the MacArthur Foundation, Ford Foundation, Kapor Center, Open Society Foundations, and the Rockefeller Foundation. (The Ford, MacArthur, and Open Society foundations are financial supporters of the Chronicle.)

Buolamwini is continuing research on and advocacy against artificial-intelligence and facial-recognition technology through her Algorithmic Justice League, which also received at least $1.9 million in support from the Ford, MacArthur, and Rockefeller foundations as well as from the Alfred P. Sloan and Mozilla foundations.

"These are all people and organizations that I think have really had a profound impact on the AI field itself but also really caught the attention of policymakers as well," says Eric Sears, who oversees MacArthur's grants related to artificial intelligence.

The Ford Foundation also launched a Disability x Tech Fund through Borealis Philanthropy, which is supporting efforts to fight bias against people with disabilities in algorithms and artificial intelligence.

There are also AI skeptics among the tech elite awarding grants. Tesla CEO Elon Musk has warned AI could result in "civilizational destruction." In 2015, he gave $10 million to the Future of Life Institute, a nonprofit that aims to prevent existential risk from AI, and spearheaded a recent letter calling for a pause on AI development. Open Philanthropy, a foundation started by Facebook co-founder Dustin Moskovitz and his wife, Cari Tuna, has provided majority support to the Center for AI Safety, which also recently warned about the "risk of extinction" associated with AI.

A significant portion of foundation giving on AI is also directed at universities studying ethical questions. The Ethics and Governance of AI Initiative, a joint project of the MIT Media Lab and Harvard's Berkman Klein Center, received $26 million from 2017 to 2022 from Luminate (the Omidyar Group), Reid Hoffman, Knight Foundation, and the William and Flora Hewlett Foundation. (Hewlett is a financial supporter of the Chronicle.)

The goal, according to a May 2022 report, was to ensure that technologies of automation and machine learning are researched, developed, and deployed in a way which vindicates social values of fairness, human autonomy, and justice. One university funding effort comes from the Kavli Foundation, which in 2021 committed $1.5 million a year for five years to two new centers focused on scientific ethics, with artificial intelligence as one priority area, at the University of California at Berkeley and the University of Cambridge. The Knight Foundation announced in May it will spend $30 million to create a new ethical technology institute at Georgetown University to inform policymakers.

Although hundreds of millions of philanthropic dollars have been committed to ethical AI efforts, influencing tech companies and governments remains a massive challenge.

"Philanthropy is just a drop in the bucket compared to the Goliath-sized tech platforms, the Goliath-sized AI companies, the Goliath-sized regulators and policymakers that can actually take a crack at this," says Deb of the Omidyar Network.

Even with those obstacles, foundation leaders, researchers, and advocates largely agree that philanthropy can and should shape the future of AI.

"The industry is so dominant in shaping not only the scope of development of AI systems in the academic space, they're shaping the field of research," says Sarah Myers West, managing director of the AI Now Institute. "And as policymakers are looking to really hold these companies accountable, it's key to have funders step in and provide support to the organizations on the front lines to ensure that the broader public interest is accounted for."

_____

This article was provided to The Associated Press by the Chronicle of Philanthropy. Kay Dervishi is a staff writer at the Chronicle. Email: kay.dervishi@philanthropy.com. The AP and the Chronicle are solely responsible for this content. They receive support from the Lilly Endowment for coverage of philanthropy and nonprofits. For all of AP's philanthropy coverage, visit https://apnews.com/hub/philanthropy.


Every AI Stock Cathie Wood Owns, Ranked From Best to Worst – The Motley Fool

Ark Invest CEO Cathie Wood looks for one thing in her investments above all others: innovation. It's no coincidence that half of Ark Invest's actively managed ETFs feature the word in their names.

There's arguably no greater area for innovation right now than artificial intelligence (AI). Unsurprisingly, Ark Invest has loaded up in recent years on AI stocks. Here is every AI stock that Wood owns, ranked from best to worst.

Ark Invest ETFs hold positions in most of the stocks that I'd call top-tier titans in the AI world. These megacap AI leaders make up Wood's top five, in my view:

Data source for market caps: Google Finance. Chart by author.

I've listed Alphabet in first place for three main reasons. First, the company is indisputably a leader in AI with its Google DeepMind unit. Second, AI gives Alphabet multiple paths to growth, including self-driving car technology with its Waymo business and hosting AI apps on Google Cloud. Third, the stock is arguably the most attractively valued of the top-tier AI contenders.

However, all the other members of the top five have a lot going for them. Amazon and Microsoft, like Alphabet, should benefit tremendously from AI advances. Meta's open-source approach to AI could reap significant rewards. And Tesla has a huge potential market opportunity with self-driving robotaxis.

Each of the next five stocks in the ranking also lays claim to impressive growth prospects due to AI. However, I think they all also come with asterisks that prevent them from cracking the top five on the list.

Data source for market caps: Google Finance. Chart by author.

Nvidia's stock has skyrocketed this year, with AI driving seemingly insatiable demand for its graphics processing units. The key problem for Nvidia, though, is its valuation. With shares trading at nearly 44 times sales, my fear is that a major pullback is due for the high-flying stock.

It's a similar story for Palantir and, to a lesser extent, AMD. Palantir's forward earnings multiple is close to 82x. That's steep for a company that delivered year-over-year sales growth of only 13% in its latest quarter. AMD's revenue declined 18% year over year in the second quarter, although I expect better days are ahead.

Taiwan Semi boasts an impressive moat. Its chips are used by AI leaders, including Nvidia and AMD. JD.com is investing heavily in AI apps. Its stock is also dirt cheap.

But both stocks share the same asterisk: China. The potential for the Chinese government's interference with JD's business raises uncertainties. And the possibility that China could invade Taiwan increases the risks associated with investing in Taiwan Semi.

I call the final four AI stocks in Wood's portfolio her up-and-comers. All of these stocks are making a name for themselves in AI but remain smaller (and riskier) than the other AI leaders in which Ark Invest has positions.

Data source for market caps: Google Finance. Chart by author.

Teradyne's technology is used to test autonomous mobile robots. I listed it ahead of the other up-and-comers because it's already profitable, whereas the other three companies aren't.

However, I like the potential for all of these bottom-rung AI stocks that Wood owns. Accolade is using AI to develop personalized healthcare solutions. Schrodinger and Recursion are using AI in drug discovery and development.

Wood would probably argue that Tesla deserves to be ranked No. 1 instead of Alphabet. The electric vehicle maker is the top position in her combined Ark Invest portfolio, making up more than 7.6% of the ETFs' total holdings. None of the other top five AI stocks in my ranking, however, have a weight of more than 0.22%.

My main knock against Wood is that she hasn't invested as heavily in the best of these stocks as she could have. Overall, though, I think that she has an impressive lineup of AI stocks in her Ark Invest holdings.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Keith Speights has positions in Alphabet, Amazon.com, Meta Platforms, and Microsoft. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon.com, JD.com, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, Taiwan Semiconductor Manufacturing, and Tesla. The Motley Fool recommends Teradyne. The Motley Fool has a disclosure policy.


To Navigate the Age of AI, the World Needs a New Turing Test – WIRED

There was a time in the not-too-distant past, say, nine months ago, when the Turing test seemed like a pretty stringent detector of machine intelligence. Chances are you're familiar with how it works: Human judges hold text conversations with two hidden interlocutors, one human and one computer, and try to determine which is which. If the computer manages to fool at least 30 percent of the judges, it passes the test and is pronounced capable of thought.

For 70 years, it was hard to imagine how a computer could pass the test without possessing what AI researchers now call artificial general intelligence, the entire range of human intellectual capacities. Then along came large language models such as GPT and Bard, and the Turing test suddenly began seeming strangely outmoded. "OK, sure," a casual user today might admit with a shrug, "GPT-4 might very well pass a Turing test if you asked it to impersonate a human. But so what?" LLMs lack long-term memory, the capacity to form relationships, and a litany of other human capabilities. They clearly have some way to go before we're ready to start befriending them, hiring them, and electing them to public office.

And yeah, maybe the test does feel a little empty now. But it was never merely a pass/fail benchmark. Its creator, Alan Turing, a gay man sentenced in his time to chemical castration, based his test on an ethos of radical inclusivity: The gap between genuine intelligence and a fully convincing imitation of intelligence is only as wide as our own prejudice. When a computer provokes real human responses in us, engaging our intellect, our amazement, our gratitude, our empathy, even our fear, that is more than empty mimicry.

So maybe we need a new test: the Actual Alan Turing Test. Bring the historical Alan Turing, father of modern computing (a tall, fit, somewhat awkward man with straight dark hair, loved by colleagues for his childlike curiosity and playful humor, personally responsible for saving an estimated 14 million lives in World War II by cracking the Nazi Enigma code, subsequently persecuted so severely by England for his homosexuality that it may have led to his suicide), into a comfortable laboratory room with an open MacBook sitting on the desk. Explain that what he sees before him is merely an enormously glorified incarnation of what is now widely known by computer scientists as a Turing machine. Give him a second or two to really take that in, maybe offering a word of thanks for completely transforming our world. Then hand him a stack of research papers on artificial neural networks and LLMs, give him access to GPT's source code, open up a ChatGPT prompt window (or, better yet, a Bing-before-all-the-sanitizing window) and set him loose.

Imagine Alan Turing initiating a light conversation about long-distance running, World War II historiography, and the theory of computation. Imagine him seeing the realization of all his wildest, most ridiculed speculations scrolling with uncanny speed down the screen. Imagine him asking GPT to solve elementary calculus problems, to infer what human beings might be thinking in various real-world scenarios, to explore complex moral dilemmas, to offer marital counseling and legal advice and an argument for the possibility of machine consciousness, skills which, you inform Turing, have all emerged spontaneously in GPT without any explicit direction by its creators. Imagine him experiencing that little cognitive-emotional lurch that so many of us have now felt: Hello, other mind.

A thinker as deep as Turing would not be blind to GPT's limitations. As a victim of profound homophobia, he would probably be alert to the dangers of implicit bias encoded in GPT's training data. It would be apparent to him that despite GPT's astonishing breadth of knowledge, its creativity and critical reasoning skills are on par with a diligent undergraduate's at best. And he would certainly recognize that this undergraduate suffers from severe anterograde amnesia, unable to form new relationships or memories beyond its intensive education. But still: Imagine the scale of Turing's wonder. The computational entity on the laptop in front of him is, in a very real sense, his intellectual child, and ours. Appreciating intelligence in our children as they grow and develop is always, in the end, an act of wonder, and of love. The Actual Alan Turing Test is not a test of AI at all. It is a test of us humans. Are we passing, or failing?


Your Financial Advisor Will Soon Use AI on Your Portfolio – Barron’s

ChatGPT software and other generative artificial intelligence tools are muscling their way into the financial services industry, and will be involved in both retirement planning and constructing investment portfolios.

JPMorgan is developing a ChatGPT-like A.I. service, called IndexGPT, that can select securities and provide investment advice, according to a trademark filing.

Rival Morgan Stanley is also testing an OpenAI-powered chatbot for its 16,000 financial advisors to help better serve clients. Like ChatGPT, this tool will provide instant answers to advisors' questions, drawing on Morgan Stanley research.

The technology isn't yet running money on its own, but a study conducted by two academics in South Korea shows a portfolio constructed using ChatGPT outperformed random stock selection. Among other things, ChatGPT was better at picking diversified assets, producing a more efficient portfolio.


In another experiment, a dummy portfolio of stocks selected by ChatGPT significantly outperformed some of the leading investment funds in the U.K. From March 6 to April 28, the continuing study showed that the AI-generated portfolio increased in value by 4.9%, surpassing the 3% gains of the S&P 500 index, while major U.K. investment funds lost 0.8% over the same period. As of July 27, the ChatGPT fund had racked up nearly a 10% return.

"AI considered key principles from top-performing funds to select personalized stocks. This process would be very difficult for an amateur investor, and could easily be derailed by conscious and unconscious bias," says Jon Ostler, CEO at Finder.com, a global fintech firm that conducted the study.

While the fund continues to outperform, Ostler admits it doesn't yet have access to real-time information. "The next step would be to make a portfolio that constantly monitors the market and continually tweaks the portfolio based on external factors," he adds.


"AI is fantastic for synthesizing large amounts of data," says Ostler. In theory, generative AI has the potential to support and enhance many aspects of retirement planning if it has access to up-to-date and specific financial data sources and analyst research.

AI models are getting good at predictions and simulations, which can be useful in testing different future scenarios and their impact on specific financial goals. "AI could be used to develop, test and illustrate retirement plans quickly, as long as all the individual circumstances of a person can be fed into the model effectively," says Ostler. Unlike Monte Carlo simulations that use models constructed by experts to predict probabilities, AI builds its own models to predict future outcomes, Ostler says.

Generative AI also holds the potential for making the retirement planning process more efficient. "The use of AI-powered automation will allow retirement plans to be continuously adapted based on changing circumstances and new data," Ostler says. A plan could thus be designed by an advisor using AI and then updated and enhanced automatically with little human effort.


Models like GPT-4, a more advanced version of ChatGPT, can analyze vast amounts of data, consider multiple variables, and generate possible scenarios. "While it can't predict the future, it can aid in creating hyper-personalized strategies based on the user's input and the data it has been trained on for these purposes," says Dave Mazza, chief strategy officer at Roundhill Investments.

For example, a client may have dueling objectives, such as needing current income for living expenses and capital appreciation for the future. AI could help serve as the advisor's co-pilot in analyzing the range of acceptable outcomes, determining what is and isn't relevant to a client's individual requirements, and crafting personalized strategies with greater customization to better meet their investment objectives, Mazza notes.

His investment firm is in the early stages of incorporating generative AI into numerous workflows to gain additional precision, speed, and cost-effectiveness. "These AI models can process massive data sets in seconds and provide personalized advice, which could augment advisors' productivity and optimize their business," he adds.


Over time, generative AI could acquire a fine-grained ability to understand more complex aspects of retirement planning, such as dynamic portfolio management. Generative AI might evolve to develop personalized investment strategies that flexibly respond in real time to changes in an individual's financial circumstances and market dynamics. "There are expectations of advancements that would enable generative AI to understand user emotions, needs, and aspirations more accurately to offer more personalized advice," says Mazza.

All told, ChatGPT is a complementary tool. As a personal assistant, AI could perform many routine tasks of advisors, leaving them with the responsibility of reviewing the AI's recommendations and providing the final stamp of approval.

"AI could permit financial advisors to be far more efficient," says John Rekenthaler, director of research for Morningstar Research Services. "In the future, advisors may be less valued for their deep knowledge on a subject, as AI programs can replace that knowledge. Instead, they may be more valued for their ability to effectively use AI technology in their work," he adds.

Rekenthaler says AI's role will grow. "Further down the line, AI will become intertwined with the financial planning process," he says. "The advisor will retain the personal relationship, but AI will assist in asking the questions and will ultimately create the financial plans."



The Case Against AI Everything, Everywhere, All at Once – TIME

I cringe at being called the "Mother of the Cloud," but having been part of the development and implementation of the internet and networking industry (as an entrepreneur, CTO of Cisco, and on the boards of Disney and FedEx), I am fortunate to have had a 360-degree view of the technologies that are at the foundation of our modern world.

I have never had such mixed feelings about technological innovation. In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is Authoritarian Intelligence. The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy.

What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.

Artificial Intelligence is not just chat bots, but a broad field of study. One implementation capturing today's attention, machine learning, has expanded beyond predicting our behavior to generating content, called Generative AI. The awe of machines wielding the power of language is seductive, but Performative AI might be a more appropriate name, as it leans toward production and mimicry, and sometimes fakery, over deep creativity, accuracy, or empathy.

The very fact that the evolution of technology feels so inevitable is evidence of an act of manipulation, an authoritarian use of narrative brilliantly described by historian Timothy Snyder. He calls out the politics of inevitability: "... a sense that the future is just more of the present, ... that there are no alternatives, and therefore nothing really to be done." There is no discussion of underlying values. Facts that don't fit the narrative are disregarded.


Here in Silicon Valley, this top-down authoritarian technique is amplified by a bottom-up culture of inevitability. An orchestrated frenzy begins when the next big thing to fuel the Valley's economic and innovation ecosystem is heralded by companies, investors, media, and influencers.

They surround us with language coopted from common values: democratization, creativity, open, safe. In behavioral psych classes, product designers are taught to eliminate friction, removing any resistance to us acting on impulse.

The promise of short-term efficiency, convenience, and productivity lures us. Any semblance of pushback is decried as ignorance, or a threat to global competition. No one wants to be called a Luddite. Tech leaders, seeking to look concerned about the public interest, call for limited, friendly regulation, and the process moves forward until the tech is fully enmeshed in our society.

We bought into this narrative before, when social media, smartphones and cloud computing came on the scene. We didn't question whether the only way to build community, find like-minded people, or be heard, was through one enormous town square, rife with behavioral manipulation, pernicious algorithmic feeds, amplification of pre-existing bias, and the pursuit of likes and follows.

It's now obvious that it was a path towards polarization, toxicity of conversation, and societal disruption. Big Tech was the main beneficiary as industries and institutions jumped on board, accelerating their own disruption, and civic leaders were focused on how to use these new tools to grow their brands and not on helping us understand the risks.

We are at the same juncture now with AI. Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value.

The different types and levels of risks are overwhelming, and we need to focus on all of them: the long-term existential risks, and the existing ones. Disinformation, supercharged by deep fakes, data privacy issues, and biased decision making continue to erode trust, with few viable solutions. We do not yet fully understand risks to our society at large such as the level and pace of job loss, environmental impacts, and whether we want opaque systems making decisions for us.

Deeper risks question the very aspects of humanity. When we prioritize intelligence to the exclusion of cognition, might we devolve to become more like machines? On the current trajectory we may not even have the option to weigh in on who gets to decide what is in our best interest. Eliminating humanity is not the only way to wipe out our humanity.

Human well-being and dignity should be our North Star, with innovation in a supporting role. We can learn from the open systems environment of the 1970s and 80s. When we were first developing the infrastructure of the internet, power was distributed between large and small companies, vendors and customers, government and business. These checks and balances led to better decisions and less risk.

AI everything, everywhere, all at once, is not inevitable, if we use our powers to question the tools and the people shaping them. Private and public sector leaders can slow the frenzy through acts of friction: simply not giving in to the Authoritarian Intelligence emanating out of Silicon Valley, and to our collective groupthink.

We can buy the time needed to develop impactful national and international policy that distributes power and protects human rights, and inspire independent funding and ethics guidelines for a vibrant research community that will fuel innovation.

With the right priorities and guardrails, AI can help advance science, cure diseases, build new industries, expand joy, and maintain human dignity and the differences that make us unique.



Security Pressures Mount Around AI’s Promises & Peril – Dark Reading

BLACK HAT USA – Las Vegas – Friday, Aug. 11 – Artificial intelligence (AI) is not a newcomer to the tech world, but as ChatGPT and similar offerings push it beyond lab environments and use cases like Siri, Maria 'Azeria' Markstedter, founder and CEO of Azeria Labs, said that security practitioners need to be on alert for how its evolution will affect their daily realities.

Jokingly, she claimed that AI is "now in safe hands of big technology companies racing against time to compete to be safe from elimination," in the wake of OpenAI releasing its ChatGPT model while other companies held back. "With the rise of ChatGPT, Google's peacetime approach was over and everyone jumped in," she said, speaking from the keynote stage at Black Hat USA this week.

Companies are investing millions of dollars of funding into AI, but whenever the world shifts towards a new type of technology, "corporate arms races are not driven by concern for safety or security, as security slows down progress."

She said the use cases to integrate AI are evolving, and it is starting to make a lot of money, especially for those who dominate the market. However, there is a need for "creators to break it, and fix it, and ultimately prevent the technology in its upcoming use cases to blow up in our faces."

She added that companies may be experiencing a bit of irrational exuberance. "Every business wants to be an AI business sample machine right now and the way that our businesses are going to leverage these tools to integrate AI will have significant impact on our threat model," she said. However, the rapid adoption of AI means that its effect on the entire cyber-threat model remains an unknown.

Acknowledging that ChatGPT was "pretty hard to escape over the last nine months," Markstedter said the skyrocketing increase in users led to some companies limiting access to it. Enterprises were skeptical, she said, as OpenAI is a black box, and anything you feed to ChatGPT will be part of the OpenAI data set.

She said: "Companies don't want to leak their sensitive data to an external provider, so they started banning employees from using ChatGPT for work, but every business still wants to, and is even pressured to, augment their workforce products and services with AI; they just don't trust sensitive data to ... external providers that can make part of the data set."

However, the intense focus and fast pace of development and integration of OpenAI will force security practitioners to evolve quickly.

"So, the way our organizations are going to use these things is changing pretty quickly: from something you check with for the browser, to something businesses integrate to their own infrastructure, to something that will soon be native to our operating system and mobile device," she said.

Markstedter said the biggest problem for AI and cybersecurity is that we don't have enough people with the skills and knowledge to assess these systems and create the guardrails that we need. "So there are already new job flavors emerging out of these little challenges," she said.

Concluding, Markstedter highlighted four takeaways: First, that AI systems and their use cases and capabilities are evolving; second, that we need to take the possibility of autonomous AI agents becoming a reality within our enterprise seriously; third, that we need to rethink our concepts around identity and apps; and fourth, that we need to rethink our concepts around data security.

"So we need to learn about the very technology that's changing our systems and our threat model in order to address these emerging problems, and technological changes aren't new to us," she said."We have no manuals to tell us how to fix our previous problems. We are all self-taught in one way or another, and now our industry attracts creative minds with an entire mindset. So we know how to study new systems and find creative ways to break them."

She concluded by saying that this is our chance to reinvent ourselves, our security posture, and our defenses. "For the next danger of security challenges, we need to come together as a community and foster research into these areas," she said.


White House is fast-tracking executive order on artificial intelligence – CyberScoop

LAS VEGAS – The Biden administration is expediting work to develop an executive order to address risks posed by artificial intelligence and provide guidelines to federal agencies on how it might be used, Arati Prabhakar, director of the White House Office of Science and Technology Policy, told CyberScoop on the sidelines of the DEF CON security conference.

As generative AI tools such as ChatGPT have become widely available, Prabhakar said that President Biden has grown increasingly concerned about the technology and that the administration is working rapidly to craft an executive order that will provide guidance to federal agencies on how best to use AI.

"It's not just the normal process accelerated, it's just a completely different process," Prabhakar said, adding that she's been encouraged by the urgency with which federal agencies are treating AI regulation. "They know it's serious, they know what the potential is, and so their departments and agencies are really stepping up."

Prabhakar spoke to reporters after visiting the AI village at DEF CON, where thousands of hackers are participating in a red-teaming exercise aimed at discovering vulnerabilities in leading AI models. Over the course of the conference, attendees have stood in long lines for a chance to spend 50 minutes at a laptop attempting to prompt the models into generating problematic content.

Prabhakar's comments come amid a flurry of work on Capitol Hill and at the White House to craft stronger AI guardrails.

Senate Majority Leader Chuck Schumer, D-N.Y., has begun convening a series of listening sessions aimed at educating lawmakers about the technology and laying the groundwork for a major legislative push to regulate AI.

The White House recently announced a set of voluntary safety commitments from leading AI companies, and a forthcoming executive order is expected to provide additional guidance on how to deploy the technology safely. This week, the White House and the Defense Advanced Research Projects Agency announced that they would launch a challenge aimed at using AI to defend computer systems and discover vulnerabilities in open source software.

Prabhakar said policymakers have a unique opportunity today to harness the benefits and govern the risks of what could be a transformational technology.

"A lot of the dreams that we all had about information technology have today come true," Prabhakar said. But some nightmares have come with that, and a growing realization about the harms posed by technology, she said, is fueling a sense of urgency in the federal government to put up guardrails.


As hospitals use AI chatbots and algorithms, doctors and nurses say … – The Washington Post

Updated August 10, 2023 at 10:54 a.m. EDT | Published August 10, 2023 at 7:00 a.m. EDT

NEW YORK – Every day Bojana Milekic, a critical care doctor at Mount Sinai Hospital, scrolls through a computer screen of patient names, looking at the red numbers beside them, a score generated by artificial intelligence to assess who might die.

On a morning in May, the tool flagged a 74-year-old lung patient with a score of .81, far past the .65 score at which doctors start to worry. He didn't seem to be in pain, but he gripped his daughter's hand as Milekic began to work. She circled his bed, soon spotting the issue: A kinked chest tube was retaining fluid from his lungs, causing his blood oxygen levels to plummet.

Once the tube was repositioned, his breathing stabilized, a simple intervention, Milekic says, that might not have happened without the aid of the computer program.

Milekic's morning could be an advertisement for the potential of AI to transform health care. Mount Sinai is among a group of elite hospitals pouring hundreds of millions of dollars into AI software and education, turning their institutions into laboratories for this technology. They're buoyed by a growing body of scientific literature, such as a recent study finding AI readings of mammograms detected 20 percent more cases of breast cancer than radiologists, along with the conviction that AI is the future of medicine.

Researchers are also working to translate generative AI, which backs tools that can create words, sounds and text, into a hospital setting. Mount Sinai has deployed a group of AI specialists to develop medical tools in-house, which doctors and nurses are testing in clinical care. Transcription software completes billing paperwork; chatbots help craft patient summaries.

But the advances are triggering tension among front-line workers, many of whom fear the technology comes at a strong cost to humans. They worry about the technology making wrong diagnoses, revealing sensitive patient data and becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.

Most of all, they say software cant do the work of a human doctor or nurse.

"If we believe that in our most vulnerable moments we want somebody who pays attention to us," said Michelle Mahon, the assistant director of nursing practice at the National Nurses United union, "then we need to be very careful in this moment."

Hospitals have dabbled with AI for decades. In the 1970s, Stanford University researchers created a rudimentary AI system that asked doctors questions about a patient's symptoms and provided a diagnosis based on a database of known infections.

In the 1990s and early 2000s, AI algorithms began deciphering complex patterns in X-rays, CT scans and MRI images to spot abnormalities that the human eye might miss.

Several years later, robots fueled with AI vision began operating alongside surgeons. With the advent of electronic medical records, companies incorporated algorithms that scanned troves of patient data to spot trends and commonalities in patients who had certain ailments, and recommend tailored treatments.

As higher computing power has turbocharged AI, algorithms have moved from spotting trends to predicting whether a specific patient will suffer from an ailment. The rise of generative AI has created tools that more closely mimic patient care.

Vijay Pande, a general partner at venture capital firm Andreessen Horowitz, said health care is at a turning point. "There's a lot of excitement about AI right now," he said. "The technology has gone from being cute and interesting to where actually [people] can see it being deployed."

In March, the University of Kansas health system started using medical chatbots to automate clinical notes and medical conversations. The Mayo Clinic in Minnesota is using a Google chatbot trained on medical licensing exam questions, called Med-PaLM 2, to generate responses to health care questions, summarize clinical documents and organize data, according to a July report in the Wall Street Journal.

Some of these products have already raised eyebrows among elected officials. Sen. Mark R. Warner (D-Va.) on Tuesday urged caution in the rollout of Med-PaLM 2, citing repeated inaccuracies in a letter to Google.

"While artificial intelligence (AI) undoubtedly holds tremendous potential to improve patient care and health outcomes, I worry that premature deployment of unproven technology could lead to the erosion of trust in our medical professionals and institutions," he said in a statement.

Thomas J. Fuchs, the dean for AI at Mount Sinai's Icahn School of Medicine, said it is imperative that research hospitals, which are staffed with pioneering physicians and researchers, act as laboratories to test this technology.

Mount Sinai has taken the premise literally, raising more than $100 million through private philanthropy and building research centers and on-site computing facilities. This allows programmers to build AI tools in-house that can be refined based on physician input, used in their hospitals and also sent to places that don't have the money to do similar research.

"You cannot transplant people," Fuchs said. "But you can transplant knowledge and experience to some degree with these models that then can help physicians in the community."

But Fuchs added that there's an enormous amount of hype about AI in medicine right now, and more start-up companies than you can count that like to evangelize to sometimes absurd degrees about the revolutionary powers the technology can hold in medicine. He worries they may create products that make biased diagnoses or put patient data at risk. Strong federal regulation, along with physician oversight, is paramount, he said.

David L. Reich, the president of The Mount Sinai hospital and Mount Sinai Queens, said his hospital has been wanting to use AI more broadly for a few years, but the pandemic delayed its rollout.

Though generative chatbots are becoming popular, Reich's team is focusing mostly on using algorithms. Critical care physicians are piloting predictive software to identify patients who are at risk of issues such as sepsis or falling, the kind of software used by Milekic. Radiologists use AI to more accurately spot breast cancer. Nutritionists use AI to flag patients who are likely to be malnourished.

Reich said the ultimate goal is not to replace health workers, but something simpler: getting the right doctor to the right patient at the right time.

But some medical professionals aren't as comfortable with the new technology.

Mahon, of National Nurses United, said there is very little empirical evidence to demonstrate AI is actually improving patient care.

"We do experiments in this country, we use the clinical trial, but for some reason, these technologies, they're being given a pass," she said. "They're being marketed as superior, as ever present, and other types of things that just simply don't bear out in their utilization."

Though AI can analyze troves of data and predict how sick a patient might be, Mahon has often found that these algorithms can get it wrong. Nurses see beyond a patient's vital signs, she argues. They see how a patient looks, smell unnatural odors from their body and can use these biological data points as predictors that something might be wrong. AI can't do that, she said.

Some physicians interviewed by Duke University in a May survey expressed reservations that AI models might exacerbate existing issues with care, including bias. "I don't think we even really have a great understanding of how to measure an algorithm's performance, let alone its performance across different race and ethnic groups," one respondent told researchers in the study of caregivers at hospitals including the Mayo Clinic, Kaiser Permanente and the University of California San Francisco.

At a time of a severe nursing shortage, Mahon said hospital administrators' excitement to incorporate the technology is less about patient outcomes and more about plugging holes and saving costs.

"The [health care] industry really is helping people buy into all the hype," she said, "so that they can cut back on their labor without any questions."

Robbie Freeman, Mount Sinai's vice president of digital experience, said the hardest parts of getting AI into hospitals are the doctors and nurses themselves. "You may have come to work for 20 years and done it one way," he said, "and now we're coming in and asking you to do it another way."

"People may feel like it's flavor of the month," he added. "They may not fully be bought into the idea of adopting some sort of new practice or tool."

And AI is not always a surefire method for saving time. When Rebecca Brown, a 45-year-old heart patient from Corning, N.Y., was flagged as one of the sickest patients in Mount Sinai's critical care ward on a May morning, Milekic went to her room to run an examination.

Milekic quickly saw nothing was out of the ordinary, letting Brown continue eating her peanut butter and jelly sandwich.

Asked whether she would want AI to care for her over a doctor, Brown's answer was simple: "There is something that technology can never do, and that is be human," she said. "[I] hope that the human touch doesn't go away."

Correction

A previous version of this story misstated David L. Reich's position. He is the president of The Mount Sinai hospital and Mount Sinai Queens.


I lived with AI in my browser for two weeks – here's what happened – Laptop Mag

AI chatbots have spared no one. Within a few months, they've made their way into search engines, photo editors, and spreadsheets; the list is endless. Their next target is what most of us spend hours on each day: Web browsers.

Though the web itself has evolved beyond measure, browsers have looked and worked the same for years. We go through an immense volume of content and correspondence, often without context, and browsers do little to help us better navigate and understand it. Unlike the rest of our apps and devices, web browsers have yet to embrace automation. Now, the ChatGPT-fueled AI wave has spurred a handful of browser makers to wonder if AI could lend us a hand.

Earlier this year, Microsoft, which also backs the firm behind ChatGPT, rolled out the Bing AI chatbot to the Edge browser. Others soon followed. Browser makers from Opera to alternative, niche brands like SigmaOS have raced to add an AI sidekick to their eponymous browsers. Heck, even Samsung's planning to integrate ChatGPT inside its internet browser on Galaxy devices.

The question is, do you need AI in your browser? I lived with Edge and Opera for a couple of weeks to find out.

Each browser AI assistant so far functions more or less the same way. It lives as a sidebar and allows you to chat with a ChatGPT-powered bot and fetch answers on any topic. You can ask specific questions, generate text for emails and social media posts, and summarize articles. Plus, the bot can read the contents of the web page youre browsing so that you can request further insights or context about it.

Microsoft Edge offers the most elaborate layout, however. When you click the big blue Bing AI option at the top-right corner, it opens up a multi-tab sidebar with dedicated sections for Chat, Compose, and Insights.

The first tab functions like any typical chatbot, albeit with access to your current web page. Besides the usual queries, this lets you simply punch in, for example, "summarize this," and the chatbot will spurt out a few bulleted highlights after sifting through the website for a minute or two. What I found handy was the set of follow-up suggestions Bing AI presented beneath each response, which cut back the effort I had to put in to fetch the information. Once it summarized the review of an e-ink tablet, it checked with me if I'd like to know how an e-ink screen works.

The second, Compose, as the name suggests, lets you auto-generate text for any purpose. All you have to do is give a prompt. In case you're looking for a specific output, you'll also find filters to set the tone, format (like a blog post or an email), and length. The Add to Site button pastes the draft to a selected text box so that you don't have to copy it yourself.

The third tab is where I spent most of my time. It stitches together a dossier of the link you are on. Apart from a summary and highlights, it shows you the website's trust rating, related searches to dive deeper, and listings of a product on other platforms (if you are shopping). It worked well when I wanted to skim an article or read other related stories. On a phone review, for example, I could visit videos about it right from the Bing AI sidebar. I was disappointed to discover, though, that it didn't surface the phone's e-commerce link and price.

Opera's Aria browser chatbot does a few tasks better. For me, its best quality is that you don't have to pull out the sidebar to use it. You can either launch a Spotlight-like search bar with a keyboard shortcut and enter your query there, or select text, which is when the browser brings up three hovering options: Explain briefly, Explore topic, and Translate.

Though this menu sounds fairly insignificant, it saves a lot of time if you actively use the browser chatbot. Bing AI's Insights often missed the exact phrase or word on a web page I wanted to explore more and, as a result, I had to type a question or paste it in the chat manually. Opera's quick search bar also feels a lot more natural while I'm knee-deep in research, as opposed to moving a cursor all the way to the corner on Edge.

While these AI sidebars were productive and replaced the one option I probably used to click the most (the "Search with Google" option in the right-click menu), they are also vastly limited and passive at the moment. That's both an upside and a downside.

On one hand, that means they don't interfere with your existing browsing experience, and are there when you need them. At the same time, I wished they were more proactive and gave me suggestions or insights as I'm scrolling. Even though Opera's approach is faster, it still opens a sidebar I have to shift my vision to, and I'd prefer a mini hovering window in the center itself.

The most surprising omission was that these AI functions can't perform any browser actions, like sifting through my browsing history to unearth what I've read about a topic in the past and offering that as context if I search for it again. In their current form, therefore, they unfortunately feel like no more than a ChatGPT wrapper.

A bigger worry for me is what these rampant AI assistants mean for privacy. Both Microsoft and Opera say they store conversations for a month before deleting them. Your data is also anonymously stored on OpenAI's servers, and OpenAI by default trains its models on it, though you can go into settings and disable that. Security researchers have been able to trick chatbots like Bing AI into asking people for their private information, including their email inbox and bank account details.

Browsers have always functioned as windows to the web. With new AI features, though, companies are hoping to take a step further. Instead of you choosing what your online experience looks like, AI chatbots limit the internet's scope to their knowledge and the chat window's borders. Though I found it helpful on occasion, these updates seem rushed and have little utility. I'd rather just open another tab.



AI can be a force for good or ill in society, so everyone must shape it, not just the tech guys – The Guardian

Living with AI

Although designers do have a lot of power, AI is just a tool conceived to benefit us. Communities must make sure that happens

Fri 11 Aug 2023 03.00 EDT

Superpower. Catastrophic. Revolutionary. Irresponsible. Efficiency-creating. Dangerous. These terms have been used to describe artificial intelligence over the past several months. The release of ChatGPT to the general public thrust AI into the limelight, and many are left wondering how it is different from other technologies and what will happen when the way we do business and live our lives changes entirely.

First, it is important to recognise that AI is just that: a technology. As Amy Sample Ward and I point out in our book, The Tech That Comes Next, technology is a tool created by humans, and therefore subject to human beliefs and constraints. AI has often been depicted as a completely self-sufficient, self-teaching technology; however, in reality, it is subject to the rules built into its design. For instance, when I ask ChatGPT, "What country has the best jollof rice?", it responds: "As an AI language model, I don't have personal opinions, but I can provide information. Ultimately, the question of which country has the best jollof rice is subjective and depends on personal preference. Different people may have different opinions based on their cultural background, taste preferences, or experiences."

This reflects an explicit design choice by the AI programmers to prevent this AI program providing specific answers to matters of cultural opinion. Users of ChatGPT may ask the model questions of opinion about topics more controversial than a rice dish, but because of this design choice, they will receive a similar response. Over recent months, ChatGPT has modified its code to react to accusations and examples of sexism and racism in the product's responses. We should hold developers to a high standard and expect checks and balances in AI tools; we should also demand that the process to set these boundaries is inclusive and involves some degree of transparency.

Whereas designers have a great deal of power in determining how AI tools work, industry leaders, government agencies and nonprofit organisations can exercise their power to choose when and how to apply AI systems. Generative AI may impress us with its ability to produce headshots, plan vacation agendas, create work presentations, and even write new code, but that does not mean it can solve every problem. Despite the technological hype, those deciding how to use AI should first ask the affected community members: "What are your needs?" and "What are your dreams?" The answers to these questions should drive constraints for developers to implement, and should drive the decision about whether and how to use AI.

In early 2023, Koko, a mental health app, tested GPT-3 to counsel 4,000 people but shut the test down because it "felt kind of sterile." It quickly became apparent that the affected community did not want an AI program instead of a trained human therapist. Although the conversation about AI may be pervasive, its use is not and does not have to be. The consequences of rushing to rely solely on AI systems to provide access to medical services, prioritisation for housing, or recruiting and hiring tools for companies can be tragic; systems can exclude and cause harm at scale. Those considering how to use it must recognise that the decision to not use AI is just as powerful as the decision to use AI.

Underlying all these issues are fundamental questions about the quality of datasets powering AI and access to the technology. At its core, AI works by performing mathematical operations on existing data to provide predictions or generate new content. If the data is biased, not representative, or lacks specific languages, then the chatbot responses, the activity recommendations and the images generated from our prompts may have the same biases embedded.

To counter this, the work of researchers and advocates at the intersection of technology, society, race and gender questions should inform our approaches to building responsible technology tools. Safiya Noble has examined the biased search results that appeared when "professional hairstyles" and "unprofessional hairstyles for work" were searched in Google. The former term yielded images of white women; the latter search, images of Black women with natural hairstyles. Increased awareness and advocacy based on the research eventually pushed Google to update its system.

There has also been work to influence AI systems before they are deemed complete and deployed into the world. A team of Carnegie Mellon University and University of Pittsburgh researchers used AI lifecycle comic boarding, or translating the AI reports and tools into easy-to-understand descriptions and images, to engage frontline workers and unhoused individuals in discussions about an AI-based decision support system for homeless services in their area. They were able to absorb how the system worked and provide concrete feedback to the developers. The lesson to be learned is that AI is used by humans, and therefore an approach that combines the technology with societal context is needed to shape it.

Where do we as a society go from here? Whose role is it to balance the design of AI tools with the decision about when to use AI systems, and the need to mitigate harms that AI can inflict? Everyone has a role to play. As previously discussed, technologists and organisational leaders have clear responsibilities in the design and deployment of AI systems. Policymakers have the ability to set guidelines for the development and use of AI not to restrict innovation, but rather to direct it in ways that minimise harm to individuals. Funders and investors can support AI systems that centre humans and encourage timelines that allow for community input and community analysis. All these roles must work together to create more equitable AI systems.

The cross-sector, interdisciplinary approach can yield better outcomes, and there are many promising examples today. Farmer.chat uses Gooey.AI to enable farmers in India, Ethiopia and Kenya to access agricultural knowledge in local languages on WhatsApp. The African Center for Economic Transformation is in the process of developing a multi-country, multi-year programme to undertake regulatory sandbox, or trial, exercises on AI in economic policymaking. Researchers are investigating how to use AI to revitalise Indigenous languages. One such project is working with the Cheyenne language in the western United States.

These examples demonstrate how AI can be used to benefit society in equitable ways. History has proved that inequitable effects of technology compound over time; these disparate effects are not something for the tech guys to fix on their own. Instead, we can collectively improve the quality of AI systems developed and used on and about our lives.

