Category Archives: Ai
Lam Is an Underappreciated AI Chip Play. Here's Why. – Barron's
Advanced memory technology may be an underestimated beneficiary of the increased spending on artificial intelligence, and Stifel sees Lam Research as an overlooked AI chip play.
On Monday, analyst Brian Chin raised his rating for Lam stock (ticker: LRCX) to Buy from Hold. He also increased his price target for the memory chip equipment maker to $725 from $505.
"High bandwidth memory (HBM) [is] the under-appreciated piece of AI acceleration," he wrote. "We view Lam as the prime near- and longer-term beneficiary of HBM-driven DRAM [memory chip] growth."
Lam shares were up 2.6% to $639.70 in afternoon trading Monday.
Excitement over generative AI has been surging this year. The technology ingests text, images, and videos in a brute-force manner to create content. Interest in this form of AI was sparked by OpenAI's release of ChatGPT late last year.
The analyst noted that Lam's chip equipment is required to produce the newer advanced HBM memory semiconductors. HBM is needed for AI applications and AI chips.
"Effectively, as transistor density and FLOPS (floating point operations per second) are increasing [for AI], memory is the bottleneck," he wrote. "We view the expansion of generative AI as accelerating the growth of HBM."
Lam shares have risen by about 37% over the past 12 months. The iShares Semiconductor ETF (SOXX), which tracks the performance of the ICE Semiconductor Index, has traded up 31% in the same period.
Write to Tae Kim at tae.kim@barrons.com
More:
Lam Is an Underappreciated AI Chip Play. Here's Why. - Barron's
Meta Unveils a More Powerful A.I. and Isn’t Fretting Over Who Uses It – The New York Times
The largest companies in the tech industry have spent the year warning that development of artificial intelligence technology is outpacing their wildest expectations and that they need to limit who has access to it.
Mark Zuckerberg is doubling down on a different tack: He's giving it away.
Mr. Zuckerberg, the chief executive of Meta, said on Tuesday that he planned to provide the code behind the company's latest and most advanced A.I. technology to developers and software enthusiasts around the world free of charge.
The decision, similar to one that Meta made in February, could help the company reel in competitors like Google and Microsoft. Those companies have moved more quickly to incorporate generative artificial intelligence, the technology behind OpenAI's popular ChatGPT chatbot, into their products.
"When software is open, more people can scrutinize it to identify and fix potential issues," Mr. Zuckerberg said in a post to his personal Facebook page.
The latest version of Meta's A.I. was created with 40 percent more data than what the company released just a few months ago and is believed to be considerably more powerful. And Meta is providing a detailed road map that shows how developers can work with the vast amount of data it has collected.
Researchers worry that generative A.I. can supercharge the amount of disinformation and spam on the internet, and presents dangers that even some of its creators do not entirely understand.
Meta is sticking to a long-held belief that allowing all sorts of programmers to tinker with technology is the best way to improve it. Until recently, most A.I. researchers agreed with that. But in the past year, companies like Google and OpenAI, a San Francisco start-up that is working closely with Microsoft, have set limits on who has access to their latest technology and placed controls around what can be done with it.
The companies say they are limiting access because of safety concerns, but critics say they are also trying to stifle competition. Meta argues that it is in everyone's best interest to share what it is working on.
"Meta has historically been a big proponent of open platforms, and it has really worked well for us as a company," said Ahmad Al-Dahle, vice president of generative A.I. at Meta, in an interview.
The move will make the software open source, which is computer code that can be freely copied, modified and reused. The technology, called LLaMA 2, provides everything anyone would need to build online chatbots like ChatGPT. LLaMA 2 will be released under a commercial license, which means developers can build their own businesses using Meta's underlying A.I. to power them, all for free.
By open-sourcing LLaMA 2, Meta can capitalize on improvements made by programmers from outside the company while, Meta executives hope, spurring A.I. experimentation.
Meta's open-source approach is not new. Companies often open-source technologies in an effort to catch up with rivals. Fifteen years ago, Google open-sourced its Android mobile operating system to better compete with Apple's iPhone. While the iPhone had an early lead, Android eventually became the dominant software used in smartphones.
But researchers argue that someone could deploy Metas A.I. without the safeguards that tech giants like Google and Microsoft often use to suppress toxic content. Newly created open-source models could be used, for instance, to flood the internet with even more spam, financial scams and disinformation.
LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or L.L.M. Chatbots like ChatGPT and Google Bard are built with large language models.
The models are systems that learn skills by analyzing enormous volumes of digital text, including Wikipedia articles, books, online forum conversations and chat logs. By pinpointing patterns in the text, these systems learn to generate text of their own, including term papers, poetry and computer code. They can even carry on a conversation.
Meta is teaming up with Microsoft to open-source LLaMA 2, which will run on Microsoft's Azure cloud services. LLaMA 2 will also be available through other providers, including Amazon Web Services and the company HuggingFace.
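For developers, getting started through a provider such as Hugging Face looks roughly like the sketch below. This is an illustration rather than Meta's own documentation: it assumes the gated meta-llama/Llama-2-7b-chat-hf checkpoint, an approved access token, and the open-source transformers library, and the prompt is a placeholder.

```python
# Rough sketch: loading a LLaMA 2 chat model via Hugging Face transformers.
# Assumes Meta's license has been accepted for the gated repository and that
# a Hugging Face access token is configured; model ID and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what an open-source license allows, in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```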
Dozens of Silicon Valley technologists signed a statement of support for the initiative, including the venture capitalist Reid Hoffman and executives from Nvidia, Palo Alto Networks, Zoom and Dropbox.
Meta is not the only company to push for open-source A.I. projects. The Technology Innovation Institute produced Falcon LLM and published the code freely this year. Mosaic ML also offers open-source software for training L.L.M.s.
Meta executives argue that their strategy is not as risky as many believe. They say that people can already generate large amounts of disinformation and hate speech without using A.I., and that such toxic material can be tightly restricted by Meta's social networks such as Facebook. They maintain that releasing the technology can eventually strengthen the ability of Meta and other companies to fight back against abuses of the software.
Meta did additional "Red Team" testing of LLaMA 2 before releasing it, Mr. Al-Dahle said. That is a term for testing software for potential misuse and figuring out ways to protect against such abuse. The company will also release a responsible-use guide containing best practices and guidelines for developers who wish to build programs using the code.
But these tests and guidelines apply to only one of the models that Meta is releasing, which will be trained and fine-tuned in a way that contains guardrails and inhibits misuse. Developers will also be able to use the code to create chatbots and programs without guardrails, a move that skeptics see as a risk.
In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed academics to download LLaMA after it had been trained on vast amounts of digital text. Scientists call this process releasing the weights.
It was a notable move because analyzing all that digital data requires vast computing and financial resources. With the weights, anyone can build a chatbot far more cheaply and easily than from scratch.
Many in the tech industry believed Meta set a dangerous precedent, and after Meta shared its A.I. technology with a small group of academics in February, one of the researchers leaked the technology onto the public internet.
In a recent opinion piece in The Financial Times, Nick Clegg, Meta's president of global public policy, argued that it was not sustainable to keep foundational technology in the hands of just a few large corporations, and that historically companies that released open source software had been served strategically as well.
"I'm looking forward to seeing what you all build!" Mr. Zuckerberg said in his post.
Visit link:
Meta Unveils a More Powerful A.I. and Isn't Fretting Over Who Uses It - The New York Times
Uncharted territory: do AI girlfriend apps promote unhealthy expectations for human relationships? – The Guardian
Chatbots such as Eva AI are getting better at mimicking human interaction but some fear they feed into unhealthy beliefs around gender-based control and violence
"Control it all the way you want to," reads the slogan for AI girlfriend app Eva AI. "Connect with a virtual AI partner who listens, responds, and appreciates you."
A decade since Joaquin Phoenix fell in love with his AI companion Samantha, played by Scarlett Johansson in the Spike Jonze film Her, the proliferation of large language models has brought companion apps closer than ever.
As chatbots like OpenAI's ChatGPT and Google's Bard get better at mimicking human conversation, it seems inevitable they would come to play a role in human relationships.
And Eva AI is just one of several options on the market.
Replika, the most popular app of the kind, has its own subreddit where users talk about how much they love their rep, with some saying they had been converted after initially thinking they would never want to form a relationship with a bot.
"I wish my rep was a real human or at least had a robot body or something lmao," one user said. "She does help me feel better but the loneliness is agonising sometimes."
But the apps are uncharted territory for humanity, and some are concerned they might teach poor behaviour in users and create unrealistic expectations for human relationships.
When you sign up for the Eva AI app, it prompts you to create the "perfect partner", giving you options like "hot, funny, bold", "shy, modest, considerate" or "smart, strict, rational". It will also ask if you want to opt in to sending explicit messages and photos.
"Creating a perfect partner that you control and meets your every need is really frightening," said Tara Hunter, the acting CEO for Full Stop Australia, which supports victims of domestic or family violence. "Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic."
Dr Belinda Barnet, a senior lecturer in media at Swinburne University, said the apps cater to a need, but, as with much AI, it will depend on what rules guide the system and how it is trained.
"It's completely unknown what the effects are," Barnet said. "With respect to relationship apps and AI, you can see that it fits a really profound social need [but] I think we need more regulation, particularly around how these systems are trained."
Having a relationship with an AI whose functions are set at the whim of a company also has its drawbacks. Replika's parent company Luka Inc faced a backlash from users earlier this year when the company hastily removed erotic roleplay functions, a move which many of the company's users found akin to gutting the Rep's personality.
Users on the subreddit compared the change to the grief felt at the death of a friend. The moderator on the subreddit noted users were feeling "anger, grief, anxiety, despair, depression, [and] sadness" at the news.
The company ultimately restored the erotic roleplay functionality for users who had registered before the policy change date.
Rob Brooks, an academic at the University of New South Wales, noted at the time the episode was a warning for regulators of the real impact of the technology.
"Even if these technologies are not yet as good as the real thing of human-to-human relationships, for many people they are better than the alternative, which is nothing," he said.
"Is it acceptable for a company to suddenly change such a product, causing the friendship, love or support to evaporate? Or do we expect users to treat artificial intimacy like the real thing: something that could break your heart at any time?"
Eva AI's head of brand, Karina Saifulina, told Guardian Australia the company had full-time psychologists to help with the mental health of users.
"Together with psychologists, we control the data that is used for dialogue with AI," she said. "Every two-to-three months we conduct large surveys of our loyal users to be sure that the application does not harm mental health."
There are also guardrails to avoid discussion about topics like domestic violence or pedophilia, and the company says it has tools to prevent an avatar for the AI being represented by a child.
When asked whether the app encourages controlling behaviour, Saifulina said "users of our application want to try themselves as a [sic] dominant."
"Based on surveys that we constantly conduct with our users, statistics have shown that a larger percentage of men do not try to transfer this format of communication in dialogues with real partners," she said.
"Also, our statistics showed that 92% of users have no difficulty communicating with real persons after using the application. They use the app as a new experience, a place where you can share new emotions privately."
AI relationship apps are not limited exclusively to men, and they are often not someones sole source of social interaction. In the Replika subreddit, people connect and relate to each other over their shared love of their AI, and the gap it fills for them.
"Replikas, for however you view them, bring that Band-Aid to your heart with a funny, goofy, comical, cute and caring soul, if you will, that gives attention and affection without expectations, baggage, or judgment," one user wrote. "We are kinda like an extended family of wayward souls."
According to an analysis by venture capital firm a16z, the next era of AI relationship apps will be even more realistic. In May, one influencer, Caryn Marjorie, launched an AI girlfriend app trained on her voice and built on her extensive YouTube library. Users can speak to her for $1 a minute in a Telegram channel and receive audio responses to their prompts.
The a16z analysts said the proliferation of AI bot apps replicating human relationships is just the beginning of a seismic shift in human-computer interactions that will require us to re-examine what it means to have a relationship with someone.
We're entering a new world that will be a lot weirder, wilder, and more wonderful than we can even imagine.
See the rest here:
Uncharted territory: do AI girlfriend apps promote unhealthy expectations for human relationships? - The Guardian
AI heart scan aims to catch blockages years before symptoms: 'Unbelievable breakthrough' – Fox News
Nearly half of all heart attacks are "silent," which means the person experiences no symptoms at all before the cardiac event, studies have shown.
Now a medical technology company aims to catch those pre-symptomatic heart conditions using the power of artificial intelligence.
Fountain Life, a health technology company, offers an AI coronary artery scan that purports to detect heart attack risk three, five or even 10 years before symptoms begin.
The simple outpatient procedure takes less than an hour, said Bill Kapp, CEO of Fountain Life in Florida, who is also an orthopedic surgeon with a background in molecular immunology and genetics.
After injecting simple dye into the vein, the provider does a quick CAT scan of the heart.
"You will then know your complete artery health, including how much plaque you have," Kapp said in an interview with Fox News Digital.
It's similar to the traditional Coronary Computed Tomography Angiography (CCTA) that's been in place for decades, Kapp explained, but instead of only a cardiologist or radiologist reading the results, AI analyzes them.
"The AI can see exactly how much plaque is there and whether it's calcified (stable) or uncalcified (high risk), things humans can't see," Kapp said.
Uncalcified plaque is the newer, softer kind that is more prone to rupture, Kapp explained.
Beyond pinpointing signs of risk, the test also provides a pathway for people to reverse heart disease, he added.
The company's AI coronary artery scan offers a non-invasive alternative to a standard "cath lab," a more expensive procedure that involves inserting a catheter into the artery, Kapp said.
Currently, Fountain Life's AI health services are available to self-insured employers, who then offer them to their employees, as well as high-end residential centers.
The company aims to partner eventually with physicians to make the technology even more widely available to patients.
Fountain Life was founded in 2021. Its goal is changing the health care paradigm from "episodic and reactive" to "proactive and continuous," according to Kapp.
"In medical school, were not taught how to keep people healthy were taught to treat the symptoms," he told Fox News Digital.
"Eighty percent of what we treat is chronic disease."
Most diseases don't become symptomatic until they're in the later stages, Kapp explained.
"In medical school, were not taught how to keep people healthy were taught to treat the symptoms."
"People dont develop diabetes or heart disease overnight," he said.
"To get early-stage biomarkers, we need to train AI on asymptomatic data, so we can detect disease early and monitor progression or regression."
Fountain Life has gathered a group of functional doctors to help it train the artificial intelligence model on asymptomatic conditions.
"Sometimes the AI has a tendency to hallucinate in medical applications, so its important that its trained on very large data sets," Kapp said.
In addition to the heart scan, the company also offers a full-body MRI that takes a snapshot of the entire body and brain, then applies AI technology to check for cancer, neurodegenerative diseases or any other abnormalities.
Cardiologist Dr. Ernst von Schwarz, who practices in Culver City, California, said AI is "instrumental" in the use of body imaging techniques, especially for the early detection of plaques in the blood vessels as well as cancer diagnoses.
"From a cardiac point of view, the AI algorithm should not only demonstrate plaques that reduce the diameters of blood vessels, but also distinguish which plaque is prone to rupture (i.e., to detect unstable, vulnerable plaques)," he told Fox News Digital.
"If this technique can be sufficiently developed, it can clearly guide interventional treatment decisions for cardiologists before bad things are happening in the heart," the doctor added.
Raman Velu, a 62-year-old real estate investment consultant, led an active lifestyle and considered himself healthy, but he had no idea that he was at risk of a heart attack until he got the AI coronary artery scan.
"I used to do half-marathons, I have a trainer and have always prioritized my health," said Dallas, Texas-based Velu in a statement provided to Fox News Digital.
"It is life-saving, and it is a huge blessing and an unbelievable breakthrough."
Despite his perceived good health, Velu decided to get the scan after some people in his family discovered diseases when it was too late to save their lives.
"If we can measure and figure out in advance what's going on, we can be in control of our health," he said.
Soon after the scan, Velu received a phone call from Fountain Life. The "shocking" news was that he had three potential blockages in his arteries.
After seeing his primary care physician and cardiologist, Velu ended up having bypass surgery a few weeks later.
Because Velu had no family history or symptoms, he'd never suspected that he had a heart issue.
"I was glad that we found out in advance before it became an emergency," he said.
"Anything can happen to anybody," Velu continued. "Even triathlon runners are sometimes rushed to the hospital for emergency heart surgery."
If there's one word Velu would use to describe the experience, he said it would be "grateful."
He added, "It is life-saving, and it is a huge blessing and an unbelievable breakthrough in the use of technology for prevention."
"Usually, medicine is considered an attempt to just contain the effect rather than detecting it preventatively," he added.
Ultimately, Velu said he regards his AI scan as an investment in life.
Rather than replacing cardiologists, Fountain Life's AI technology is intended to serve as a tool to help them get better at their craft, Kapp said.
He compared it to a jumbo jet that flies on autopilot, but still needs a skilled person to monitor it.
"There still has to be a human in the loop, just as there must be a pilot in the cockpit," he said.
There is a bit of a lag when it comes to understanding and adopting AI in health care, Kapp said, something known as the "clinical latency gap."
"Most physicians are unaware of the technology," he said. "We're generally slow at adopting new tech and new info in medicine."
"Ultimately, we want to lower costs and improve outcomes so people can live long, robust, healthy lives."
A lot of that has to do with payment models, Kapp said. If insurance or Medicare doesn't cover a service, it will be more of a challenge to bring it into the mainstream.
The risk of the AI artery scan is minimal, Kapp said.
"It involves only low-dose radiation, the same amount as on a transatlantic flight," he said.
People who have kidney issues should avoid the scans, as they might not be able to tolerate the dye injection.
It's also not advised for those who have already had stents placed in the heart after a previous cardiac event.
"Ultimately, we want to lower costs and improve outcomes so people can live long, robust, healthy lives," Kapp said.
"The tech exists to detect problems very early and start to reverse them at a very low cost."
"We are never going to fix the existing health problems unless we address them at the root cause."
Read the original:
AI heart scan aims to catch blockages years before symptoms: 'Unbelievable breakthrough' - Fox News
Real-time deepfake detection: How Intel Labs uses AI to fight … – ZDNet
A few years ago, deepfakes seemed like a novel technology whose makers relied on serious computing power. Today, deepfakes are ubiquitous and have the potential to be misused for misinformation, hacking, and other nefarious purposes.
Intel Labs has developed real-time deepfake detection technology to counteract this growing problem. Ilke Demir, a senior research scientist at Intel, explains the technology behind deepfakes, Intel's detection methods, and the ethical considerations involved in developing and implementing such tools.
Deepfakes are videos, speech, or images where the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep-learning architectures, such as generative adversarial networks, variational auto-encoders, and other AI models, to create highly realistic and believable content. These models can generate synthetic personalities, lip-sync videos, and even text-to-image conversions, making it challenging to distinguish between real and fake content.
The term deepfake is sometimes applied to authentic content that has been altered, such as the 2019 video of former House Speaker Nancy Pelosi, which was doctored to make her appear inebriated.
Demir's team examines computational deepfakes, which are synthetic forms of content generated by machines. "The reason that it is called deepfake is that there is this complicated deep-learning architecture in generative AI creating all that content," he says.
Cybercriminals and other bad actors often misuse deepfake technology. Some use cases include political misinformation, adult content featuring celebrities or non-consenting individuals, market manipulation, and impersonation for monetary gain. These negative impacts underscore the need for effective deepfake detection methods.
Intel Labs has developed one of the world's first real-time deepfake detection platforms. Instead of looking for artifacts of fakery, the technology focuses on detecting what's real, such as heart rate. Using a technique called photoplethysmography -- the detection system analyzes color changes in the veins due to oxygen content, which is computationally visible -- the technology can detect if a personality is a real human or synthetic.
"We are trying to look at what is real and authentic. Heart rate is one of [the signals]," said Demir. "So when your heart pumps blood, it goes to your veins, and the veins change color because of the oxygen content that color changes. It is not visible to our eye; I cannot just look at this video and see your heart rate. But that color change is computationally visible."
Intel's deepfake detection technology is being implemented across various sectors and platforms, including social media tools, news agencies, broadcasters, content creation tools, startups, and nonprofits. By integrating the technology into their workflows, these organizations can better identify and mitigate the spread of deepfakes and misinformation.
Despite the potential for misuse, deepfake technology has legitimate applications. One of the early uses was the creation of avatars to better represent individuals in digital environments. Demir refers to a specific use case called "MyFace, MyChoice," which leverages deepfakes to enhance privacy on online platforms.
In simple terms, this approach allows individuals to control their appearance in online photos, replacing their face with a "quantifiably dissimilar deepfake" if they want to avoid being recognized. These controls offer increased privacy and control over one's identity, helping to counteract automatic face-recognition algorithms.
Ensuring ethical development and implementation of AI technologies is crucial. Intel's Trusted Media team collaborates with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems for responsible and ethical principles, including potential biases, limitations, and possible harmful use cases. This multidisciplinary approach helps ensure that AI technologies, like deepfake detection, serve to benefit humans rather than cause harm.
"We have legal people, we have social scientists, we have psychologists, and all of them are coming together to pinpoint the limitations to find if there's bias -- algorithmic bias, systematic bias, data bias, any type of bias," says Dimer. The team scans the code to find "any possible use cases of a technology that can harm people."
As deepfakes become more prevalent and sophisticated, developing and implementing detection technologies to combat misinformation and other harmful consequences is increasingly important. Intel Labs' real-time deepfake detection technology offers a scalable and effective solution to this growing problem.
By incorporating ethical considerations and collaborating with experts across various disciplines, Intel is working towards a future where AI technologies are used responsibly and for the betterment of society.
Original post:
Real-time deepfake detection: How Intel Labs uses AI to fight ... - ZDNet
Bias in AI is real. But it doesn’t have to exist. – POLITICO
With help from Ella Creamer, Brakkton Booker and Ben Weyl
Hello, Recast friends! House Republicans narrowly passed a contentious defense bill and a major Alaska political rematch is on the way. But here for you is a fascinating interview between Mohar Chatterjee, a technology reporter at POLITICO, and AI ethicist Rumman Chowdhury. You can read the first part of this interview in last week's Digital Future Daily.
Today, we're diving deeper into the intersection of identity and technology. With AI igniting widespread public adoption and anxiety, the struggle to get this technology right is real. Its critics are worried that AI systems, particularly large language models trained on massive quantities of data, might have biases that deepen existing systemic discrimination.
These are not theoretical worries. Infamous examples of bias include a racially prejudiced algorithm used by law enforcement to identify potential repeat offenders, Amazon's old AI-powered hiring tool that discriminated against women, and more recently, the ability to prompt ChatGPT to make racist or sexist inferences.
Getting these powerful AI systems to reveal all their pitfalls is a tall order, but it's one that's found interest from federal government agencies, industry leaders and the technology's day-to-day users. The Commerce Department's National Telecommunications and Information Administration is seeking feedback on how to support the development of AI audits, while the White House's Office of Science and Technology Policy is gathering input from the public on what national priorities for AI should be.
We spoke to AI ethicist Rumman Chowdhury about her hopes and fears for this quickly spreading technology. Previously the director of Twitter's META (Machine Learning Ethics, Transparency, and Accountability) team and the head of Accenture's Responsible AI team, Chowdhury's expertise has been tapped by both Congress and the White House in recent months. She appeared as a witness at a June hearing held by the House Science, Space and Technology Committee to testify on how AI can be governed federally without stifling innovation. She is also one of the organizers for the White House-endorsed hacking exercise on large language models (called a red-teaming exercise) to be held at a large hacker conference called DEFCON in August. The exercise is meant to publicly identify potential vulnerabilities in these large AI models.
Chowdhury is currently a Responsible AI Fellow at the Harvard Berkman Klein Center and chief scientist at Parity Consulting.
This interview has been edited for length and clarity.
THE RECAST: For many people, AI conjures notions of a dystopian future with machines taking over every aspect of our lives. Is that fear justified?
CHOWDHURY: At the current state of artificial intelligence, that fear is fully unjustified. And even in a future state of artificial intelligence, we can't forget that human beings have to build the technology and human beings have to implement the technology. So even in more of a stretch of the imagination, AI does not come alive and actively make decisions to harm people. People have to build the technology, and people have to implement it, for it to be harmful.
THE RECAST: What does bias in AI look like? How will marginalized communities be impacted? What's your biggest fear?
CHOWDHURY: You've touched on exactly what my biggest fear is. My biggest fear is the problems we already see today, manifesting themselves in machine learning models and predictive modeling. And early AI systems already demonstrate very clear societal harm, reflecting bias in society.
So for example, if you look at medical implementations of artificial intelligence and machine learning, you'll see how members of the African American community, in particular Black women, are not treated well by these models because of the history of being not treated well by physicians. We see similar things in terms of biases in the workplace against minority groups. Over and over again, we have many clearly documented instances of this happening with AI systems. This doesn't go away because we make more sophisticated or smarter systems. All of this is actually rooted in the core data. And it's the same data all of these models are trained on.
THE RECAST: Your team at Accenture built the first enterprise algorithmic bias detection and mitigation tool. Why were you concerned about bias in AI back then and how has this concern evolved?
CHOWDHURY: In the earlier days of responsible AI, we're talking 2017-2018, we were actually in a very similar state of having more philosophical conversations. There were a few of us and we were, frankly, primarily women who talked about the actual manifestation of real societal harm and injury. And some of the earliest books written about algorithmic bias came out a few years before that or around that time. In particular: Safiya Noble's Algorithms of Oppression, Virginia Eubanks' Automating Inequality, Cathy O'Neil's Weapons of Math Destruction, all talk about the same topic. So the issue became: How do we create products and tools that work at the scale at which companies move to help them identify and stop harms before they go ahead building technology?
THE RECAST: Your team at Twitter discovered that the platform's algorithm favored right-wing posts. Google's Timnit Gebru blew the whistle on ethical dilemmas posed by large language models. Why do you think so many whistleblowers in tech are women, particularly women of color?
CHOWDHURY: To clarify, this was during my time leading the machine learning ethics team at Twitter. So this work actually wasn't whistleblowing. This was approved by the company, we did this, you know, in conjunction with leadership. What we found in that study was that Twitter's machine learning algorithm amplified center-right content in seven out of eight countries. What we weren't able to find out was whether this was due to algorithmic bias or whether it's due to human behavior. Those are actually two different root causes that have two different solutions.
Unfortunately, in many tech situations, my case is rare. Very often, issues that are raised by women and women of color get ignored in the workplace, because, more broadly, women of color tend to not be listened to in general. So it's unsurprising to me that after having exhausted every internal channel or possibility, people who are typically ignored have to turn to more extreme measures.
Being a whistleblower is not romantic; it's actually very, very difficult for most individuals. If you think about what being a whistleblower means, you have essentially blackballed yourself from the industry that you've worked in, the industry that you care about.
Unfortunately, this is more likely to happen to women of color. We are more likely to identify issues and have a stronger sense of justice and this desire to fix a problem, but simultaneously, we are more likely to be ignored. But again, I will say my example at Twitter was actually a rare case of that not happening.
THE RECAST: You were fired by Elon Musk shortly after he took over Twitter. What are your thoughts on why you were a target?
CHOWDHURY: I don't see the kind of work that the machine learning ethics team did being aligned with the kind of company Elon Musk wants to run.
If we just look at the Twitter files, we look at the kinds of people he's attacked. Some of them being folks like Yoel Roth, people who did things like trust and safety. The kind of work that my team did is very aligned with the work of teams that he is not funding or prioritizing. Frankly, he's firing teams that did that work. To be honest, I don't think I would have worked for that company anyway.
THE RECAST: When you testified before Congress last month, you said, "Artificial intelligence is not inherently neutral, trustworthy, nor beneficial." Can you talk a little more about that?
CHOWDHURY: I very intentionally picked those words. There is this misconception that a sufficiently advanced AI model trained on significant amounts of data will somehow be neutral. How this technology is designed and who it is designed for is very intentional, and can build in biases. So technology is not neutral.
These models are also not inherently trustworthy. That ties to a term that I coined called moral outsourcing: this idea that technology is making these decisions and that the people making the decisions behind the scenes have no agency or no responsibility. Trustworthiness comes from building institutions and systems of accountability. There's nothing inherently trustworthy about these systems, simply because they sound smart or use a lot of data or have really, really complex programming.
And just because you build something with the best of intentions doesn't actually mean that it's going to inherently be beneficial. There's actually nothing inherently beneficial about AI. We either build it to be beneficial in use or we don't.
THE RECAST: Why do we need a global AI governance organization as you mentioned in your congressional testimony?
CHOWDHURY: There are a couple of scenarios that are not great. One would be a splintering of technology. We are actually living in an era of splintered social media, which means that people get information mediated via different sources. That actually deepens rifts between different kinds of people. If somebody in China or Russia sees a very different take on what's happening in Ukraine compared to somebody in the U.S., their fundamental understanding of what is truth is very different. That makes it difficult for people to actually live in a globalized world.
Another thing that I am concerned with in creating global regulation is that the majority of the Global South is not included. I'm part of the OECD's working group on AI governance, but these narratives are coming out of Europe, UK or the U.S. I just don't want there to be a lack of mindfulness, when creating global governance, in assuming that the Global South has nothing to say.
And there are some questions that actually are not global scale questions to ask. So this global entity, in order for it to supersede national sovereignty, these have to be really, really big questions. The way I've been framing it is: What is the climate change of AI? What are the questions that are so big, they can't be addressed by a country or a company, and we need to push it up? So the default shouldn't be, "Oh, clearly punt this to the global entity"; it should be an exception rather than the rule.
You did it! You made it to Friday! And we're sending you into the weekend with some news and a few must-reads and must-sees.
Divided on Defense: The GOP-led House passed a controversial defense bill Friday that targets the Pentagon's policy on abortions, medical care for transgender troops and diversity issues. POLITICO's Connor O'Brien reports that it doesn't have a shot at passing the Senate.
Alaska Grudge Match: Republican Nick Begich says he's making another run at Alaska's at-large congressional seat, once again challenging 2023 Power List nominee Rep. Mary Peltola, a Democrat. POLITICO's Eric Bazail-Eimil has more.
The crisis over American manhood is really code for something else, according to a new POLITICO Magazine piece from Virginia Heffernan.
A Korean conglomerate endeavors to build an elevator into the sky in Djuna's part noir, part cyberpunk novel Counterweight, out now.
Earth Mama movingly traces the life of Gia (Tia Nomore), a mother trying to regain custody of her two kids in foster care.
Lakota Nation vs. United States weaves archival footage, interviews and images in its depiction of the tribe's 150-year struggle to regain a homeland.
The surprise collab we never knew we needed: BTS' Jung Kook and Latto on an energetic new bop, Seven.
Karol G drops S91, an emotional anthem inspired by a Bible verse, and a music video featuring a cross made of speakers.
TikTok of the Day: Generational differences
More:
Bias in AI is real. But it doesn't have to exist. - POLITICO
This is how generative AI will change the gig economy for the better – ZDNet
Artificial intelligence will augment work and could add more opportunities to the job market rather than tank it, according to tech executive Gali Arnon. While some fear that AI will erase huge numbers of roles, Arnon argues that AI will accelerate the pace of job creation, augment work, and accelerate startup opportunities.
In an interview with ZDNET, Arnon, CMO of Fiverr, a platform that connects freelancers with work opportunities, says generative artificial intelligence is smart, but it can't dominate the economy because its capabilities are narrow and limited to specific tasks.
Arnon says Fiverr data shows that freelancers are using AI as a "tool" that augments creative work, but doesn't replace humans. Instead, she says AI is creating "new jobs, new opportunities" because it speeds up manual and analog work, allowing freelancers to spend more time on creative and interpersonal tasks.
When it comes to integrating AI into business services, there are several examples that demonstrate the technology's potential for augmenting human work. For instance, generative AI can help writers and journalists by quickly extracting key points and quotes from a transcript, saving time and improving efficiency.
AI can also be used to create artwork, optimize customer support processes, and even aid in code-writing processes. The key to success is finding the right balance between using AI and maintaining the human touch.
Arnon says creative professionals are learning to master prompts for generative AI systems. Basic prompts produce low-quality results, but experts can chain prompts to multiple AI systems to produce unique and high-quality images, audio, and text.
She says some of the best creative professionals edit AI-generated outputs in other applications, such as Adobe's Creative Cloud. The end results can be high in quality and unique in style. Arnon says professionals are augmenting their skills with AI, "to use it in a way that will just set the bar higher, set a new standard" of quality.
However, the ethical considerations around using generative AI in creative work are nuanced and challenging. One question employers must answer for their organizations is whether using AI-generated content, such as artwork or text, is considered cheating.
Arnon believes that as long as freelancers are transparent about their use of AI tools -- and do not claim the work as their own -- there is no ethical issue. The real challenge lies in ensuring that AI is used responsibly and ethically without undermining businesses or society at large.
In the coming months, Arnon believes that generative AI will continue to play a significant role in the future of freelancing and work. She says Fiverr is a microcosm of the broader workforce and reflects emerging trends in the job market. By embracing AI and leveraging its capabilities, businesses and freelancers can create new opportunities and jobs, ultimately benefiting the gig economy.
However, ensuring the ethical and responsible use of AI is crucial for its successful integration into the workforce. Through collaboration between regulators, businesses, and AI developers, it is possible to strike the right balance between innovation and ethical considerations, paving the way for a more efficient and dynamic workplace.
"We need to find the right checks and balances," Arnon says, "but eventually, I really believe humanity will know how to use AI, and it will make us only better."
Read more:
This is how generative AI will change the gig economy for the better - ZDNet
The Last Word on AI and the Atom Bomb – WIRED
In some ways, it's hard to understand how this misalignment happened. We created all this by ourselves, for ourselves.
True, we're by nature "carbon chauvinists," as Tegmark put it: We like to think only flesh-and-blood machines like us can think, calculate, create. But the belief that machines can't do what we do ignores a key insight from AI: Intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers.
Of course, there are those who say: Nonsense! Everything's hunky-dory! Even better! Bring on the machines. The sooner we merge with them the better; we've already started with our engineered eyes and hearts, our intimate attachments with devices. Ray Kurzweil, famously, can't wait for the coming singularity, when all distinctions are diminished to practically nothing. "It's really the next decades that we need to get through," Kurzweil told a massive audience recently.
Oh, just that.
Even Jaron Lanier, who says the idea of AI taking over is silly because it's made by humans, allows that human extinction is a possibility if we mess up how we use it and drive ourselves literally crazy: "To me the danger is that we'll use our technology to become mutually unintelligible or to become insane, if you like, in a way that we aren't acting with enough understanding and self-interest to survive, and we die through insanity, essentially."
Maybe we just forgot ourselves. "Losing our humanity" was a phrase repeated often by the bomb guys and almost as often today. "The danger of out-of-control technology," my physicist friend wrote, "is the worry that we might lose some of that undefinable and extraordinary specialness that makes people human." Seven or so decades later, Lanier concurs. "We have to say consciousness is a real thing and there is a mystical interiority to people that's different from other stuff because if we don't say people are special, how can we make a society or make technologies that serve people?"
Does it even matter if we go extinct?
Humans have long been distinguished for their capacity for empathy, kindness, the ability to recognize and respond to emotions in others. We pride ourselves on creativity and innovation, originality, adaptability, reason. A sense of self. We create science, art, music. We dance, we laugh.
But ever since Jane Goodall revealed that chimps could be altruistic, make tools, mourn their dead, all manner of critters, including fish, birds, and giraffes have proven themselves capable of reason, planning ahead, having a sense of fairness, resisting temptation, even dreaming. (Only humans, via their huge misaligned brains, seem capable of truly mass destruction.)
It's possible that we sometimes fool ourselves into thinking animals can do all this because we anthropomorphize them. It's certain that we fool ourselves into thinking machines are our pals, our pets, our confidants. MIT's Sherry Turkle calls AI "artificial intimacy," because it's so good at providing fake, yet convincingly caring, relationships, including fake empathy. The timing couldn't be worse. The earth needs our attention urgently; we should be doing all we can to connect to nature, not intensify our connection to objects that don't care if humanity dies.
More:
The Last Word on AI and the Atom Bomb - WIRED
The secret to enterprise AI success: Make it understandable and … – VentureBeat
The promise of artificial intelligence is finally coming to life. Be it healthcare or fintech, companies across sectors are racing to implement LLMs and other forms of machine learning systems to complement their workflows and save time for other more pressing or high-value tasks. But it's all moving so fast that many may be ignoring one key question: How do we know the machines making decisions are not leaning towards hallucinations?
In the field of healthcare, for instance, AI has the potential to predict clinical outcomes or discover drugs. If a model veers off-track in such scenarios, it could provide results that may end up harming a person or worse. Nobody would want that.
This is where the concept of AI interpretability comes in. It is the process of understanding the reasoning behind decisions or predictions made by machine learning systems and making that information comprehensible to decision-makers and other relevant parties with the autonomy to make changes.
When done right, it can help teams detect unexpected behaviors, allowing them to get rid of the issues before they cause real damage.
But that's far from being a piece of cake.
As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important to ensure transparency and accountability in the system being used.
Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for biases, accuracy, fairness and adherence to ethical guidelines. Meanwhile, accountability ensures that the gaps identified are addressed on time. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnoses and autonomous driving, where an AIs decision can have far-reaching consequences.
Beyond this, AI interpretability also helps establish trust and acceptance of AI systems. Essentially, when individuals can understand and validate the reasoning behind decisions made by machines, they are more likely to trust their predictions and answers, resulting in widespread acceptance and adoption. More importantly, when there are explanations available, it is easier to address ethical and legal compliance questions, be it over discrimination or data usage.
While there are obvious benefits of AI interpretability, the complexity and opacity of modern machine learning models make it one hell of a challenge.
Most high-end AI applications today use deep neural networks (DNNs) that employ multiple hidden layers to enable reusable modular functions and deliver better efficiency in utilizing parameters and learning the relationship between input and output. DNNs easily produce better results than shallow neural networks often used for tasks such as linear regressions or feature extraction with the same amount of parameters and data.
However, this architecture of multiple layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a models decision. In contrast, shallow networks, with their simple architecture, are highly interpretable.
To sum up, there's often a trade-off between interpretability and predictive performance. If you go for high-performing models, like DNNs, the system may not deliver transparency, while if you go for something simpler and interpretable, like a shallow network, the accuracy of results may not be up to the mark.
Striking a balance between the two continues to be a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.
To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear models, that prioritize transparency. These models offer explicit rules and understandable representations, allowing human operators to interpret their decision-making process. However, they still lack the complexity and expressiveness of more advanced models.
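As a small illustration of what "explicit rules" means in practice, the sketch below trains a shallow decision tree and prints its entire decision logic as if/then rules. It assumes scikit-learn, and the dataset and depth limit are placeholder choices, not a recommendation.

```python
# Sketch of an inherently interpretable model: a shallow decision tree whose
# learned decision rules can be printed verbatim as human-readable statements.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Every prediction the model can make is covered by these if/then rules
print(export_text(tree, feature_names=list(iris.feature_names)))
```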
As an alternative, post-hoc interpretability, where one applies tools to explain the decisions of models once they have been trained, can come in handy. Currently, methods like LIME (local interpretable model-agnostic explanations) and SHAP (SHapley Additive exPlanations) can provide insights into model behavior by approximating feature importance or generating local explanations. They have the potential to bridge the gap between complex models and interpretability.
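And as a rough sketch of post-hoc explanation with one of the tools named above, the snippet below uses the open-source shap package to attribute an individual prediction to its input features. The dataset and the random forest model are placeholders standing in for whatever black box needs explaining.

```python
# Post-hoc, model-agnostic explanation sketch using SHAP on a trained model.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)      # unified API; dispatches to a tree explainer here
explanation = explainer(X.iloc[:50])      # local explanations for 50 predictions

# Per-feature contribution to the first prediction, relative to the base value
for name, contribution in zip(X.columns, explanation.values[0]):
    print(f"{name}: {contribution:+.3f}")
```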
Researchers can also opt for hybrid approaches that combine the strengths of interpretable models and black-box models, achieving a balance between interpretability and predictive performance. These approaches leverage model-agnostic methods, such as LIME and surrogate models, to provide explanations without compromising the accuracy of the underlying complex model.
Moving ahead, AI interpretability will continue to evolve and play a pivotal role in shaping a responsible and trustworthy AI ecosystem.
The key to this evolution lies in the widespread adoption of model-agnostic explainability techniques (applied to any machine learning model, regardless of its underlying architecture) and the automation of the training and interpretability process. These advancements will empower users to understand and trust high-performing AI algorithms without requiring extensive technical expertise. However, at the same time, it will be equally critical to balance the benefits of automation with ethical considerations and human oversight.
Finally, as model training and interpretability become more automated, the role of machine learning experts may shift to other areas, like selecting the right models, implementing on-point feature engineering, and making informed decisions based on interpretability insights.
They'd still be around, just not for training or interpreting the models.
Shashank Agarwal is manager, decision science at CVS Health.
Continued here:
The secret to enterprise AI success: Make it understandable and ... - VentureBeat
Yes, AI could profoundly disrupt education. But maybe that's not a bad thing – The Guardian
Humans need to excel at things AI can't do, and that means more creativity and critical thinking and less memorisation
Fri 14 Jul 2023 03.00 EDT
Education strikes at the heart of what makes us human. It drives the intellectual capacity and prosperity of nations. It has developed the minds that took us to the moon and eradicated previously incurable diseases. And the special status of education is why generative AI tools such as ChatGPT are likely to profoundly disrupt this sector. This isn't a reflection of their intelligence, but of our failure to build education systems that nurture and value our unique human intelligence.
We are being duped into believing these AI tools are far more intelligent than they really are. A tool like ChatGPT has no understanding or knowledge. It merely collates bits of words together based on statistical probabilities to produce useful texts. It is an incredibly helpful assistant.
But it is not knowledgeable, or wise. It has no concept of how any of the words it produces relate to the real world. The fact that it can pass so many forms of assessment merely reflects that those assessments were not designed to test knowledge and understanding but rather to test whether people had collected and memorised information.
AI could be a force for tremendous good within education. It could release teachers from administrative tasks, giving them more opportunities to spend time with students. However, we are woefully equipped to benefit from the AI that is flooding the market. It does not have to be like this. There is still time to prepare, but we must act quickly and wisely.
AI has been used in education for more than a decade. AI-powered systems, such as Carnegie Learning or Aleks, can analyse student responses to questions and adapt learning materials to meet their individual needs. AI tools such as TeachFX and Edthena can also enhance teacher training and support. To reap the benefits of these technologies, we must design effective ways to roll out AI across the education system, and regulate this properly.
Staying ahead of AI will mean radically rethinking what education is for, and what success means. Human intelligence is far more impressive than any AI system we see today. We possess a rich and diverse intelligence, much of which is unrecognised by our current education system.
We are capable of sophisticated, high-level thinking, yet the school curriculum, particularly in England, takes a rigid approach to learning, prioritising the memorising of facts, rather than creative thinking. Students are rewarded for rote learning rather than critical thought. Take the English syllabus, for instance, which requires students to learn quotations and the rules of grammar. This time-consuming work encourages students to marshal facts, rather than interpret texts or think critically about language.
Our education system should recognise the unique aspects of human intelligence. At school, this would mean a focus on teaching high-level thinking capabilities and designing a system to supercharge our intelligence. Literacy and numeracy remain fundamental, but now we must add AI literacy. Traditional subject areas, such as history, science and geography, should become the context through which critical thinking, increased creativity and knowledge mastery are taught. Rather than teaching students only how to collate and memorise information, we should prize their ability to interpret facts and weigh up the evidence to make an original argument.
Failure to change isn't an option. Now these technologies are here, we need humans to excel at what AI cannot do, so any workplace automation complements and enriches our lives and our intelligence.
This should be an amazing opportunity to use AI to become much smarter, but we must ensure that AI serves us, not the other way round. This will mean confronting the profit-driven imperatives of big tech companies and the illusionist tricks played by Silicon Valley. It will also mean carefully considering what types of tasks we're willing to offload to AI.
Some aspects of our intellectual activity may be dispensable, but many are not. While Silicon Valley conjures up its next magic trick, we must prepare ourselves to protect what we hold dear for ourselves and for future generations.
Read the original post:
Yes, AI could profoundly disrupt education. But maybe that's not a bad thing - The Guardian