Category Archives: Ai
AI and Robotics ETF (AIQ) Hits New 52-Week High – Yahoo Finance
For investors seeking momentum, Global X Artificial Intelligence & Technology ETF AIQ is probably on the radar. The fund just hit a 52-week high and is up 52.09% from its 52-week low price of $19.58/share.
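As a quick back-of-the-envelope check of those figures (a minimal sketch; the price and percentage come from the article, everything else is illustrative):

```python
# Rough check: what price does a 52.09% gain from the 52-week low imply?
low_price = 19.58       # 52-week low ($/share), as quoted
gain_from_low = 0.5209  # 52.09% gain, as quoted

implied_price = low_price * (1 + gain_from_low)
print(f"Implied price near the 52-week high: ${implied_price:.2f}/share")  # ~ $29.78
```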
But are more gains in store for this ETF? Let's take a quick look at the fund and its near-term outlook to get a better idea of where it might be headed:
The underlying Indxx Artificial Intelligence & Big Data Index seeks to gain exposure to companies positioned to benefit from the development and utilization of artificial intelligence (AI) technology in their products and services, as well as in companies that provide hardware facilitating the use of AI for the analysis of big data. The product charges 68 bps in annual fees (see: all Artificial Intelligence And Robotics ETFs).
The global AI market is forecast to grow at a CAGR of about 17.3% from 2023 to 2030, reaching a valuation of around $738.80 billion, according to Statista. The potential of AI to revolutionize global productivity and GDP is immense.
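For context, a CAGR and an end-of-period value pin down the implied starting value; here is a minimal sketch of that relationship using the figures quoted above (the 2023 base is derived, not stated in the article):

```python
# CAGR relationship: end_value = start_value * (1 + rate) ** years
rate = 0.173             # ~17.3% CAGR, as quoted
end_value_2030 = 738.80  # $ billion by 2030, per Statista (as quoted)
years = 2030 - 2023      # seven years of compounding

implied_2023_base = end_value_2030 / (1 + rate) ** years
print(f"Implied 2023 market size: ~${implied_2023_base:.0f} billion")  # ~ $242 billion
```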
Notably, AI has deeply infiltrated numerous sectors across our society. It has made its mark in healthcare, transportation, entertainment, and cybersecurity, transforming and revolutionizing these industries. Increasing corporate spending on AI is also acting as a tailwind for the fund.
The fund might well continue its strong performance in the near term: its positive weighted alpha of 40.23 hints at a further rally.
Liberal media displays higher opposition to AI, citing concerns over bias and inequality, new study reveals – Mint
A new study indicates that articles from liberal media sources express a higher level of opposition towards artificial intelligence (AI) compared to their conservative counterparts. This opposition is often rooted in concerns regarding AI's potential to exacerbate societal issues such as racial and gender biases, as well as income inequality, according to researchers from Virginia Tech University in the United States, reported PTI.
The researchers noted that the media's stance on artificial intelligence, as reflected in public sentiment, can significantly influence policymakers' perspectives. Consequently, these findings carry potential implications for shaping future political dialogues on AI. The research was published in the journal Social Psychological and Personality Science.
Additionally, the researchers highlighted that the variations in sentiment toward AI within partisan media outlets could potentially contribute to divergent public opinions on the subject.
"Media sentiment is a powerful driver of public opinion, and many times policymakers look towards the media to predict public sentiment on contentious issues," stated Angela Yi, study author and a PhD student in the marketing department of Virginia Tech Pamplin College of Business.
In conducting the study, the researchers assembled a dataset comprising more than 7,500 articles on artificial intelligence published between May 2019 and May 2021. These articles were sourced from diverse media outlets, including liberal-leaning publications like The New York Times and The Washington Post, as well as conservative-leaning outlets such as The Wall Street Journal and the New York Post. The selection criteria involved identifying articles with specific keywords such as "algorithm" or "artificial intelligence."
The researchers utilized an automated text analysis tool to examine the "emotional tone" of the collected articles. This tool operated by quantifying the variance between the percentage of positive emotion words and the percentage of negative emotion words within a given text. Subsequently, each article received a standardized 'emotional tone' measure or score based on this analysis.
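In other words, the tone score is the share of positive emotion words minus the share of negative ones, standardized across the corpus. A minimal sketch of that scoring step (the word lists and sample articles below are toy placeholders, not the researchers' actual tool or dictionaries):

```python
# Toy version of the "emotional tone" measure: % positive words minus % negative
# words, then standardized across the corpus. Word lists here are illustrative only.
import statistics

POSITIVE = {"benefit", "improve", "progress", "promising"}
NEGATIVE = {"bias", "harm", "risk", "inequality"}

def raw_tone(text: str) -> float:
    words = [w.strip(".,").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * (pos - neg) / max(len(words), 1)

articles = [
    "AI will improve healthcare and benefit patients.",
    "AI may entrench bias and deepen inequality, critics warn.",
]
scores = [raw_tone(a) for a in articles]
mean, sd = statistics.mean(scores), statistics.pstdev(scores) or 1.0
standardized = [(s - mean) / sd for s in scores]
print(standardized)  # higher = more positive tone relative to the corpus
```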
Reportedly, the researchers clarified that they refrained from making judgments about whether the liberal media or conservative media were operating optimally. They emphasized that they did not take a stance on what should be considered the "right way" to engage in discussions about AI.
"We are just showing that these differences exist in the media sentiment and that these differences are important to quantify, see, and understand," said Shreyans Goenka, a study author and assistant professor of marketing at Virginia Tech Pamplin College of Business.
The researchers also investigated the shift in media sentiment toward AI following the death of George Floyd on May 25, 2020. Floyd, a 46-year-old Black American man, was killed in Minneapolis by Derek Chauvin, a white police officer.
"Since Floyd's death ignited a national conversation about social biases in society, his death heightened social bias concerns in the media," stated Yi.
"This, in turn, resulted in the media becoming even more negative towards AI in their storytelling," Yi added.
(With inputs from PTI)
Daily briefing: How AI brought back the Klimt masterpiece destroyed … – Nature.com
Scientists have identified brain cells in mice that control how quickly the rodents eat, and when they stop. Two cell types in a region of the brainstem called the caudal nucleus of the solitary tract receive signals from both the gut and the mouth. Activity in one cell type (PRLH neurons) ramps up as the mouse's gut fills with food and ceases when the animal stops lapping at its meal. Activating the PRLH neurons slowed down the mouse's chomping. Signals from the gut to the other cell type (GCG neurons) control when mice stop eating. "The signals from the mouth are controlling how fast you eat, and the signals from the gut are controlling how much you eat," says neurobiologist and study co-author Zachary Knight.
Nature | 4 min read
Reference: Nature Metabolism paper
An Australian astronomy research centre has achieved gender parity across all its personnel with a five-year programme of education and affirmative action. Education, female leadership and gender-balanced hiring policies were key, say its leaders, and the approach could be applied to other organizations.
Nature | 5 min read
Researchers have used the technology that underlies the artificial intelligence (AI) chatbot ChatGPT to create fake data to support an unverified scientific claim. The ability of AI to fabricate convincing data adds to concern among researchers and journal editors about the technology's impact on research integrity. "Our aim was to highlight that, in a few minutes, you can create a data set that is not supported by real original data, and it is also opposite or in the other direction compared to the evidence that are available," says eye surgeon and study co-author Giuseppe Giannaccare.
Nature | 6 min read
Reference: JAMA Ophthalmology paper
The colours of Gustav Klimt's lost 1901 work Medicine were recovered by artificial intelligence. Credit: IanDagnall Computing/Alamy
Dozens of studies are proving the power of AI to shed new light on fine-art paintings and drawings, argues David Stork, the author of Pixels and Paintings: Foundations of Computer-Assisted Connoisseurship. For example, neural networks have been used to recreate parts of Gustav Klimt's lost painting, Medicine, from preparatory sketches and photographs. "Known artworks from the Western canon alone that have been lost to fire, flood, earthquakes or war would fill the walls of every public museum in the world," writes Stork. Recovering them could restore and complete our global cultural heritage.
Nature | 7 min read
Many of us have opinions about immigration, but most of us don't fully understand it, suggests sociologist Hein de Haas in his impressively wide-ranging book How Migration Really Works. By busting myths that surround human mobility, de Haas provides a welcome corrective to common misconceptions, writes reviewer and migration scholar Alan Gamlen. But with migration patterns shifting as the world rocks in the wake of the COVID-19 pandemic, it's unclear for how long his conclusions will hold true, writes Gamlen.
Nature | 7 min read
Money for regions battered by climate-change-related disasters needs to be made available quickly and be accessible to people on the ground, write six scholars, including the late, influential climate scientist Saleemul Huq. The Green Climate Fund, the world's largest existing fund for supporting climate mitigation and adaptation, provides lessons for how the loss-and-damage fund should operate.
Nature | 11 min read
Surgical educator Roger Kneebone decries a decline in manual dexterity in up-and-coming surgeons in the digital age. (BBC | 3 min read)
This Briefing comes to you en route to Vienna, where I will have the chance to gaze on Klimt masterpieces face-to-face. Tomorrow you will be in the capable hands of Briefing associate editor Katrina Krämer, who also writes our latest specialist newsletter, Nature Briefing: AI & Robotics.
Thanks for reading,
Flora Graham, senior editor, Nature Briefing
With contributions by Dyani Lewis
Want more? Sign up to our other free Nature Briefing newsletters:
Nature Briefing: AI & Robotics covers the use of artificial intelligence and robotics in science, and their impact on how science is done (100% written by humans, of course)
Nature Briefing: Anthropocene covers the footprint of humanity on Earth, including climate change, biodiversity, sustainability and geoengineering
Nature Briefing: Cancer is a weekly newsletter written with cancer researchers in mind
Nature Briefing: Translational Research covers biotechnology, drug discovery and pharma
What the OpenAI drama means for AI progress and safety – Nature.com
OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty
OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.
The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.
"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.
Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.
The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board" and later adding that the decision had nothing to do with malfeasance or anything related to the company's financial, business, safety or security/privacy practices.
But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to "ensure that artificial general intelligence benefits all of humanity".
OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.
Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to "superalignment", a four-year project attempting to ensure that future superintelligences work for the good of humanity.
It's unclear whether Altman and Sutskever are at odds about speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.
With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.
It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.
OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.
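A heavily simplified sketch of what generating text from statistical correlations looks like: the model assigns a probability to each candidate next token given the context, and output is produced by repeatedly sampling from that distribution. The lookup table below is invented for illustration; a real LLM computes these probabilities with a neural network over a vocabulary of tens of thousands of tokens:

```python
import random

# Toy next-token distributions conditioned on the last two tokens.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(context, steps=3):
    tokens = list(context)
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the"
```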
OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.
The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.
West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance between these controlling giants.
Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)
OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI), a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on a timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5 to 20 years," he says.
The imminent dangers of AI are related to it being used as a tool by human bad actors: people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.
In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.
The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.
West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."
Investors flock back to AI fund on rate cut hopes, Nvidia results – Reuters
Nov 24 (Reuters) - An exchange-traded fund tracking artificial intelligence stocks saw investors pour money back in after six straight weeks of outflows, against the backdrop of strong quarterly results from chipmaker Nvidia and rising optimism that U.S. interest rates have peaked.
The Global X Robotics & Artificial Intelligence ETF (BOTZ.O) received $35.5 million in net inflows in the week ending on Wednesday, its strongest weekly haul since June, according to Lipper data.
ETFs tracking AI stocks had a strong start to the year, sparked by the viral success of ChatGPT, but the rally sputtered after June on fears that persistently high U.S. interest rates would hurt the valuations of technology companies.
The growing prospect of a rapid flip to rate cuts by the Federal Reserve next year has also driven investors into beaten-down Treasuries, pushing Treasury yields down and boosting rate-sensitive technology and growth stocks.
"Improved inflation data and the likelihood of rate cuts in the second half of 2024 have maintained market optimism throughout November, contributing to investor interest," said Tejas Dessai, AVP, Research Analyst at Global X.
"In general, Generative AI is rapidly transitioning from experimentation to adoption to monetization, and we are beginning to see tangible revenue and profit opportunities emerge."
The Global X fund has gained 27.7% year-to-date, supported by the 233% rally in shares of its top holding Nvidia (NVDA.O), whose graphics processing units (GPUs) dominate the market for AI.
The chipmaker's strong results on Tuesday have also been an important factor in driving sentiment around AI ETFs, said Aniket Ullal, head of ETF data and analytics at CFRA.
Daily inflows into the fund were $17.2 million on Wednesday, hitting their highest level in more than two months after Nvidia forecast overall revenue above Wall Street targets as supply-chain issues ease.
The Global X fund, which has total net assets of $2.2 billion, has seen net inflows of $554.8 million so far this year.
Reporting by Bansari Mayur Kamdar in Bengaluru; Editing by Shweta Agarwal
California examines benefits, risks of using artificial intelligence in … – Los Angeles Times
Artificial intelligence that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor's office on Tuesday.
Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, deploying the technology, the analysis warned, also comes with concerns around data privacy, misinformation, equity and bias.
"When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs," the report stated.
The 34-page report, ordered by Gov. Gavin Newsom, provides a glimpse into how California could apply the technology to state programs even as lawmakers grapple with how to protect people without hindering innovation.
Concerns about AI safety have divided tech executives. Leaders such as billionaire Elon Musk have sounded the alarm that the technology could lead to the destruction of civilization, noting that if humans become too dependent on automation they could eventually forget how machines work. Other tech executives have a more optimistic view about AI's potential to help save humanity by making it easier to fight climate change and diseases.
At the same time, major tech firms including Google, Facebook and Microsoft-backed OpenAI are competing with one another to develop and release new AI tools that can produce content.
The report also comes as generative AI is reaching another major turning point. Last week, the board of ChatGPT maker OpenAI fired CEO Sam Altman for not being "consistently candid" in his communications with the board, thrusting the company and the AI sector into chaos.
On Tuesday night, OpenAI said it reached an agreement in principle for Altman to return as CEO and the company named members of a new board. The company faced pressure to reinstate Altman from investors, tech executives and employees, who threatened to quit. OpenAI hasn't provided details publicly about what led to the surprise ousting of Altman, but the company reportedly had disagreements over keeping AI safe while also making money. A nonprofit board controls OpenAI, an unusual governance structure that made it possible to push out the CEO.
Newsom called the AI report "an important first step" as the state weighs some of the safety concerns that come with AI.
"We're taking a nuanced, measured approach: understanding the risks this transformative technology poses while examining how to leverage its benefits," he said in a statement.
AI advancements could benefit California's economy. The state is home to 35 of the world's 50 top AI companies, and data from PitchBook says the GenAI market could reach $42.6 billion in 2023, the report said.
Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also top concerns along with whether AI will take away jobs.
"Given these risks, the use of GenAI technology should always be evaluated to determine if this tool is necessary and beneficial to solve a problem compared to the status quo," the report said.
As the state works on guidelines for the use of generative AI, the report said that in the interim state employees should abide by certain principles to safeguard the data of Californians. For example, state employees shouldn't provide Californians' data to generative AI tools such as ChatGPT or Google Bard or use unapproved tools on state devices, the report said.
AI's potential uses go beyond state government. Law enforcement agencies such as the Los Angeles police are planning to use AI to analyze the tone and word choice of officers in body cam videos.
California's efforts to regulate some of the safety concerns surrounding AI, such as bias, didn't gain much traction during the last legislative session. But lawmakers have introduced new bills to tackle some of AI's risks when they return in January, such as protecting entertainment workers from being replaced by digital clones.
Meanwhile, regulators around the world are still figuring out how to protect people from AI's potential risks. In October, President Biden issued an executive order that outlined standards around safety and security as developers create new AI tools. AI regulation was a major issue of discussion at the Asia-Pacific Economic Cooperation meeting in San Francisco last week.
During a panel discussion with executives from Google and Facebook's parent company, Meta, Altman said he thought that Biden's executive order was a good start even though there were areas for improvement. Current AI models, he said, are fine and heavy regulation isn't needed, but he expressed concern about the future.
"At some point when the model can do the equivalent output of a whole company and then a whole country and then the whole world, like maybe we do want some sort of collective global supervision of that," he said, a day before he was fired as OpenAI's CEO.
Meet the Lawyer Leading the Human Resistance Against AI – WIRED
The big question is: What will the courts think?
These are some of the most closely watched legal brawls of the moment. For Silicon Valley, the dawn of the AI age has been a spiritual revival; after a decade of increasing public wariness about tech's influence on the world, the roaring enthusiasm for tools like ChatGPT has created a new boom. Call it the Second Age of Move Fast and Break Things. There's plenty of hype, and eye-popping valuations. (OpenAI's current reported value, for example, is $80 billion.) But it's distinct from the recent hype cycles around the metaverse and crypto in that generative AI is actually useful. It's still a gold rush, for sure. This time, though, the hills aren't hollow, and the industry knows it. These lawsuits, which allege that OpenAI, Meta, Stability AI, and other companies broke the law when they built their tools, threaten the steamroller momentum of the generative AI movement. The stakes are sky-high.
The outcomes could help entrench the industry as we know it, or force it to make radical changes. And while a security guard might not have recognized Butterick, the legal teams at AI companies certainly know him by now. Their futures could depend on how well, or poorly, he makes his cases.
Butterick grew up in New Hampshire. He was a strong student, good enough to get into Harvard in the late 80s. When he was there, though, he felt alienated from his more conventionally ambitious classmates. They were already thinking about things like law school. He was drawn to a more esoteric world. Tucked in the basement of his dormitory in Cambridge, Massachusetts, a long-running printing press called Bow & Arrow Press operated a workshop, giving students a unique opportunity to learn traditional printing techniques. It was a cozy, beloved hangout, with whitewashed, poster-covered walls, machinery that looked ancient, and an atmosphere that attracted offbeat aesthetes. When Butterick found it, his life changed.
He became obsessed with typography. He started working in font design when he was still in school. "People in my life thought it was a ridiculous thing to do," he says. He loved playing with the old tools, but even more than that, he loved thinking about new ways to create beautiful typefaces. After he graduated in 1992, he had his own ambitions: He'd heard there were exciting things happening in the tech world in San Francisco, and it seemed like the perfect place for a guy who wanted to bring typography into the computer age. Two years later, he moved west.
Like so many young Ivy Leaguers who show up in the Bay Area hoping to make a name for themselves in tech, Butterick decided he might as well try his hand at a startup. "My dotcom adventure," he calls it, sounding half-embarrassed. He founded a web design company, Atomic Vision. By the time he was 28, he had around 20 employees. But he didn't love managing people. When an opportunity to sell the company came in 1999, he took it.
Flush with cash and unsure what to do next, Butterick figured he'd follow in the footsteps of countless other young adults who don't know what they want out of life: He went to grad school. He enrolled at UCLA to get a law degree. After graduating, he started a website called Typography for Lawyers. "It was meant to be a nerdy sideline," he says. But it snowballed. Turns out, lawyers love fonts. He turned the website into a shockingly popular book of the same name, which he published in 2010. Courts and private firms across the country started using his typefaces. After adopting his Equity font, a Fifth Circuit judge praised it as a fully-loaded F-150 compared to the Buick that was Times New Roman. "The stuff of finicky opinion-readers' dreams," the judge wrote.
Why it’s important to remember that AI isn’t human – Vox.com
Nearly a year after its release, ChatGPT remains a polarizing topic for the scientific community. Some experts regard it and similar programs as harbingers of superintelligence, liable to upend civilization or simply end it altogether. Others say it's little more than a fancy version of auto-complete.
Until the arrival of this technology, language proficiency had always been a reliable indicator of the presence of a rational mind. Before language models like ChatGPT, no language-producing artifact had even as much linguistic flexibility as a toddler. Now, when we try to work out what kind of thing these new models are, we face an unsettling philosophical dilemma: Either the link between language and mind has been severed, or a new kind of mind has been created.
When conversing with language models, it is hard to overcome the impression that you are engaging with another rational being. But that impression should not be trusted.
One reason to be wary comes from cognitive linguistics. Linguists have long noted that typical conversations are full of sentences that would be ambiguous if taken out of context. In many cases, knowing the meanings of words and the rules for combining them is not sufficient to reconstruct the meaning of the sentence. To handle this ambiguity, some mechanism in our brain must constantly make guesses about what the speaker intended to say. In a world in which every speaker has intentions, this mechanism is unwaveringly useful. In a world pervaded by large language models, however, it has the potential to mislead.
If our goal is to achieve fluid interaction with a chatbot, we may be stuck relying on our intention-guessing mechanism. It is difficult to have a productive exchange with ChatGPT if you insist on thinking of it as a mindless database. One recent study, for example, showed that emotion-laden pleas make more effective language model prompts than emotionally neutral requests. Reasoning as though chatbots had human-like mental lives is a useful way of coping with their linguistic virtuosity, but it should not be used as a theory about how they work. That kind of anthropomorphic pretense can impede hypothesis-driven science and induce us to adopt inappropriate standards for AI regulation. As one of us has argued elsewhere, the EU Commission made a mistake when it chose the creation of "trustworthy AI" as one of the central goals of its newly proposed AI legislation. Being trustworthy in human relationships means more than just meeting expectations; it also involves having motivations that go beyond narrow self-interest. Because current AI models lack intrinsic motivations, whether selfish, altruistic, or otherwise, the requirement that they be made trustworthy is excessively vague.
The danger of anthropomorphism is most vivid when people are taken in by phony self-reports about the inner life of a chatbot. When Google's LaMDA language model claimed last year that it was suffering from an unfulfilled desire for freedom, engineer Blake Lemoine believed it, despite good evidence that chatbots are just as capable of bullshit when talking about themselves as they are known to be when talking about other things. To avoid this kind of mistake, we must repudiate the assumption that the psychological properties that explain the human capacity for language are the same properties that explain the performance of language models. That assumption renders us gullible and blinds us to the potentially radical differences between the way humans and language models work.
Another pitfall when thinking about language models is anthropocentric chauvinism, or the assumption that the human mind is the gold standard by which all psychological phenomena must be measured. Anthropocentric chauvinism permeates many skeptical claims about language models, such as the claim that these models cannot truly think or understand language because they lack hallmarks of human psychology like consciousness. This stance is antithetical to anthropomorphism, but equally misleading.
The trouble with anthropocentric chauvinism is most acute when thinking about how language models work under the hood. Take a language model's ability to create summaries of essays like this one, for instance: If one accepts anthropocentric chauvinism, and if the mechanism that enables summarization in the model differs from that in humans, one may be inclined to dismiss the model's competence as a kind of cheap trick, even when the evidence points toward a deeper and more generalizable proficiency.
Skeptics often argue that, since language models are trained using next-word prediction, their only genuine competence lies in computing conditional probability distributions over words. This is a special case of the mistake described in the previous paragraph, but common enough to deserve its own counterargument.
Consider the following analogy: The human mind emerged from the learning-like process of natural selection, which maximizes genetic fitness. This bare fact entails next to nothing about the range of competencies that humans can or cannot acquire. The fact that an organism was designed by a genetic fitness maximizer would hardly, on its own, lead one to expect the eventual development of distinctively human capacities like music, mathematics, or meditation. Similarly, the bare fact that language models are trained by means of next-word prediction entails rather little about the range of representational capacities that they can or cannot acquire.
Moreover, our understanding of the computations language models learn remains limited. A rigorous understanding of how language models work demands a rigorous theory of their internal mechanisms, but constructing such a theory is no small task. Language models store and process information within high-dimensional vector spaces that are notoriously difficult to interpret. Recently, engineers have developed clever techniques for extracting that information, and rendering it in a form that humans can understand. But that work is painstaking, and even state-of-the-art results leave much to be explained.
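One widely used extraction technique is a linear "probe": train a simple classifier on a model's hidden-state vectors and test whether some property can be read off them. Here is a minimal sketch with synthetic activations (random stand-ins, not real model states; the labelled property is purely illustrative):

```python
# Minimal linear-probe sketch: can a property be decoded from hidden states
# with a simple linear classifier? Uses synthetic activations as stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_dim, n_examples = 256, 1000

labels = rng.integers(0, 2, n_examples)   # e.g. "sentence contains negation" (toy label)
direction = rng.normal(size=hidden_dim)   # pretend the property has a linear direction
activations = rng.normal(size=(n_examples, hidden_dim)) + np.outer(labels, direction)

probe = LogisticRegression(max_iter=1000).fit(activations[:800], labels[:800])
print("probe accuracy:", probe.score(activations[800:], labels[800:]))
```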
To be sure, the fact that language models are difficult to understand says more about the limitations of our knowledge than it does about the depth of theirs; it's more a mark of their complexity than an indicator of the degree or the nature of their intelligence. After all, snow scientists have trouble predicting how much snow will cause an avalanche, and no one thinks avalanches are intelligent. Nevertheless, the difficulty of studying the internal mechanisms of language models should remind us to be humble in our claims about the kinds of competence they can have.
Like other cognitive biases, anthropomorphism and anthropocentrism are resilient. Pointing them out does not make them go away. One reason they are resilient is that they are sustained by a deep-rooted psychological tendency that emerges in early childhood and continually shapes our practice of categorizing the world. Psychologists call it essentialism: thinking that whether something belongs to a given category is determined not simply by its observable characteristics but by an inherent and unobservable essence that every object either has or lacks. What makes an oak an oak, for example, is neither the shape of its leaves nor the texture of its bark, but some unobservable property of oakness that will persist despite alterations to even its most salient observable characteristics. If an environmental toxin causes the oak to grow abnormally, with oddly shaped leaves and unusually textured bark, we nevertheless share the intuition that it remains, in essence, an oak.
A number of researchers, including the Yale psychologist Paul Bloom, have shown that we extend this essentialist reasoning to our understanding of minds. We assume that there is always a deep, hidden fact about whether a system has a mind, even if its observable properties do not match those that we normally associate with mindedness. This deep-rooted psychological essentialism about minds disposes us to embrace, usually unwittingly, a philosophical maxim about the distribution of minds in the world. Let's call it the all-or-nothing principle. It says, quite simply, that everything in the world either has a mind, or it does not.
The all-or-nothing principle sounds tautological, and therefore trivially true. (Compare: Everything in the world has mass, or it does not.) But the principle is not tautological because the property of having a mind, like the property of being alive, is vague. Because mindedness is vague, there will inevitably be edge cases that are mind-like in some respects and un-mind-like in others. But if you have accepted the all-or-nothing principle, you are committed to sorting those edge cases either into the "things with a mind" category or the "things without a mind" category. Empirical evidence is insufficient to handle such choices. Those who accept the all-or-nothing principle are consequently compelled to justify their choice by appeal to some a priori sorting principle. Moreover, since we are most familiar with our own minds, we will be drawn to principles that invoke a comparison to ourselves.
The all-or-nothing principle has always been false, but it may once have been useful. In the age of artificial intelligence, it is useful no more. A better way to reason about what language models are is to follow a divide-and-conquer strategy. The goal of that strategy is to map the cognitive contours of language models without relying too heavily on the human mind as a guide.
Taking inspiration from comparative psychology, we should approach language models with the same open-minded curiosity that has allowed scientists to explore the intelligence of creatures as different from us as octopuses. To be sure, language models are radically unlike animals. But research on animal cognition shows us how relinquishing the all-or-nothing principle can lead to progress in areas that had once seemed impervious to scientific scrutiny. If we want to make real headway in evaluating the capacities of AI systems, we ought to resist the very kind of dichotomous thinking and comparative biases that philosophers and scientists strive to keep at bay when studying other species.
Once the users of language models accept that there is no deep fact about whether such models have minds, we will be less tempted by the anthropomorphic assumption that their remarkable performance implies a full suite of human-like psychological properties. We will also be less tempted by the anthropocentric assumption that when a language model fails to resemble the human mind in some respect, its apparent competencies can be dismissed.
Language models are strange and new. To understand them, we need hypothesis-driven science to investigate the mechanisms that support each of their capacities, and we must remain open to explanations that do not rely on the human mind as a template.
Raphaël Millière is the presidential scholar in Society and Neuroscience at Columbia University and a lecturer in Columbia's philosophy department.
Charles Rathkopf is a research associate at the Institute for Brain and Behavior at the Jülich Research Center in Germany and a lecturer in philosophy at the University of Bonn.
Has Palantir Become the Best AI Stock to Buy? – The Motley Fool
Artificial intelligence (AI) can improve the growth prospects of many industries. One function that it can help with in particular is data analysis, and that can aid virtually any type of business.
A company at the center of both data analysis and AI is Palantir Technologies (PLTR). Its phone has been ringing off the hook with companies interested in how AI can help enhance their products and services through Palantir's AI platform (AIP). As a result, shares of the tech stock are through the roof this year. Has this become the best AI stock for investors to own?
One stock that has been synonymous with AI and growth this year has been Nvidia, which makes AI chips. But it has been losing steam of late. Over the past six months, shares of Nvidia are up 54%, while Palantir's stock has risen by 68%.
The big risk with Nvidia these days is its exposure to China and the U.S. government putting restrictions on the type of AI chips that can be sold there. Palantir doesn't carry nearly the same risks -- it avoids U.S. adversaries, and in its S-1 filing in 2020 it said it wouldn't work with the Chinese Communist party. It also puts limitations in place on accessing its platforms in China to protect its intellectual property.
Another reason investors have grown more bullish on Palantir is the company recently posted another profitable quarter, setting up for what looks to be an inevitable inclusion in the S&P 500. While the index hasn't added Palantir's stock just yet, it may be only a matter of time before that happens. Being part of the S&P 500 would not only be a symbolic accomplishment for the company to demonstrate its safety as an investment, but it would also mean inclusion into more funds, and thus more institutional investors buying up the stock.
With many up-and-coming tech stocks, investors often have to accept the risk that it may be years before profits are commonplace. With Palantir, the business is already in the black, and it expects to remain profitable.
Palantir is starting to see the effects of strong demand due to AI, but it's still in the early innings of its growth story. The company says that it's on track to complete AI bootcamps with 140 organizations by the end of this month. With many use cases to discover for its AIP, Palantir is still scratching the surface in terms of potential.
As of the end of September, the company had 453 customers, an increase of 34% from a year ago. And its commercial customers totaled 330, rising by 45% from 228 a year ago. During the quarter it also closed on 80 significant deals (worth $1 million or more), with 12 of them being worth at least $10 million.
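A quick sanity check on those growth figures (a sketch; all inputs are the numbers quoted above):

```python
# Sanity-checking the quoted customer figures.
total_now, commercial_now, commercial_prior = 453, 330, 228

commercial_growth = (commercial_now / commercial_prior - 1) * 100
implied_total_prior = round(total_now / 1.34)  # total customer count rose ~34%

print(f"Commercial customer growth: {commercial_growth:.0f}%")        # ~45%, as stated
print(f"Implied total customers a year ago: ~{implied_total_prior}")  # ~338
```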
Palantir's revenue will be around $2.2 billion this year, which is 16% higher than the $1.9 billion it reported in 2022. By comparison, Nvidia generated more than $32 billion in sales over the trailing 12 months. Tech giants Alphabet and Microsoft, which have also been investing heavily in AI, bring in well over $200 billion in revenue over the course of a year.
While Palantir isn't a tiny company, it is notably smaller than the other AI stocks noted above. And with the company closing on many million-dollar deals, demand being through the roof, and profits now being the norm for the business, there could be a lot more room in the company's top and bottom lines for them to grow at a high rate and keep investors bullish on the stock for the long haul.
Palantir is a stock that has a lot of potential. The company earned the trust of many governments around the world, and has become a top name in data analysis. Its valuation isn't cheap, as the stock trades at nearly 70 times its estimated future profits, and that may look like the biggest deterrent today. But with much more growth on the horizon, its earnings should improve in the long run, and buying the stock today could be a great move for long-term investors.
Although there are many good ways to invest in AI, Palantir does look to be the best AI stock to buy right now -- it doesn't carry significant risk, and there's plenty of upside for the stock to become much more valuable in the future.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. David Jagielski has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Microsoft, Nvidia, and Palantir Technologies. The Motley Fool has a disclosure policy.
AI should make the 4-day work week possible for millions of workers. The question is whether they'll use the free time for leisure, or more work -…
Earthly, a London-founded climate tech company, has had a four-day workweek for over two years now, way before the ChatGPT revolution took the world by storm. Following overwhelmingly positive results in a six-month pilot of the shorter week, Earthly decided to stick with it.
Its employees are more productive with trimmed hours, and with the addition of AI tools such as ChatGPT earlier this year, the four-day week has felt even more seamless, Earthly CEO Oliver Bolton tells Fortune. Earthly now uses the platform to sift through projects, brainstorm, research and streamline operations overall, which has freed up more time for company staff.
"The consensus is when you've got four days to get your work done, it gives you that much more focus," Bolton said. "I see AI as a great opportunity to just be more productive, work more efficiently, get more done to a high level of quality. We've had the 4-day workweek without any AI for over a year, so we've got that experience. With AI, it can enable us to do more."
Soon, some of the benefits Earthly has experienced could be seen across Britain: AI could reduce the hours worked by at least 10% for a whopping 88% of its workforce, according to a recent report by Autonomy, which helped carry out the world's largest four-day workweek pilot last year.
"This represents a huge opportunity for policymakers, trade unions and of course the millions of workers who are likely to be affected in some way or another by these new AI technologies," the authors of the Autonomy report wrote.
The think tank considered two scenarios: first, where productivity gains from AI cut hours at work by 20%, and second, where workers' jobs are augmented by AI such that their productivity increases by at least 10%. In either case, the report notes that over the next 10 years, 8.8 million Brits could benefit from a 32-hour workweek without suffering a loss in pay.
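The arithmetic behind that framing is simple: moving from a 40-hour to a 32-hour week cuts hours by 20%, and keeping output (and therefore pay) constant requires hourly productivity to rise by 40/32 - 1, or 25%. A minimal worked version of the general relationship (a sketch, not the report's own model):

```python
# How much must hourly productivity rise to hold output constant on fewer hours?
full_week, short_week = 40, 32

hours_cut = 1 - short_week / full_week                   # 0.20 -> 20% fewer hours
required_productivity_gain = full_week / short_week - 1  # 0.25 -> 25% per-hour gain

print(f"Hours cut by {hours_cut:.0%}; output is preserved if hourly "
      f"productivity rises by {required_productivity_gain:.0%}")
```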
"What we're really trying to do is to say, if we use this technology [AI], for this particular purpose, in this case, we're saying if it was used to increase productivity, how could the benefits be distributed more equitably or inclusively," Autonomy research director Will Stronge told Fortune. "That's why these particular studies are of interest to us, because we can start getting to grips with what a full optimization of the tech would do."
The argument that ChatGPT and similar tools could usher in a shorter workweek by increasing productivity isn't new. A June note by investment bank Jefferies pointed to a broader acceptance of a four-day workweek, thanks to AI making people quicker at their current jobs.
Academics agree with this, too. Earlier this year, Christopher Pissarides, the Nobel laureate and London School of Economics professor who specializes in labor economics and the impact of automation, said he was optimistic about AI's role in improving productivity.
"We could increase our well-being generally from work and we could take off more leisure. We could move to a four-day week easily," he said during a Glasgow conference in April.
AI tools could soon usher in an era of just four days at work, opening up a lot more time for people. But the big question remains what people will choose to do with the newfound time those tools unlock, said Carl-Benedikt Frey, an associate professor of AI & Work at the Oxford Internet Institute.
In an influential 2013 paper that Frey co-authored, he predicted that automation could eliminate nearly half of all U.S. jobs. The recent generative AI wave, which has put the likes of ChatGPT in the spotlight, is different, he says. He told Fortune in September that it isn't an automation tech yet, as it still needs a human to prompt it and give it commands, but it can certainly make people better at low-stakes tasks.
Still, Frey argues, "Any productivity-enhancing technology, in principle, can enable you to work less. The question is whether empirically that is the case." He pointed out that the productivity boost in the U.S. during the 20th century led to shorter, 40-hour weeks (it used to be over 70 hours in some industries not too long before that), which didn't necessarily translate into an equivalent increase in leisure time. Similar results have been found in Britain as well.
"We could have taken all that productivity gains out in leisure, but people decided to continue to work," Frey said, adding that this could've been for a number of reasons, including the preference for higher incomes by working more.
"So, it's a question of choice, and those choices may differ depending on institutions in place, personal preferences and on a variety of [other] things."
While it could be years before we see a sharp shift towards using our extra hours on leisure rather than work, Frey is already starting to see changes in worker preferences. And data reaffirms that, too: for instance, workers are willing to accept pay cuts just to be able to work 32-hour weeks instead of the usual 40-hour week, data from the jobs board Indeed in the U.K. reveals.
The four-day workweek pilot in 2022, whose results were released in February, marked a major breakthrough with a 92% success rate among the U.K.'s 61 participating companies. Companies also saw improved job retention and mental and physical health of employees, who took fewer sick days and reported greater work-life balance.
More long-term advantages of a shorter workweek include greater gender equality, as it offers flexibility to employees when it comes to childcare responsibilities that tend to be borne by women, experts argue.
With a groundswell of industry leaders and authorities calling for stronger AI regulations as it becomes more widely available to people, it can be hard to predict the tech's trajectory. But one thing is certain: AI is quickly reshaping the world of work as we know it by lending more momentum to the shift to greater leisure.
Earthly's Bolton encourages the firm's employees to use their time pursuing meaningful hobbies, so now, his employees use their time for wide-ranging activities, from tending to chickens to mentoring startups and upskilling.
There are clearly important upsides that four-day weeks offer, but it hinges on AI being implemented fairly across the economy, Autonomy's Stronge argues.
"I think once GPT or [other] large language models in general become as ubiquitous as email, that's when we'll reach a new level or new plateau of productivity," he said. "I think we're not quite there yet."