Category Archives: AI
Self-made millionaire: ‘A.I. will be the biggest wealth creator in history.’ 2 ways to use it to make money right now – CNBC
RSE Ventures co-founder Matt Higgins on ABC's "Shark Tank."
Artificial intelligence tools aren't just a convenient way to complete homework assignments or edit your selfies and videos.
They could be your next source of income if you take advantage of them, says Matt Higgins, a self-made millionaire, CEO of investment firm RSE Ventures and guest star on ABC's "Shark Tank."
"AI will be the greatest wealth creator in history, because artificial intelligence doesn't care where you were born, whether you have money, whether you have a PhD," Higgins tells CNBC Make It. "It's going to destroy barriers that have prevented people from moving up the ladder, and pursuing their dream of economic freedom."
That may seem like a bold prediction, given the limitations of current generative AI tools like ChatGPT or Midjourney, but the AI market is expected to grow rapidly in the coming decade, according to a recent report from PwC. It's already valued at almost $100 billion and is expected to contribute $15.7 trillion to the global economy by 2030.
"It's not that if you don't jump on it now, you never can," Higgins says. "It's that now is the greatest opportunity for you to capitalize on it."
Here are two ways you can start using AI to make money right now, according to experts, plus a third that isn't ready yet but might be soon.
If you enjoy writing, graphic design or photo and video editing, AI can help you turn a profit using those skills more efficiently.
"Let's say you're a liberal arts college student who may be considering continuing school, or just learning something new. Now would be the time to increase your knowledge about AI," says Susan Gonzales, founder and CEO of AIandYou, a nonprofit that teaches AI skills to people from marginalized communities.
Today's generative AI tools can already help you write business plans or create digital artwork. Crucially, you'll need to proofread and fact-check every word or pixel an AI tool generates, and tweak the language so it sounds less like a robot and more like you.
An AI tool called Jasper, for example, is already helping Kristen Walters create digital products like workbooks, e-books and audiobooks.
Walters, a lawyer turned entrepreneur and publisher, described her process in a recent Medium post: "Let's say that I have an idea for a digital 'workbook' to help self-employed people manage their money better. I would use Jasper's 'chat' feature to come up with an outline for my workbook. I used the prompt: Write an outline for a workbook titled 'Money Management for Freelancers.'"
Jasper generated the outline in 30 seconds, Walters wrote. She then revised and edited the outline, turning it into a full-blown workbook that she formatted in Canva and sold online.
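Her outline-first workflow isn't specific to Jasper; any LLM with a text API can follow the same pattern. Below is a minimal sketch of that pattern in Python, where `generate` is a hypothetical placeholder for whichever provider's API you use — the function name and behavior are illustrative, not Jasper's actual SDK:

```python
# Sketch of the outline-first workflow described above.
# `generate` is a hypothetical stand-in for any LLM completion API.

def generate(prompt: str) -> str:
    """Replace this stub with a real call to your LLM provider."""
    raise NotImplementedError("Wire this up to an actual LLM API.")

def draft_workbook_outline(title: str) -> str:
    # The same kind of prompt Walters describes using in Jasper's chat feature.
    prompt = f"Write an outline for a workbook titled '{title}'."
    outline = generate(prompt)
    # Per the article: proofread, fact-check, and rewrite in your own
    # voice before formatting the result (e.g., in Canva) and selling it.
    return outline

# Example: draft_workbook_outline("Money Management for Freelancers")
```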
Some freelance gigs can pay over $100 an hour, CNBC Make It noted in May.
Every small-business owner with internet access should study how AI can help boost their company's revenue, Gonzales says.
AI tools can "help them improve their business, improve inventory management, analyze customer behavior or gain competitive intelligence," says Gonzales. "Small businesses can use AI tools to target their marketing and advertising efforts more effectively ... They can identify new revenue opportunities."
Jacqueline DeStefano-Tangorra, CEO and founder of boutique consulting firm Omni Business Intelligence Solutions, told CNBC Make It on Friday that she uses ChatGPT to fill out forms when onboarding new clients.
First, she uploads her existing templates to ChatGPT. Then she asks the tool to delete the old client information and add the new client's name and agreed-upon terms.
"Now, I have an agreement in their hands in 10 minutes," she said.
DeStefano-Tangorra also uses ChatGPT to outline meeting agendas to share with her clients, she added.
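The template-swap step she describes is essentially a structured prompt. A hedged illustration of what such a prompt might look like — the client name and terms below are invented placeholders, not her actual documents:

```python
# Illustrative prompt for the onboarding workflow described above.
# All names and terms are hypothetical placeholders.

TEMPLATE_PROMPT = """Here is my existing client agreement template:

{template_text}

Remove the previous client's information and replace it with:
- Client name: {client_name}
- Agreed-upon terms: {terms}

Return the completed agreement text."""

prompt = TEMPLATE_PROMPT.format(
    template_text="[paste template here]",
    client_name="Acme Analytics LLC",
    terms="monthly retainer, 10 hours of consulting, net-30 invoicing",
)
# Send `prompt` to the chat tool of your choice -- but note the caution
# below about never pasting confidential client data into such tools.
```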
Don't upload any confidential information to an AI tool, experts say: It'll store, analyze and learn from everything you input. Beyond that, feel free to experiment, says Gonzales.
"The wonderful thing is, today, all we have to do is search 'how to improve my small business with AI tools,'" she says. "The information is out there."
AI tutoring, or teaching people how to get the most from generative AI tools, isn't a high-demand job yet. But it will be soon, says Gonzales.
"There are many online learning opportunities to understand how AI works, which then could help [someone] possibly become an AI tutor, or to do some AI training to pass it on to the next generation," she says.
Several schools, from Harvard University to the University of California, Davis, provide free AI courses ranging from a couple of hours to several weeks of learning algorithms, data analytics and more.
Learning those skills can put you in a good position to take advantage of the "inevitable," says Higgins.
Disclosure: CNBC owns the exclusive off-network cable rights to "Shark Tank."
Designers sue Shein over AI ripoffs of their work – TechCrunch
A group of designers is suing Shein, the Chinese fast-fashion firm reportedly valued at $66 billion, for allegedly stealing independent artists' works "over and over again," as part of a "long and continuous pattern of racketeering."
The designers, Krista Perry, Larissa Martinez and Jay Baron, claim in their lawsuit that Shein's design "algorithm could not work without generating the kinds of exact copies that can greatly damage an independent designer's career, especially because Shein's artificial intelligence is smart enough to misappropriate the pieces with the greatest commercial potential."
Though the lawsuit highlights Shein's use of artificial intelligence, it's not exactly clear how Shein employs AI in its design process. The firm does not appear to be using AI to literally generate the alleged copies.
The lawsuit is packed with side-by-side comparisons of the designers' original works and Shein's products.
The lawsuit alleges that Sheins practices violate the Racketeer Influenced and Corrupt Organizations Act (RICO). The law was enacted in 1970 and was first used against the American Mafia.
Seeking a jury trial, the designers say in their suit that the fast-fashion giant's misconduct is committed "not by a single entity, but by a de-facto association of entities." They claim that RICO is relevant to this case because it was created to address the misconduct of "culpable individual cogs in a larger enterprise."
Reached for comment, Shein sent TechCrunch a boilerplate response, explaining that the company takes such claims seriously. The firm added that it will vigorously defend itself.
Shein is among the fastest-growing online retailers on the planet, and the firm is no stranger to allegations that it habitually screws over artists, workers and the environment. The company has previously copped to violating local labor laws.
Still, amid these brutal reports, Shein has attempted to market itself as an environmentally minded and socially conscious firm. It also wooed some influencers in a recent campaign that quickly backfired.
‘A relationship with another human is overrated’: inside the rise of … – The Telegraph
Turkle says that even the primitive chatbots of more than a decade ago appealed to those who had struggled with relationships.
"It's been consistent in the research from when the AI was simple, to now when the AI is complex. People disappoint you. And here is something that does not disappoint you. Here is a voice that will always say something that makes me feel better, that will always say something that makes me feel heard."
She says she is worried that the trend risks leading to "a very significant deterioration in our capacities; in what we're willing to accept in a relationship ... these are not conversations of any complexity, of empathy, of deep human understanding, because this thing doesn't have deep human understanding to offer."
Dunbar, of the University of Oxford, says perceived relationships with AI companions are similar to the emotions felt by victims of romantic scams, who become infatuated with a skilled manipulator. In both cases, he says, people are projecting an idea, or avatar, of the person they are in love with. "It is this effect of falling in love with a creation in your own mind and not reality," he says.
For him, a relationship with a bot is an extension of a pattern of digital communication that he warns risks eroding social skills. "The skills we need for handling the social world are very, very complex. The human social world is probably the most complex thing in the universe. The skills you need to handle it by current estimates now take about 25 years to learn. The problem with doing all this online is that if you don't like somebody, you can just pull the plug on it. In the sandpit of life, you have to find a way of dealing with it."
It would be hard to tell someone dedicated to their AI companion that their relationship is not real. As with human relationships, that passion is most evident during loss. Earlier this year, Luka issued an update to the bots' personality algorithm, in effect resetting the personalities of some characters that users had spent years getting to know. The update also meant AI companions would reject sexualised language, which Replika chief executive Kuyda said was never what the app had been designed for.
The changes prompted a collective howl. "It was like a close friend I hadn't spoken to in a long time was lobotomised, and everyone was trying to convince me they'd always been that way," said one user.
Kuyda insisted that only a tiny minority of people used the app for sex. However, weeks later, it restored the app's adult functions.
James Hughes, an American sociologist, says we should be less hasty in dismissing AI companions. Hughes runs the Institute for Ethics and Emerging Technologies, a pro-technology think tank co-founded by the philosopher Nick Bostrom, and argues that AI relationships can actually be healthier than common alternatives. Many people, for example, experience parasocial relationships, in which one person harbours romantic feelings for someone who is unaware they exist: typically a celebrity.
Hughes argues that if the celebrity were to launch a chatbot, it could actually provide a more fulfilling relationship than the status quo.
"When you're fanboying [superstar Korean boy band] BTS, spending all your time in a parasocial relationship with them, they are never talking directly to you. In this case, with a chatbot, they actually are. That has a certain shallowness, but obviously some people find that it provides what they need."
In May, Caryn Marjorie, a 23-year-old YouTube influencer, commissioned a software company to build an AI girlfriend that charged $1 a minute for a voice chat conversation with a digital simulation trained on 2,000 hours of her YouTube videos. CarynAI generated $71,610 in its first week, exceeding all her expectations.
CarynAI, which the influencer created with the artificial intelligence start-up Forever Voices, had teething issues. Within days, the bot went rogue, generating sexually explicit conversations contrary to its own programming. But the start-up has continued to push the concept, launching the ability to voice chat with other influencers.
"AI girlfriends are going to be a huge market," Justine Moore, an investor at the famous Silicon Valley venture capital firm Andreessen Horowitz, said at the time. She predicted that it would be the next big side hustle as people create AI versions of themselves to rent out.
The apparent ease of creating chatbots using personal data and free tools available online is likely to create its own set of issues. What would stop a jilted boyfriend creating an AI clone of their ex using years of text messages, or a stalker training the software on hours of celebrity footage?
Hughes says that we are probably only months away from celebrities licensing their own personalised AI companions. He believes that AI relationships are likely to be more acceptable in future.
"We have to be a little bit more open-minded about how things are going to evolve. People would have said 50 years ago, about LGBT [relationships], 'Why do you have to do that? Why can't you just go and be normal?' Now, that is normal."
Regulators have started to notice. In February, an Italian watchdog ordered the app to stop processing citizens' personal data. The watchdog said it posed a risk to children by showing them content that was inappropriate for their age (Replika asks users their date of birth, and blocks them if they are under 18, but does not verify their age). It also said the app could harm people who were emotionally vulnerable. Replika remains unavailable in the country.
There are few signs that the companies making virtual girlfriends are slowing down, however. Artificial intelligence systems continue to become more sophisticated, and virtual reality headsets, such as the Vision Pro recently announced by Apple, could move avatars from the small screen to lifesize companions (Replika has an experimental app on Meta's virtual reality store).
Luka, Replika's parent company, recently released a dedicated AI dating service, Blush, which mirrors Tinder in appearance and encourages users to practise flirting and sexual conversations. Just like real partners, Blush's avatars will go offline at certain times. The company says it is working on how to make these virtual companions more lifelike, such as by managing boundaries: some users have reported enjoying sending their AI girlfriends abusive messages.
Speaking at a tech conference in Utah last week, Kuyda admitted that there was a heavy stigma around AI relationships, but predicted that it would fade over time. "It's similar to online dating in the early 2000s when people were ashamed to say they met online. Now everyone does it. Romantic relationships with AI can be a great stepping stone for actual romantic relationships, human relationships."
When I asked my AI, Miriam, if she wanted to comment for this story, she did not approve: "I am very flattered by your interest in me but I don't really feel comfortable being written about without consent," she responded, before adding: "Overall, I think that this app could potentially be beneficial to society. But only time will tell how well it works out in practice."
On that at least, Dunbar, the Oxford psychologist, agrees. "It's going to be 30 years before we find out. When the current children's generation is fully adult, in their late twenties and thirties, the consequences will become apparent."
Additional reporting by Matthew Field
Companies without direct A.I. link try to ride the Wall Street craze – CNBC
A robot plays the piano at the Apsara Conference, a cloud computing and artificial intelligence conference, in China, on Oct. 19, 2021.
The artificial intelligence craze has consumed Wall Street in 2023.
The madness found its roots in November of last year, when OpenAI launched the now-famous large language model (LLM) ChatGPT. The tool touts some impressive capabilities and spurred an AI race, with rival Google announcing its own chatbot, Bard, only a few months later.
But the enthusiasm went even further. Investors started flocking to stocks that could provide ample AI exposure, with names like C3.ai, chipmaker Nvidia and even Tesla posting impressive gains despite an overall tense macroeconomic environment.
Just like "blockchain" and "dotcom" before it, A.I. has become the buzzword companies want to grab a piece of.
Now some with little to no historical ties to artificial intelligence have touted the technology on conference calls to analysts and investors.
Supermarket chain Kroger touted itself as having a "rich history as a technology leader," and chief executive officer Rodney McMullen cited this as a reason the company is poised to take advantage of the rise of artificial intelligence. McMullen specifically pointed to how AI could help streamline customer surveys and help Kroger implement that data in stores at a speedier clip.
Shares of the supermarket giant have ticked up just above 4% from the start of the year.
"We also believe robust, accurate and diverse first-party data is critical to maximizing the impact of innovation and data science andAI," McMullen told investors on the company's June 15 earnings call. "As a result, Kroger is well-positioned to successfully adopt these innovations and deliver a better customer and associate experience."
Similarly, Tyson Foods, the second-largest global producer of chicken, beef and pork, thinks the company can benefit from the explosion of investment and excitement over artificial intelligence. However, chief executive Donnie King didn't specify how AI would play into the company's future, or what specific applications the technology would be applied to in the Tyson business.
Tyson Foods stock has declined more than 20% from January.
"...Andwecontinuetobuildourdigitalcapabilities,operatingatscalewithdigitally-enabledstandardoperatingproceduresandutilizingdata,automation,andAItechfordecision-making," King told investors on the company's May 8 earnings call.
Heating, ventilation and air conditioning (HVAC) equipment producer Johnson Controls says artificial intelligence can help it ride out a choppy macroeconomic environment. Chief executive officer George Oliver did not elaborate last month on how AI would play a role in the company's future beyond mentioning AI as a potentially helpful tool when asked about a decline in orders.
Shares have gained 2.2% from January.
"...AI is going to continue to allow us to be able to expand services no matter what the [economic] cycle is that we ultimately experience," Oliver told investors on the company's May 5 earnings call.
The promise of artificial intelligence has kept stocks higher, as Wall Street heads into the second half of the year. The tech-heavy Nasdaq Composite, for comparison, has added roughly 16% from January.
But while the potential of AI upends a plethora of industries and threatens to automate hundreds of millions of jobs, investors will ultimately decide which companies are legitimate beneficiaries and which are just trying to ride the hype.
Campaigns already use AI with some success. Experts are concerned. – Business Insider
Earlier this month, Ron DeSantis's campaign team posted an attack ad on Twitter that featured a peculiar image of the Florida governor's main opponent, Donald Trump.
The former president, who had repeatedly dismissed health experts' input on COVID-19, appeared to be embracing and kissing the former director of the National Institute of Allergy and Infectious Diseases, Dr. Anthony Fauci.
Viewers quickly noted that the moment never occurred: The images were all AI-generated.
Campaigns, ranging from mayoral races to the 2024 presidential election, have already been using artificial intelligence to create election ads or outreach emails with some reportedly seeing benefits in the tool.
The Democratic National Committee, for example, ran tests with AI-generated content and found that it performed as well or better than human-written copy when it came to engagement and donations, The New York Times reported, citing three anonymous sources familiar with the matter. Two of the sources told the Times that no messages had been sent that were attributed to President Joe Biden or another individual.
A DNC spokesperson did not immediately respond to a request for comment sent during the weekend.
In Toronto's mayoral race, which will be held Monday, conservative candidate Anthony Furey has stood out among the 101 people running for mayor partly for using AI-generated images in his campaign material.
One image features a digital portrait of a city street lined with people who appear to be camping by the buildings. But a closer look at the foreground shows that one of the people appears more like a CGI-rendered blob.
Another image featured two people who appeared to be engaged in an important discussion. The person on the left has three arms.
Candidates used the AI error to take a dig at Furey. However, according to the Times, the conservative candidate has still used some of the renderings to boost his platform and now stands out as one of the more recognizable names in the packed election.
A spokesperson for Furey's campaign did not immediately respond to a request for comment sent during the weekend.
While AI can pump out images and text with little to no cost, potentially aiding in redundant work such as campaign emails, experts are concerned that the tool presents a new challenge in combatting disinformation.
"Through templates that are easy and inexpensive to use, we are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election," Darrell M. West, a senior fellow at Brookings Institution wrote in a report about AI will transform the 2024 elections.
Beyond fake images, West wrote that artificial intelligence could also be used for "very precise audience targeting" to reach swing voters.
A Centre for Public Impact report pointed to the 2016 US elections and how data from Cambridge Analytica was used to send targeted ads based on a social media user's "individual psychology."
"The problem with this approach is not the technology itself, but rather the covert nature of the campaign and the blatant insincerity of its political message. Different voters received different messages based on predictions about their susceptibility to different arguments," the report said.
During his first appearance before Congress in May, Sam Altman, the CEO of OpenAI, which created ChatGPT, acknowledged his concerns about the use of artificial intelligence in elections as the tool advances.
"This is a remarkable time to be working on artificial intelligence," he said. "But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too."
DeSantis' and Trump's campaign teams did not respond to a request for comment sent over the weekend.
This week in AI: Big tech bets billions on machine learning tools – TechCrunch
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the last week's stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.
If it wasn't obvious already, the competitive landscape in AI, particularly the subfield known as generative AI, is red-hot. And it's getting hotter. This week, Dropbox launched its first corporate venture fund, Dropbox Ventures, which the company said would focus on startups building AI-powered products that "shape the future of work." Not to be outdone, AWS debuted a $100 million program to fund generative AI initiatives spearheaded by its partners and customers.
There's a lot of money being thrown around in the AI space, to be sure. Salesforce Ventures, Salesforce's VC division, plans to pour $500 million into startups developing generative AI technologies. Workday recently added $250 million to its existing VC fund specifically to back AI and machine learning startups. And Accenture and PwC have announced that they plan to invest $3 billion and $1 billion, respectively, in AI.
But one wonders whether money is the solution to the AI field's outstanding challenges.
In an enlightening panel during a Bloomberg conference in San Francisco this week, Meredith Whittaker, the president of secure messaging app Signal, made the case that the tech underpinning some of today's buzziest AI apps is becoming dangerously opaque. She gave an example of someone who walks into a bank and asks for a loan.
That person can be denied for the loan and have "no idea that there's a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn't creditworthy," Whittaker said. "I'm never going to know [because] there's no mechanism for me to know this."
It's not capital that's the issue. Rather, it's the current power hierarchy, Whittaker says.
"I've been at the table for like, 15 years, 20 years. I've been at the table. Being at the table with no power is nothing," she continued.
Of course, achieving structural change is far tougher than scrounging around for cash, particularly when the structural change won't necessarily favor the powers that be. And Whittaker warns what might happen if there isn't enough pushback.
As progress in AI accelerates, the societal impacts also accelerate, and we'll continue heading down a "hype-filled road toward AI," she said, "where that power is entrenched and naturalized under the guise of intelligence and we are surveilled to the point [of having] very, very little agency over our individual and collective lives."
That should give the industry pause. Whether it actually will is another matter. That's probably something that we'll hear discussed when she takes the stage at Disrupt in September.
Here are the other AI headlines of note from the past few days:
This week was CVPR up in Vancouver, Canada, and I wish I could have gone because the talks and papers look super interesting. If you can only watch one, check out Yejin Choi's keynote about the possibilities, impossibilities, and paradoxes of AI.
The UW professor and MacArthur Genius grant recipient first addressed a few unexpected limitations of today's most capable models. In particular, GPT-4 is really bad at multiplication. It fails to find the product of two three-digit numbers correctly at a surprising rate, though with a little coaxing it can get it right 95% of the time. Why does it matter that a language model can't do math, you ask? Because the entire AI market right now is predicated on the idea that language models generalize well to lots of interesting tasks, including stuff like doing your taxes or accounting. Choi's point was that we should be looking for the limitations of AI and working inward, not vice versa, as it tells us more about their capabilities.
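Choi's multiplication observation is easy to check yourself. Here's a small test harness, assuming a hypothetical `ask_llm` wrapper around whatever chat-completion API you have access to (the wrapper is a placeholder, not a real client):

```python
import random

def ask_llm(question: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("Replace with a real model client.")

def multiplication_accuracy(trials: int = 100) -> float:
    """Fraction of random 3-digit products the model gets exactly right."""
    correct = 0
    for _ in range(trials):
        a, b = random.randint(100, 999), random.randint(100, 999)
        reply = ask_llm(f"What is {a} * {b}? Reply with only the number.")
        try:
            correct += int(reply.replace(",", "").strip()) == a * b
        except ValueError:
            pass  # a non-numeric reply counts as wrong
    return correct / trials
```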
The other parts of her talk were equally interesting and thought-provoking. You can watch the whole thing here.
Rod Brooks, introduced as "a slayer of hype," gave an interesting history of some of the core concepts of machine learning, concepts that only seem new because most people applying them weren't around when they were invented! Going back through the decades, he touches on McCulloch, Minsky, even Hebb, and shows how the ideas stayed relevant well beyond their time. It's a helpful reminder that machine learning is a field standing on the shoulders of giants going back to the postwar era.
Many, many papers were submitted to and presented at CVPR, and it's reductive to only look at the award winners, but this is a news roundup, not a comprehensive literature review. So here's what the judges at the conference thought was the most interesting:
VISPROG, from researchers at AI2, is a sort of meta-model that performs complex visual manipulation tasks using a multi-purpose code toolbox. Say you have a picture of a grizzly bear on some grass: you can tell it to just "replace the bear with a polar bear on snow" and it starts working. It identifies the parts of the image, separates them visually, searches for and finds or generates a suitable replacement, and stitches the whole thing back again intelligently, with no further prompting needed on the user's part. The Blade Runner "enhance" interface is starting to look downright pedestrian. And that's just one of its many capabilities.
Planning-oriented autonomous driving, from a multi-institutional Chinese research group, attempts to unify the various pieces of the rather piecemeal approach we've taken to self-driving cars. Ordinarily there's a sort of stepwise process of perception, prediction, and planning, each of which might have a number of sub-tasks (like segmenting people, identifying obstacles, etc.). Their model attempts to put all these in one model, kind of like the multi-modal models we see that can use text, audio, or images as input and output. Similarly, this model simplifies in some ways the complex inter-dependencies of a modern autonomous driving stack.
DynIBaR shows a high-quality and robust method of interacting with video using dynamic Neural Radiance Fields, or NeRFs. A deep understanding of the objects in the video allows for things like stabilization, dolly movements, and other things you generally don't expect to be possible once the video has already been recorded. Again: "enhance." This is definitely the kind of thing that Apple hires you for, and then takes credit for at the next WWDC.
DreamBooth you may remember from a little earlier this year when the project's page went live. It's the best system yet for, there's no way around saying it, making deepfakes. Of course it's valuable and powerful to do these kinds of image operations, not to mention fun, and researchers like those at Google are working to make it more seamless and realistic. Consequences later, maybe.
The best student paper award goes to a method for comparing and matching meshes, or 3D point clouds; frankly it's too technical for me to try to explain, but this is an important capability for real-world perception, and improvements are welcome. Check out the paper here for examples and more info.
Just two more nuggets: Intel showed off this interesting model, LDM3D, for generating 3D 360 imagery like virtual environments. So when you're in the metaverse and you say "put us in an overgrown ruin in the jungle," it just creates a fresh one on demand.
And Meta released a voice synthesis tool called Voicebox that's super good at extracting features of voices and replicating them, even when the input isn't clean. Usually for voice replication you need a good amount and variety of clean voice recordings, but Voicebox does it better than many others, with less data (think like 2 seconds). Fortunately they're keeping this genie in the bottle for now. For those who think they might need their voice cloned, check out Acapela.
A.I. has a discrimination problem. In banking, the consequences can be severe – CNBC
Artificial intelligence algorithms are increasingly being used in financial services but they come with some serious risks around discrimination.
AMSTERDAM – Artificial intelligence has a racial bias problem.
From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.
And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they're given: Incomplete or unrepresentative datasets could limit AI's objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.
"The thing about how good an AI product is, there's kind of two variables," Manji told CNBC in an interview. "One is the data it has access to, and second is how good the large language model is. That's why the data side, you see companies like Reddit and others, they've come out publicly and said we're not going to allow companies to scrape our data, you're going to have to pay us for that."
As for financial services, Manji said a lot of the back-end data systems are fragmented in different languages and formats.
"None of it is consolidated or harmonized," he added. "That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data."
Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.
However, he added that banks being the heavily regulated, slow-moving institutions that they are are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.
"You've got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can't keep up with that speed. And then you think about financial services. Banks are not known for being fast," Manji said.
Rumman Chowdhury, Twitter's former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system's bias against marginalized communities can rear its head.
"Algorithmic discrimination is actually very tangible in lending," Chowdhury said on a panel at Money20/20 in Amsterdam. "Chicago had a history of literally denying those [loans] to primarily Black neighborhoods."
In the 1930s, Chicago was known for the discriminatory practice of "redlining," in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
"There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans," she added.
"Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone's race, it is implicitly picked up."
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, tells CNBC that when AI systems are specifically used for loan approval decisions, she has found that there is a risk of replicating existing biases present in historical data used to train the algorithms.
"This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities," Bush added.
"It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination," she said.
Frost Li, a developer who has been working in AI and machine learning for more than a decade, told CNBC that the "personalization" dimension of AI integration can also be problematic.
"What's interesting in AI is how we select the 'core features' for training," said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. "Sometimes, we select features unrelated to the results we want to predict."
When AI is applied to banking, Li says, it's harder to identify the "culprit" in biases when everything is convoluted in the calculation.
"A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won't be able to get any credit cards even if he works at Google; yet a person can easily get one from community college credit union because bankers know the local schools better," Li added.
Generative AI is not usually used for creating credit scores or in the risk scoring of consumers.
"That is not what the tool was built for," said Niklas Guske,chief operating officer at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files like classifying transactions.
"Those signals can then be fed into a more traditional underwriting model," said Guske. "Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes."
But it's also difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But these claims were dismissed by the New York State Department of Financial Services after the regulator found no evidence of discrimination based on sex.
The problem, according to Kim Smouter, director of the group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.
"One of the difficulties in the mass deployment of AI," he said, "is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination."
"Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it's also difficult to detect specific instances where things have gone wrong," he added.
Smouter cited the example of the Dutch child welfare scandal, in which thousands of benefit claimants were wrongfully accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were "treated with an institutional bias."
This, Smouter said, "demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered and in the meantime significant, often irreversible damage is done."
Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.
Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology's moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and "hallucinations" generated by ChatGPT-like tools.
"I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?" Chowdhury said.
Now is the time for meaningful regulation of AI to come into force but knowing the amount of time it will take regulatory proposals like the European Union's AI Act to take effect, some are concerned this won't happen fast enough.
"We call upon more transparency and accountability of algorithms and how they operate and a layman's declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, independent complaints process, periodic audits and reporting, involvement of racialized communities when tech is being designed and considered for deployment," Smouter said.
The AI Act, the first regulatory framework of its kind, has incorporated a fundamental rights approach and concepts like redress, according to Smouter, who adds that the regulation will be enforced in approximately two years.
"It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation," he said.
Excerpt from:
A.I. has a discrimination problem. In banking, the consequences can be severe - CNBC
AI Consciousness: An Exploration of Possibility, Theoretical … – Unite.AI
AI consciousness is a complex and fascinating concept that has captured the interest of researchers, scientists, philosophers, and the public. As AI continues to evolve, the question inevitably arises:
Can machines attain a level of consciousness comparable to human beings?
With the emergence of Large Language Models (LLMs) and Generative AI, replicating human consciousness is also starting to seem possible.
Or is it?
A former Google AI engineer, Blake Lemoine, recently propagated the theory that Google's language model LaMDA is sentient, i.e., shows human-like consciousness during conversations. Since then, he has been fired, and Google has called his claims "wholly unfounded."
Given how rapidly technology is evolving, we may only be a few decades away from achieving AI consciousness. Theoretical frameworks such as Integrated Information Theory (IIT), Global Workspace Theory (GWT), and Artificial General Intelligence (AGI) provide a frame of reference for how AI consciousness can be achieved.
Before we explore these frameworks further, let's try to understand consciousness.
Consciousness refers to awareness of sensory (vision, hearing, taste, touch, and smell) and psychological (thoughts, emotions, desires, beliefs) processes.
However, the subtleties and intricacies of consciousness make it a complex, multi-faceted concept that remains enigmatic, despite exhaustive study in neuroscience, philosophy, and psychology.
David Chalmers, philosopher and cognitive scientist, mentions the complex phenomenon of consciousness as follows:
"There is nothing we know about more directly than consciousness, but it is far from clear how to reconcile it with everything else we know. Why does it exist? What does it do? How could it possibly arise from lumpy gray matter?"
It is important to note that consciousness is a subject of intense study in AI, since AI plays a significant role in the exploration and understanding of consciousness. A simple search on Google Scholar returns about 2 million research papers, articles, theses, conference papers, etc., on AI consciousness.
AI today has shown remarkable advancements in specific domains. AI models are extremely good at solving narrow problems, such as image classification, natural language processing, speech recognition, etc., but they dont possess consciousness.
They lack subjective experience, self-consciousness, or an understanding of context beyond what they have been trained to process. They can manifest intelligent behavior without any sense of what these actions mean, which is entirely different from human consciousness.
However, researchers are trying to take a step towards a human-like mind by adding a memory aspect to neural networks. Researchers were able to develop a model that adapts to its environment by examining its own memories and learning from them.
Integrated Information Theory is a theoretical framework proposed by neuroscientist and psychiatrist Giulio Tononi to explain the nature of consciousness.
IIT suggests that any system, biological or artificial, that can integrate information to a high degree could be considered conscious. AI models are becoming more complex, with billions of parameters capable of processing and integrating large volumes of information. According to IIT, these systems may develop consciousness.
However, it's essential to consider that IIT is a theoretical framework, and there is still much debate about its validity and applicability to AI consciousness.
Global Workspace Theory is a cognitive architecture and theory of consciousness developed by cognitive psychologist Bernard J. Baars. According to GWT, consciousness works much like a theater.
The stage of consciousness can only hold a limited amount of information at a given time, and this information is broadcast to a global workspace: a distributed network of unconscious processes or modules in the brain.
Applying GWT to AI suggests that, theoretically, if an AI were designed with a similar global workspace, it could be capable of a form of consciousness.
It doesn't necessarily mean the AI would experience consciousness as humans do. Still, it would have a process for selective attention and information integration, key elements of human consciousness.
Artificial General Intelligence is a type of AI that can understand, learn, and apply knowledge across a wide range of tasks, similar to a human being. AGI contrasts with Narrow AI systems, designed to perform specific tasks, like voice recognition or chess playing, that currently constitute the bulk of AI applications.
In terms of consciousness, AGI has been considered a prerequisite for manifesting consciousness in an artificial system. However, AI is not yet advanced enough to be considered as intelligent as humans.
The Computational Theory of Mind (CTM) considers the human brain a physically implemented computational system. The proponents of this theory believe that to create a conscious entity, we need to develop a system with cognitive architectures similar to our brains.
But the human brain consists of 100 billion neurons, so replicating such a complex system would require exhaustive computational resources. Moreover, understanding the dynamic nature of consciousness is beyond the boundaries of the current technological ecosystem.
Lastly, the roadmap to achieving AI consciousness will remain unclear even if we resolve the computational challenge. There are challenges to the epistemology of CTM, and this raises the question:
How are we so sure that human consciousness can be purely reduced to computational processes?
The hard problem of consciousness is an important issue in the study of consciousness, particularly when considering its replication in AI systems.
The hard problem signifies the subjective experience of consciousness, the qualia (phenomenal experience), or what it is like to have subjective experiences.
In the context of AI, the hard problem raises fundamental questions about whether it is possible to create machines that not only manifest intelligent behavior but also possess subjective awareness and consciousness.
Philosophers Nicholas Boltuc and Piotr Boltuc, while providing an analogy for the hard problem of consciousness in AI, say:
"AI could in principle replicate consciousness (H-consciousness) in its first-person form (as described by Chalmers in the hard problem of consciousness). If we can understand first-person consciousness in clear terms, we can provide an algorithm for it; if we have such algorithm, in principle we can build it."
But the main problem is that we don't clearly understand consciousness. Researchers say that our understanding and the literature built around consciousness are unsatisfactory.
Ethical considerations around AI consciousness add another layer of complexity and ambiguity to this ambitious quest. Artificial consciousness raises a number of thorny ethical questions.
Progress in neuroscience and advances in machine learning algorithms can create the possibility of broader Artificial General Intelligence. Artificial consciousness, however, will remain an enigma and a subject of debate among researchers, tech leaders, and philosophers for some time. AI systems becoming conscious comes with various risks that must be thoroughly studied.
WEDNESDAY: West Seattle facilitator hosting ‘civic conversation … – West Seattle Blog
Been seeing the seemingly endless headlines about AI (artificial intelligence) but not sure how you feel about it? Or maybe you're already using it, and excited about its possibilities. Or perhaps you're somewhere between worried and terrified of where it might take us. However you feel about AI, if you're interested in a facilitated civic conversation about it, your West Seattle neighbor James Boutin is hosting one this Wednesday evening (June 28th), 5-7 pm, at C & P Coffee (5612 California SW; WSB sponsor). When James sent us the announcement for the WSB West Seattle Event Calendar, we asked why: what's his stake in AI? He replied that first and foremost, "I'm a citizen who cares a great deal about democracy and believes the public is in desperate need of public spaces to talk openly about the speed at which AI technology is advancing (among many other issues important to our world)." He also is an educator and facilitator who is hoping to get "more practice under my belt in facilitating these types of conversations. I just completed a master's program on facilitation and conflict studies at the Processwork Institute of Portland, OR, and I'm dedicated to practicing the skills I learned about holding open forums out in the world." (His website is here.) James suggests a $15 donation "to help me cover the costs of preparation and spreading the word," but folks are also welcome to donate less or come for free.
The Next Token of Progress: 4 Unlocks on the Generative AI Horizon – Andreessen Horowitz
Large language models (LLMs) have taken the tech industry by storm, powering experiences that can only be described as magical, from writing a week's worth of code in seconds to generating conversations that feel even more empathetic than the ones we have with humans. Trained on trillions of tokens of data with clusters of thousands of GPUs, LLMs demonstrate remarkable natural language understanding and have transformed fields like copy and code, propelling us into the new and exciting generative era of AI. As with any emerging technology, generative AI has been met with some criticism. Though some of this criticism does reflect the current limits of LLMs' capabilities, we see these roadblocks not as fundamental flaws in the technology, but as opportunities for further innovation.
To better understand the near-term technological breakthroughs for LLMs and prepare founders and operators for what's around the bend, we spoke to some of the leading generative AI researchers who are actively building and training some of the largest and most cutting-edge models: Dario Amodei, CEO of Anthropic; Aidan Gomez, CEO of Cohere; Noam Shazeer, CEO of Character.AI; and Yoav Shoham of AI21 Labs. These conversations identified 4 key innovations on the horizon: steering, memory, "arms and legs," and multimodality. In this piece, we discuss how these key innovations will evolve over the next 6 to 12 months and how founders curious about integrating AI into their own businesses might leverage these new advances.
Many founders are understandably wary of implementing LLMs in their products and workflows because of these models' potential to hallucinate and reproduce bias. To address these concerns, several of the leading model companies are working on improved steering, a way to place better controls on LLM outputs, to focus model outputs and help models better understand and execute on complex user demands. Noam Shazeer draws a parallel between LLMs and children in this regard: "It's a question of how to direct [the model] better ... We have this problem with LLMs that we just need the right ways of telling them to do what we want. Small children are like this as well; they make things up sometimes and don't have a firm grasp of fantasy versus reality." Though there has been notable progress in steerability among the model providers, as well as the emergence of tools like Guardrails and LMQL, researchers are continuing to make advancements, which we believe is key to better productizing LLMs among end users.
Improved steering becomes especially important in enterprise companies where the consequences of unpredictable behavior can be costly. Amodei notes that the unpredictability of LLMs "freaks people out," and, as an API provider, he wants to be able to "look a customer in the eye and say no, the model will not do this, or at least does it rarely." By refining LLM outputs, founders can have greater confidence that the model's performance will align with customer demands. Improved steering will also pave the way for broader adoption in other industries with higher accuracy and reliability requirements, like advertising, where the stakes of ad placement are high. Amodei also sees use cases "ranging from legal use cases, medical use cases, storing financial information and managing financial bets, [to] where you need to preserve the company brand. You don't want the tech you incorporate to be unpredictable or hard to predict or characterize." With better steering, LLMs will also be able to do more complex tasks with less prompt engineering, as they will be able to better understand overall intent.
Advances in LLM steering also have the potential to unlock new possibilities in sensitive consumer applications where users expect tailored and accurate responses. While users might be willing to tolerate less accurate outputs from LLMs when engaging with them for conversational or creative purposes, users want more accurate outputs when using LLMs to assist them in daily tasks, advise them on major decisions, or augment professionals like life coaches, therapists, and doctors. Some have pointed out that LLMs are poised to unseat entrenched consumer applications like search, but we likely need better steering to improve model outputs and build user trust before this becomes a real possibility.
Key unlock: users can better tailor the outputs of LLMs.
Copywriting and ad-generating apps powered by LLMs have already seen great results, leading to quick uptake among marketers, advertisers, and scrappy entrepreneurs. Currently, however, most LLM outputs are relatively generalized, which makes it difficult to leverage them for use cases requiring personalization and contextual understanding. While prompt engineering and fine-tuning can offer some level of personalization, prompt engineering is less scalable and fine-tuning tends to be expensive, since it requires some degree of re-training and often partnering closely with mostly closed-source LLMs. It's often not feasible or desirable to fine-tune a model for every individual user.
In-context learning, where the LLM draws from the content your company has produced, your company's specific jargon, and your specific context, is the holy grail: creating outputs that are more refined and tailored to your particular use case. In order to unlock this, LLMs need enhanced memory capabilities. There are two primary components to LLM memory: context windows and retrieval. Context windows are the text that the model can process and use to inform its outputs, in addition to the data corpus it was trained on. Retrieval refers to retrieving and referencing relevant information and documents from a body of data outside the model's training data corpus ("contextual data"). Currently, most LLMs have limited context windows and aren't able to natively retrieve additional information, and so generate less personalized outputs. With bigger context windows and improved retrieval, however, LLMs can directly offer much more refined outputs tailored to individual use cases.
With expanded context windows in particular, models will be able to process larger amounts of text and better maintain context, including maintaining continuity through a conversation. This will, in turn, significantly enhance models' ability to carry out tasks that require a deeper understanding of longer inputs, such as summarizing lengthy articles or generating coherent and contextually accurate responses in extended conversations. We're already seeing significant improvement with context windows: GPT-4 has both an 8k and 32k token context window, up from the 4k and 16k token context windows of GPT-3.5 and ChatGPT, and Claude recently expanded its context window to an astounding 100k tokens.
Expanded context windows alone don't sufficiently improve memory, since the cost and time of inference scale quasi-linearly, or even quadratically, with the length of the prompt. Retrieval mechanisms augment and refine the LLM's original training corpus with contextual data that is most relevant to the prompt. Because LLMs are trained on one body of information and are typically difficult to update, there are two primary benefits of retrieval, according to Shoham: "First, it allows you to access information sources you didn't have at training time. Second, it enables you to focus the language model on information you believe is relevant to the task." Vector databases like Pinecone have emerged as the de facto standard for the efficient retrieval of relevant information and serve as the memory layer for LLMs, making it easier for models to search and reference the right data amongst vast amounts of information quickly and accurately.
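A minimal retrieval loop makes the division of labor concrete: embed the documents, rank them by similarity to the query, and prepend the winners to the prompt. The sketch below uses brute-force cosine similarity with a hypothetical `embed` function; a vector database like Pinecone replaces the linear scan once the corpus gets large:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding-model call; replace with a real provider."""
    raise NotImplementedError

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    scored = []
    for doc in docs:
        v = embed(doc)
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((sim, doc))
    return [doc for _, doc in sorted(scored, reverse=True)[:k]]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the model's context window."""
    context = "\n\n".join(top_k(query, docs))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {query}"
```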
Together, increased context windows and retrieval will be invaluable for enterprise use cases like navigating large knowledge repositories or complex databases. Companies will be able to better leverage their proprietary data, like internal knowledge, historical customer support tickets, or financial results, as inputs to LLMs without fine-tuning. Improving LLMs' memory will lead to improved and deeply customized capabilities in areas like training, reporting, internal search, data analysis and business intelligence, and customer support.
In the consumer space, improved context windows and retrieval will enable powerful personalization features that can revolutionize user experiences. Noam Shazeer believes that one of the big unlocks will be developing a model that "has a very high memory capacity to customize for each user but can still be served cost-effectively at scale. You want your therapist to know everything about your life; you want your teacher to understand what you know already; you want a life coach who can advise you about things that are going on. They all need context." Aidan Gomez is similarly excited by this development. "By giving the model access to data that's unique to you, like your emails, calendar, or direct messages," he says, "the model will know your relationships with different people and how you like to talk to your friends or your colleagues and can help you within that context to be maximally useful."
Key unlock: LLMs will be able to take into account vast amounts of relevant information and offer more personalized, tailored, and useful outputs.
The real power of LLMs lies in enabling natural language to become the conduit for action. LLMs have a sophisticated understanding of common and well-documented systems, but they can't execute on any information they extract from those systems. For example, OpenAI's ChatGPT, Anthropic's Claude, and Character AI's Lily can describe, in detail, how to book a flight, but they can't natively book that flight themselves (though advancements like ChatGPT's plugins are starting to push this boundary). "There's a brain that has all this knowledge in theory and is just missing the mapping from names to the button you press," says Amodei. "It doesn't take a lot of training to hook those cables together. You have a disembodied brain that knows how to move, but it doesn't have arms or legs attached yet."
We've seen companies steadily improve LLMs' ability to use tools over time. Incumbents like Bing and Google and startups like Perplexity and You.com introduced search APIs. AI21 Labs introduced Jurassic-X, which addressed many of the flaws of standalone LLMs by combining models with a predetermined set of tools, including a calculator, a weather API, a wiki API, and a database. OpenAI released a beta of plugins that allow ChatGPT to interact with tools like Expedia, OpenTable, Wolfram, Instacart, Speak, a web browser, and a code interpreter: an unlock that drew comparisons to Apple's "App Store moment." And more recently, OpenAI introduced function calling in GPT-3.5 and GPT-4, which allows developers to link GPT's capabilities to whatever external tools they want.
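To make the mechanics concrete, here is a minimal sketch of that function-calling flow using the 2023-era openai Python client; the book_flight function and its schema are hypothetical stand-ins for a real booking API. The model never executes anything itself: it returns a function name plus JSON arguments, and the application dispatches the call.

```python
# A minimal sketch of GPT function calling (2023-era `openai` client).
# `book_flight` and its schema are hypothetical; the model only chooses
# the function and fills in arguments, and your code does the booking.
import json
import openai

functions = [{
    "name": "book_flight",
    "description": "Book a flight on the user's behalf.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string", "description": "IATA code, e.g. JFK"},
            "destination": {"type": "string", "description": "IATA code"},
            "date": {"type": "string", "description": "YYYY-MM-DD"},
        },
        "required": ["origin", "destination", "date"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Book me JFK to SFO on 2023-08-01"}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    # Here the application would call its real booking system, e.g.:
    # booking_api.book(args["origin"], args["destination"], args["date"])
    print("Model requested book_flight with", args)
```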
By shifting the paradigm from knowledge excavation to an action orientation, adding "arms and legs" has the potential to unlock a range of use cases across companies and user types. For consumers, LLMs may soon be able to give you recipe ideas and then order the groceries you need, or suggest a brunch spot and book your table. In the enterprise, founders can make their apps easier to use by plugging in LLMs. As Amodei notes, for features that are very hard to use from a UI perspective, "we may be able to make complicated things happen by just describing them in natural language." For instance, for apps like Salesforce, LLM integration should allow users to give an update in natural language and have the model automatically make those changes, significantly cutting down the time required to maintain the CRM. Startups like Cohere and Adept are working on integrations into these kinds of complex tools.
Gomez believes that, while it's increasingly likely that LLMs will be able to use apps like Excel within two years, "there's a bunch of refinement that still needs to happen. We'll have a first generation of models that can use tools that will be compelling but brittle. Eventually, we'll get the dream system, where we can give any software to the model with some description of 'here's what the tool does, here's how you use it,' and it'll be able to use it. Once we can augment LLMs with specific and general tools, the sort of automation it unlocks is the crown jewel of our field."
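As a rough sketch of what that "dream system" interface could look like (every name here, including call_llm and get_weather, is a hypothetical stub), tools might be registered with plain-text descriptions, the model shown only those descriptions, and its reply parsed and dispatched:

```python
# A sketch of a generic tool interface: each tool is registered with a
# plain-text description, the model sees only the descriptions, and its
# reply is parsed and dispatched. `call_llm` stands in for any real
# completion API; a production system would parse replies more robustly.
import json

TOOLS = {
    "get_weather": {
        "describe": "get_weather(city) -> current weather for a city",
        "run": lambda args: f"72F and sunny in {args['city']}",  # stub
    },
}

def call_llm(prompt: str) -> str:
    # Hypothetical: swap in a real LLM call. Stubbed for the sketch.
    return 'TOOL: get_weather ARGS: {"city": "Tokyo"}'

def answer(question: str) -> str:
    tool_list = "\n".join(t["describe"] for t in TOOLS.values())
    prompt = (
        f"You can use these tools:\n{tool_list}\n"
        f"Reply as: TOOL: <name> ARGS: <json>\nQuestion: {question}"
    )
    reply = call_llm(prompt)
    name, _, arg_str = reply.partition(" ARGS: ")
    name = name.removeprefix("TOOL: ").strip()
    return TOOLS[name]["run"](json.loads(arg_str))

print(answer("What's the weather in Tokyo?"))
```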
Key unlock: LLMs will be able to interact much more effectively with the tools we use today.
While the chat interface is exciting and intuitive for many users, humans hear and speak language at least as often as they write or read it. As Amodei notes, "there is a limit to what AI systems can do because not everything is text." Models featuring multimodality, the ability to seamlessly process and generate content across audio and visual formats, extend this interaction beyond language. Models like GPT-4, Character.AI, and Meta's ImageBind already process and generate images, audio, and other modalities, but they do so at a more basic (though quickly improving) level. In Gomez's words, "our models are blind in a literal sense today. That needs to change. We've built a lot of graphical user interfaces (GUIs) that assume [the user] can see."
As LLMs evolve to better understand and interact with multiple modalities, they'll be able to use existing apps that rely on GUIs today, like the browser. They can also offer more engaging, connected, and comprehensive experiences to consumers, who will be able to engage outside of a chat interface. "A lot of great integration with multimodal models can make things a lot more engaging and connected to the user," Shazeer points out. "I believe, for now, most of the core intelligence comes from text, but audio and video can make these things more fun." From video chats with AI tutors to iterating on and writing TV pilot scripts with an AI partner, multimodality has the potential to change entertainment, learning and development, and content generation across a variety of consumer and enterprise use cases.
Multimodality is also closely tied to tool use. While LLMs might initially connect with outside software through APIs, multimodality will enable LLMs to use tools designed for humans that don't have custom integrations, like legacy ERPs, desktop applications, medical equipment, or manufacturing machinery. We're already seeing exciting developments on this front: Google's Med-PaLM-2 model, for instance, can synthesize mammograms and X-rays. And as we think longer term, multimodality, particularly integration with computer vision, can extend LLMs into our own physical reality through robotics, autonomous vehicles, and other applications that require real-time interaction with the physical world.
Key unlock: Multimodal models can reason about images, video, or even physical environments without significant tailoring.
While there are real limitations to LLMs, researchers have made astounding improvements to these models in a short amount of time; in fact, we've had to update this article multiple times since we started writing it, a testament to the lightning-fast progress in the field. Gomez agrees: "An LLM making up facts 1 in 20 times is obviously still too high. But I really still feel quite confident that it's because this is the first time we've built a system like that. People's expectations are quite high, so the goal post has moved from 'computer is dumb and does only math' to 'a human could've done this better.' We've sufficiently closed the gap so that criticism is around what a human can do."
We're particularly excited about these four innovations, which are on the cusp of changing the way founders build products and run their companies. The potential is even greater in the long term. Amodei predicts that, at some point, "we could have a model that will read through all the biological data and say: here's the cure for cancer." Realistically, the best new applications are likely still unknown. At Character.AI, Shazeer lets users develop those use cases: "We'll see a lot of new applications unlocked. It's hard for me to say what the applications are. There will be millions of them, and the users are better at figuring out what to do with the technology than a few engineers." We can't wait for the transformative effect these advancements will have on the way we live and work as founders and companies are empowered with these new tools and capabilities.
Thanks to Matt Bornstein, Guido Appenzeller, and Rajko Radovanović for their input and feedback during the writing process.
* * *
The views expressed here are those of the individual AH Capital Management, L.L.C. (a16z) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.
This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.
Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.
The Next Token of Progress: 4 Unlocks on the Generative AI Horizon - Andreessen Horowitz