Category Archives: AI
Cutting-edge AI raises fears about risks to humanity. Are tech and … – The Columbian
LONDON (AP) Chatbots like ChatGPT wowed the world with their ability to write speeches, plan vacations or hold a conversation as good as or arguably even better than humans do, thanks to cutting-edge artificial intelligence systems. Now, frontier AI has become the latest buzzword as concerns grow that the emerging technology has capabilities that could endanger humanity.
Everyone from the British government to top researchers and even major AI companies themselves is raising the alarm about frontier AI's as-yet-unknown dangers and calling for safeguards to protect people from its existential threats.
The debate comes to a head Wednesday, when British Prime Minister Rishi Sunak hosts a two-day summit focused on frontier AI. It's reportedly expected to draw a group of about 100 officials from 28 countries, including U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen and executives from key U.S. artificial intelligence companies including OpenAI, Google's DeepMind and Anthropic.
The venue is Bletchley Park, a former top secret base for World War II codebreakers led by Alan Turing. The historic estate is seen as the birthplace of modern computing because it is where Turing and others famously cracked Nazi Germany's codes using the world's first digital programmable computer.
In a speech last week, Sunak said only governments, not AI companies, can keep people safe from the technology's risks. However, he also noted that the U.K.'s approach is not to rush to regulate, even as he outlined a host of scary-sounding threats, such as the use of AI to more easily make chemical or biological weapons.
Fear of AI is an old, old story. Rebelling robots and evil mystery boxes have worried us for millennia – CBC.ca
The fears of rogue artificial intelligence may seem like a new concern, given recent developments such as ChatGPT and self-driving cars, but tales of sentient and potentially malevolent technology date back not just decades, but millennia.
According to historians, these themes were around long before Arnold Schwarzenegger played the role of a killer robot and travelled back in time to menace Sarah Connor in 1984's The Terminator.
"People had been thinking about these kinds of devices and inventions and innovations before the technology existed," Adrienne Mayor, a historian of ancient science and a classical folklorist at Stanford University, told Tapestry host Mary Hynes.
Stories such as Pandora in ancient Greece, the murderous rampage of a golem in Prague, and Frankenstein's monster are just some of the many dots throughout history that connect our fear of inanimate creations coming to life.
Mayor, whose 2018 book Gods and Robots explores the subject, says some of these legends come with warnings.
One of the oldest tales dates back to ancient Greece and the story of Pandora. Mayor says in the original story, told by Greek poet Hesiod, Zeus wanted to punish humankind for accepting the gift of fire.
So Zeus commissioned Hephaestus, the god of fire, blacksmiths, craftsmen and volcanoes, to create an artificial woman named Pandora, whom Zeus described as evil disguised as beauty.
"Zeus sent this lifelike fembot to Earth carrying this jar filled with misery for mortals," said Mayor. "Pandora's mission was to insinuate herself into human society and then open that jar and release all the misery."
In Hesiod's story, Pandora did just that. Prometheus's brother, Epimetheus, fell for the beauty of Pandora, despite his brother's warning. In Greek, Prometheus means "looking ahead," while Epimetheus means "hindsight."
"We've got foresight versus hindsight right there in one of the oldest myths about artificial life," said Mayor.
"Prometheans today are concerned about our future with AI and robotics, in contrast to the overly optimistic Epimetheans, who are easily dazzled by the short-term gains."
Mayor says Pandora isn't the only tale about artificial intelligence in Greek mythology. There's also the story of Talos, the first depiction of a robot-like being in Western literature. Talos was designed by Hephaestus to protect the island of Crete.
"He could pick up and hurl boulders to sink the enemy ships. And then if anyone did come ashore, he could heat his bronze body to red hot and then grab them up and hug them to himself and roast them alive," said Mayor.
But in the story of Jason and the Argonauts, the heroes were able to remove the bolt on Talos's ankle and defeat him.
"So Talos was made by technology and taken down by technology. They took out the bolt, the power source bled out and the giant robot was destroyed," said Mayor.
Amir Vudka, a lecturer at the department of media studies at the University of Amsterdam, says there are a lot of examples of inanimate objects coming to life and causing chaos, like the story of the golem of Prague.
Vudka says there are many versions of the legend, but in all of them, a rabbi uses magic to create a golem. At first, the golem is a good servant, operating as a kind of robot. In some cases, it would protect people. In other stories, it would just help the rabbi with labour. But it always goes wrong.
"The golem always gets out of control, eventually, kind of rebelling against his master [and] brings a lot of destruction, death, mayhem," said Vudka.
"What keeps repeating is that perhaps it's not a good idea to create something like this."
These stories repeat throughout culture, says Vudka. From Frankenstein's monster to the robots of Blade Runner and The Terminator, humans keep telling the tale of artificial intelligence that rebels.
"We are very afraid of the unknown. In general, I think humans are usually afraid of what they don't know, of otherness," said Vudka.
Vudka says there is an important lesson to be learned from the tale of the golem. In the story of the rabbi creating the golem, the rabbi knows the words to reverse the spell and end the golem's rampage.
"You have to know the spell to close it. Otherwise, what do you do when it goes out of control? It might be too late," said Vudka.
That's why, he says, it's important we know how to control the technology we create.
In the story of Pandora, the jar that brought misery to people serves as a black box. Mayor says people know less and less about the technology they use, and ChatGPT can similarly be considered a black box.
"There's a tendency for technology to be able to access unimaginably vast and complex data, and then make decisions based on that," said Mayor. "Both the users and the makers will be in the dark as to how those decisions were made by the AI."
Mayor says it's important that we remember that these technological advancements are tools, not new life. She says it puts the responsibility of what AI does onto the creators, not the creations themselves.
And, she says, it shouldn't all be thought of as bad or evil. She said there are also examples of myths where technology brings nothing but blessings.
In Homer's Odyssey, Odysseus uses what is basically a self-driving boat that helps him get home safely.
"There is nothing dubious about this. There's nothing bad. It's labour-saving. It fulfills his deepest wish. And these ships appear to be AI-driven ... and it's hopeful," said Mayor.
Philip Drost is a journalist with the CBC. You can reach him by email at philip.drost@cbc.ca.
FACT SHEET: Vice President Harris Announces New U.S. Initiatives … – The White House
As part of her visit to the United Kingdom to deliver a major policy speech on Artificial Intelligence (AI) and attend the Global Summit on AI Safety, Vice President Kamala Harris is announcing a series of new U.S. initiatives to advance the safe and responsible use of AI. These bold actions demonstrate U.S. leadership on AI and build upon the historic Executive Order signed by President Biden on October 30.
Since taking office, President Biden and the Vice President have moved with urgency to seize the promise and manage the risks posed by AI. The Biden-Harris Administration is working with the private sector, other governments, and civil society to uphold the highest standards to ensure that innovation does not come at the expense of the public's rights and safety.
As part of the Vice President's global work to strengthen international rules and norms, the Vice President is committed to establishing a set of rules and norms for AI, with allies and partners, that reflect democratic values and interests, including transparency, privacy, accountability, and consumer protections. Her trip to London and participation in the Global Summit on AI Safety will further advance this work.
The Vice President's trip to the United Kingdom builds on her long record of leadership to confront the challenges and seize the opportunities of advanced technology. In May, she convened the CEOs of companies at the forefront of AI innovation, resulting in voluntary commitments from 15 leading AI companies to help move toward safe, secure, and transparent development of AI technology. In July, the Vice President convened consumer protection, labor, and civil rights leaders to discuss the risks related to AI and to underscore that it is a false choice to suggest America can either advance innovation or protect consumers' rights.
As part of her visit to the United Kingdom, the Vice President is announcing the following initiatives.
Additional actions:
###
'AI can teach us a lot': scientists say cats' expressions richer than imagined and aim to translate them – The Guardian
Artificial intelligence being used to unpick meanings behind vocal and physical cues of host of creatures
If an unexpected meow, peculiar pose, or unusual twitch of the whiskers leaves you puzzling over what your cat is trying to tell you, artificial intelligence may soon be able to translate.
Scientists are turning to new technology to unpick the meanings behind the vocal and physical cues of a host of animals.
"We could use AI to teach us a lot about what animals are trying to say to us," said Daniel Mills, a professor of veterinary behavioural medicine at the University of Lincoln.
Previous work, including by Mills, has shown that cats produce a variety of facial expressions when interacting with humans, and this week researchers revealed felines have a range of 276 facial expressions when interacting with other cats.
"However, the facial expressions they produce towards humans look different from those produced towards cats," said Dr Brittany Florkiewicz, an assistant professor of psychology at Lyon College in Arkansas who co-authored the new work.
Mills said the latest research highlighted the complexity of feline facial manoeuvres, adding that new technology could help to unpick them.
"As this paper suggests, there is a much greater richness in cat expressions than we appreciate, and what AI is good at is classifying images," he said.
One approach, said Mills, was to teach AI to identify specific features such as ear position, which is already known to be important for certain emotions. Another more modern approach is to allow AI to come up with its own rules for classification. While that brings its own challenges, Mills said it could also offer fresh insights.
"It could highlight the rules it uses to distinguish data sets, which can show us where to look for the best way to distinguish certain expressions."
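The first approach Mills describes, hand-coded features such as ear position fed to a classifier, can be sketched in a few lines. Everything below is invented for illustration: the feature values, labels and nearest-neighbour rule are stand-ins, not the researchers' actual method.

```python
# Toy feature-based classification of cat expressions: each pose is a
# hand-coded feature vector (ear angle, whisker spread, mouth openness)
# with an invented emotion label, classified by nearest-neighbour vote.
from collections import Counter
import math

# (ear_angle_degrees, whisker_spread, mouth_open) -> label (all invented)
TRAINING_DATA = [
    ((10.0, 0.8, 0.1), "relaxed"),
    ((15.0, 0.7, 0.0), "relaxed"),
    ((70.0, 0.3, 0.6), "fearful"),   # ears flattened back
    ((65.0, 0.2, 0.8), "fearful"),
    ((30.0, 0.9, 0.9), "playful"),
    ((25.0, 1.0, 0.7), "playful"),
]

def classify(sample, k=3):
    """k-nearest-neighbour vote over the labelled feature vectors."""
    dists = sorted(
        (math.dist(sample, feats), label) for feats, label in TRAINING_DATA
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(classify((12.0, 0.75, 0.05)))  # a relaxed-looking pose
```

The second, more modern approach Mills mentions would instead let a model learn its own features and rules directly from images.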
Mills and colleagues are already attempting to use AI to try to tease out certain emotional states from facial expressions in cats, dogs and horses. He said there was no shortage of videos to work with, a truth universally acknowledged by anyone who has spent time on YouTube.
As well as offering us new ways to understand what our pets are trying to communicate, Mills noted AI could be used for animal welfare, for example to screen the faces of cows for signs of pain as they troop in for milking. "In effect, they can have a daily health check of how happy they are," he said.
Among those looking at such applications is Dr Elodie Briefer, an associate professor of ecology and evolution at the University of Copenhagen. Her research has shown AI can be trained to classify pig vocalisations to distinguish between pigs that are happy and those that are not. The idea, Briefer said, was that such tools could be used on farms to track the welfare of the animals.
"During an increase in negative calls, the farmer can check what's going on, or if he or she implements some new measures like enrichment, he or she can see if there is increasing positive calls, for example," she said.
Briefer added that her team were hoping in future work to couple such findings with AI-based analysis of pigs' body postures and expressions. "You can get much more information if you use AI on both vocalisations and videos to study facial expressions and body movements," she said.
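As a toy illustration of the idea, not Briefer's actual model, a welfare tool might map simple acoustic features of a call to a valence label. The features, thresholds and labels here are invented for demonstration; the real research trained machine-learning models on thousands of recordings.

```python
# Invented decision rule: short, high-pitched grunts are treated as
# positive calls, while long or low calls are treated as negative.
def call_valence(duration_s: float, mean_pitch_hz: float) -> str:
    """Classify one pig call from two toy acoustic features."""
    if duration_s < 0.4 and mean_pitch_hz > 200:
        return "positive"
    return "negative"

# A batch of invented calls, as a farmer's monitoring tool might see them
calls = [(0.2, 250), (0.9, 400), (0.3, 220)]
labels = [call_valence(d, p) for d, p in calls]
print(labels)  # a spike in "negative" labels would prompt a welfare check
```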
There are wider implications of using AI to understand animal communication, including to aid conservation.
Mills said the technology could also provide insights into more fundamental biology and psychology, including understanding the origins of some human traits.
Briefer agreed. Among other work, her team is using AI to classify vocalisations in animals such as zebras, white rhinos and parakeets to explore how they communicate. She also said researchers were using AI to explore the size of animals' call repertoires and how these calls are combined in a sort of rudimentary syntax to convey information.
"That could help us to get to know how did we come up with such huge language skills compared with our closest relatives," she said.
Prof Christian Rutz, from the University of St Andrews, also said the technology had great potential. "Machine-learning methods will transform our understanding of animal communication, creating valuable opportunities to improve wildlife conservation and animal welfare," he said.
But, as Rutz and his colleagues have recently noted, there could be potential pitfalls too, not least should researchers attempt communication with animals in their own language before such signals are fully understood. "We urgently need to agree on ethical standards for this kind of work to prevent unintended harm or misuse," he said.
Elon Musk says AI will eventually create a situation where ‘no job is needed’ – CNBC
Elon Musk, chief executive officer of Tesla Inc., at the AI Safety Summit 2023 at Bletchley Park in Bletchley, UK, on Wednesday, Nov. 1, 2023.
LONDON Elon Musk thinks that artificial intelligence could eventually put everyone out of a job.
The billionaire technology leader, who is CEO of Tesla and SpaceX, CTO and executive chairman of X (formerly known as Twitter), and owner of the newly formed AI startup xAI, said late Thursday that AI has the potential to become the "most disruptive force in history."
"We will have something that is, for the first time smarter than the smartest human," Musk said at an event at Lancaster House, an official U.K. government residence.
"It's hard to say exactly what that moment is, but there will come a point where no job is needed," Musk continued, speaking alongside British Prime Minister Rishi Sunak. "You can have a job if you wanted to have a job for personal satisfaction. But the AI would be able to do everything."
"I don't know if that makes people comfortable or uncomfortable," Musk joked, to which the audience laughed.
"If you wish for a magic genie, that gives you any wish you want, and there's no limit. You don't have those three wish limits nonsense, it's both good and bad. One of the challenges in the future will be how do we find meaning in life."
Musk's comments Thursday follow the conclusion of a landmark summit at Bletchley Park, England, where world leaders agreed to a global communique on AI that saw them find common ground on the risks the technology poses to humanity.
Technologists and political leaders used the summit to warn of the existential threats that AI poses, focusing on some of the possible doomsday scenarios that could be formed with the invention of a hypothetical superintelligence.
The summit saw the U.S. and China, two countries clashing the most tensely over technology, agree to find global consensus on how to tackle some of the more complex questions around AI, including how to develop it safely and regulate it.
Correction: Elon Musk is CEO of Tesla. An earlier version misstated his status.
Wall Street Thinks Tesla Has a $500 Billion Artificial Intelligence (AI … – The Motley Fool
Tesla (TSLA 0.66%) kicked off third-quarter earnings season a few weeks ago. CEO Elon Musk spent most of the earnings call speaking about the company's exploration of artificial intelligence (AI) and supercomputing.
However, it seems like Wall Street analysts were more concerned with the company's financial health than his comments. The combination of rising interest rates, inflation, and aggressive price cuts has indeed led to meaningful deterioration in Tesla's margins and overall profitability.
Following the report, Tesla stock has fallen 15% as of this writing, but this could be a rare opportunity to buy the dip in Tesla stock. While the near and medium-term picture still looks a little stormy, the long-term thesis continues to play out for Tesla, and very few people seem to be talking about it.
Prior to the earnings report, Morgan Stanley released an investor note that focused on Tesla's AI capabilities. More specifically, the research report suggests that Tesla's supercomputing technology, dubbed Dojo, could add $500 billion of enterprise value to the company. And it's this lesser known aspect of Tesla's business that makes the stock such a compelling buying opportunity right now.
As it stands today, Tesla's biggest source of revenue comes from selling its electric vehicles. And yet, unlike other automobile manufacturers, Tesla stock trades much more like a tech company than a car company. The underlying reason for this disparity between Tesla and its peers is the heavy investments in AI the company is making.
One of the biggest initiatives at Tesla is its autonomous driving vision, called full self-driving (FSD). At the heart of FSD is Dojo. But what is Dojo, exactly?
Dojo is Tesla's supercomputer. The cameras in Tesla's vehicles are constantly capturing loads of data from the road. In turn, this data is fed back into Tesla's core architecture (or neural network) and processed by a series of graphics processing units.
As more data is collected, the smarter this network becomes. This is a classic example of machine learning. In this case, the machine is the Tesla vehicle, and its objective is to get to a point where the car itself has the ability to recognize images on the road and mimic the behavior of a human driver. Dojo gives Tesla the ability to create software from the data it's collecting that can be integrated into its vehicles and give riders the option to have the car drive itself.
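The loop just described, collect fleet data, retrain, redeploy, can be sketched schematically. This is emphatically not Tesla's pipeline: the toy one-weight model, the invented (feature, label) pairs and the plain gradient step are stand-ins that only illustrate how a model improves as more data arrives from the fleet.

```python
# Schematic fleet-learning loop: each day's uploaded data batch is used
# to retrain a toy model (a single weight fit by gradient descent on
# squared error), mimicking "more data -> smarter network" in miniature.
def retrain(model_weight, new_samples, lr=0.1):
    """One stochastic-gradient pass over a batch of (feature, target) pairs."""
    for feature, target in new_samples:
        prediction = model_weight * feature
        error = prediction - target
        model_weight -= lr * error * feature  # gradient step on squared error
    return model_weight

weight = 0.0                       # untrained model
fleet_uploads = [                  # invented data; true relation is target = 2 * feature
    [(1.0, 2.0), (2.0, 4.0)],      # batch uploaded on day 1
    [(3.0, 6.0), (1.5, 3.0)],      # batch uploaded on day 2
]
for batch in fleet_uploads:
    weight = retrain(weight, batch)
    # ...the updated model would be deployed back to the fleet here

print(weight)  # approaches 2.0 as more fleet data is consumed
```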
While a self-driving car sounds like something out of a science fiction novel, there are many companies working on this technology. Musk posted a video earlier this year to show just how far Tesla's FSD technology has come.
So how does Morgan Stanley see Dojo opening the door to several hundred billion dollars of additional value for Tesla?
Well, should FSD prove to be the market leader in autonomous driving technology, Tesla could (in theory) begin to license the software Dojo is creating. If this happens, Tesla's valuation becomes a lot more interesting because the company is effectively evolving from a car manufacturer to a software business. Valuing the stock like a high-margin software-as-a-service (SaaS) business doesn't seem overly far-fetched in such a scenario.
One of the most obvious use cases for autonomous driving is creating a robotaxi fleet with beneficiaries that include ride-hailing companies Uber and Lyft. In fact, Uber has already partnered with Waymo, the self-driving subsidiary of Alphabet. To understand how big this market could be, consider that Ark Invest CEO Cathie Wood estimates robotaxis could generate $9 trillion of revenue within the next decade.
That said, it's too early to tell how accurate Wood's outlook will prove to be. While companies like Tesla continue to invest significant capital into these AI-powered projects, autonomous vehicles are not yet commercially available. It's hard to ignore the progress Tesla is making, though, and given the potential size of this market, there will likely be multiple winners in this space.
The popularity of Tesla's electric vehicles means Dojo is likely ahead of the curve in developing self-driving software, and any lead the company has in licensing its software to other auto manufacturers would likely allow it to establish a dominant market share position, even with the threat of Alphabet looming. This dynamic is what many investors are ultimately banking on.
Although the full potential of Dojo is still years away, Musk firmly believes Tesla's advancements in AI could make it the most valuable company in the world. Given the sell-off after its latest earnings, Tesla stock looks attractive. Investors should take advantage of the depressed stock price right now and stay focused on the long-term AI opportunity.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet and Tesla. The Motley Fool has positions in and recommends Alphabet, Tesla, and Uber Technologies. The Motley Fool has a disclosure policy.
Take a chance on AI, says Abba's Björn, but protect the musicians – Financial Times
LinkedIn passes 1 billion members and launches new AI chatbot to help you get a job – CNBC
A man is seen using the LinkedIn app on a mobile device and a laptop computer in this illustration photo on 17 October, 2023.
LinkedIn debuted an artificial intelligence-powered chatbot Wednesday that it's billing as a "job seeker coach," and unveiled other generative AI tools for Premium members.
The rollouts were tied to an announcement from LinkedIn that its platform surpassed 1 billion members. For months, the Microsoft-owned company has been bolstering its focus on tools like automated recruiter messages, job descriptions and AI-powered profile writing suggestions.
The new AI chatbot, which aims in part to help users gauge whether a job application is worth their time, is powered by OpenAI's GPT-4 and began rolling out to some Premium users Wednesday. Microsoft has invested billions of dollars into OpenAI.
LinkedIn's engineering team had to put hefty resources on the platform side to reduce latency, according to Erran Berger, LinkedIn's vice president of product engineering.
"We had to build a lot of stuff on our end to work around that and to make this a snappy experience," Berger told CNBC in an interview. "When you're having these conversational experiences, sometimes it's almost like search you expect it to be instant. And so there's real platform capabilities we had to develop to make that possible."
LinkedIn is trying to reaccelerate revenue growth after eight straight quarters of slowing expansion. Two weeks ago the company announced nearly 700 job cuts, with most coming from the engineering unit.
Users of the new chatbot can launch it from a job posting by selecting one of a few questions, such as "Am I a good fit for this job?" and "How can I best position myself for this job?" The former would prompt the tool to analyze a user's LinkedIn profile and experience, with answers like, "Your profile shows that you have extensive experience in marketing and event planning, which is relevant for this role."
The chatbot will also point to potential gaps in a user's experience that could hurt them in the job application process.
"The quality of responses has to be really good for the stakes being as high as they are here, so we don't take that lightly at all," Gyanda Sachdeva, LinkedIn's vice president of product management, told CNBC.
The user can also follow up by asking who works at the company, which will prompt the chatbot to send them a few employee profiles, potentially second- or third-degree connections, whom the user can then message about the opportunity. The message itself can also be drafted using generative AI.
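As a hedged sketch of how an "Am I a good fit for this job?" feature might work behind the scenes, the assistant could assemble the user's profile and the job posting into a prompt for a chat model. The prompt wording, fields and model name below are assumptions for illustration; LinkedIn's actual system is proprietary.

```python
# Assemble a "job fit" prompt from a profile summary and a job posting.
def build_fit_prompt(profile_summary: str, job_description: str) -> str:
    return (
        "You are a job-seeking coach. Compare the candidate profile to "
        "the job posting and say whether they are a good fit, citing "
        "relevant experience and any gaps.\n\n"
        f"Candidate profile:\n{profile_summary}\n\n"
        f"Job posting:\n{job_description}"
    )

prompt = build_fit_prompt(
    "Extensive experience in marketing and event planning.",
    "Marketing manager role requiring event coordination.",
)

# The prompt would then go to a chat-completion endpoint, e.g. (untested,
# requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4", messages=[{"role": "user", "content": prompt}]
# )
print(len(prompt))
```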
In the past, many uses of AI in hiring or job applications have faced criticism for bias against marginalized communities. One example was Amazon's use of a recruiting engine that reportedly downvoted resumes that included the word "women" or mentioned women's colleges. A separate study by the Harvard Business Review highlighted bias against black candidates in an analysis of respondents' job board recommendations.
"We've invested a lot to make sure that this stays kind of within the guardrails of what meets our responsible AI standards," Berger said. "You couple that with our own AI models for matching jobs, again, which we've been doing for a long time, you get this super-personalized, equity-minded experience for our job seekers."
CNBC's Jordan Novet contributed reporting.
AI instruction added to free noncredit classes available through Utah … – St. George News
Utah Tech's Learn & Work program offers participants the opportunity to take noncredit classes for free online and develop new skills on their own schedule, date and location unspecified | Photo courtesy of Utah Tech University, St. George News
ST. GEORGE To meet the increasing need for technology-based education, Utah Tech University is expanding its Learn & Work in Utah program to include courses on ChatGPT and generative AI.
Supported through the Utah System of Higher Education, the Learn & Work in Utah program helps working adults upskill in their jobs or improve future career options, according to a news release.
Utah Tech's Learn & Work program participants will gain access to more than 7,000 online courses and resources offered through Pluralsight, a technology workforce development company, completely free of charge. Recently, Pluralsight added courses on ChatGPT and generative AI.
"The Utah Tech Learn & Work program with Pluralsight provides specialized education in high-demand tech fields, imparting practical technology and soft skills for real-world scenarios," Mark Adkins, Utah Tech's Learn & Work program coordinator, said in a news release. "This year's emphasis on AI offers a unique chance to explore and apply cutting-edge AI technologies, leveraging resources such as ChatGPT."
To qualify for Utah Tech's Learn & Work program, participants must have a high school diploma or its equivalent and be a resident of Utah.
"The Agile Project Manager certification course from Utah Tech's Learn & Work program not only prepared me to pass my PMI-ACP test on my first try, but it also sharpened my critical thinking and problem-solving skills," Miriam Solen, a Learn & Work program participant, said in a news release. "Thanks to this training led by industry experts, I was prepared to land a cybersecurity agile project management internship. I truly feel that Utah Tech is 100 percent committed to helping students build their professional journey."
Because Pluralsight's platform is completely online, students enjoy full flexibility in both their study schedule and pacing. In addition to selecting courses of interest, participants can choose to tap into 20 specialized training tracks focused on high-demand technological fields, ranging from big data to the cloud, network and security engineering to scrum. Although students can choose their own study schedule, a time commitment of 5 to 10 hours of work per week is recommended.
In addition to free access, the cost of one industry-recognized certification exam is eligible for reimbursement by Utah Tech University upon successful completion of the exam. Click here to learn more and apply for the Learn & Work program.
Why are fewer women using AI than men? – BBC.com
2 November 2023
Harriet Kelsall says she found that popular AI app ChatGPT made too many mistakes
Popular artificial intelligence (AI) chatbot ChatGPT now has more than 180 million users, but jeweller Harriet Kelsall says it isn't for her.
Being dyslexic, she admits that using it might help improve the clarity of her communication with customers on her website. But ultimately she says that she just doesn't trust it.
Ms Kelsall, who is based in Cambridge, says that when she experimented with ChatGPT this year, she noticed errors. She tested it by quizzing it about the crown worn by King Charles III in his coronation back in May, the St Edward's Crown.
"I asked ChatGPT to tell me some information about the crown, just to see what it would say," she says. "I know quite a bit about gemstones in the royal crowns, and I noticed there were large chunks within the text about it which were about the wrong crown."
Ms Kelsall adds that she is also concerned about people "passing off what ChatGPT tells them as independent thought, and plagiarising".
While ChatGPT has become hugely popular since its launch a year ago, Ms Kelsall's reluctance to use it appears to be significantly more common among women than men. While 54% of men now use AI in either their professional or personal lives, this falls to just 35% of women, according to a survey earlier this year.
What are the reasons for this apparent AI gender gap, and should it be a concern?
ChatGPT now has more than 180 million users around the world
Michelle Leivars, a London-based business coach, says she doesn't use AI to write for her, because she wants to retain her own voice and personality.
"Clients have said they booked sessions with me because the copy on my website didn't feel cookie cutter, and that I was speaking directly to them," she says. "People who know me have gone onto the website, and said that they can hear me saying the words and they could tell it was me straight away."
Meanwhile, Hayley Bystram, also based in London, has not been tempted to save time by using AI. Ms Bystram is the founder of matchmaking agency Bowes-Lyon Partnership, and meets her clients face-to-face to hand-pair them with like-minded others, with no algorithm involved.
"The place where we could use something such as ChatGPT is in our carefully crafted member profiles, which can take up to half a day to create," she says. "But for me it would take the soul and the personalisation out of the process, and it feels like it's cheating, so we carry on doing it the long-winded way."
Hayley Bystram says that using AI feels like "cheating"
For Alexandra Coward, a business strategist based in Paisley, Scotland, using AI for content generation is just "heavy photoshopping".
She is also particularly concerned about the growing trend of people using AI to create images "that make them look the slimmest, youngest and hippest versions of themselves".
Ms Coward adds: "We're moving towards a space where not only will your clients not recognise you in person, you won't recognise you in person."
While all these seem valid reasons to give AI a wide berth, AI expert Jodie Cook says there are deeper, more ingrained reasons why women are not embracing the technology as much as men.
"Stem fields [science, technology, engineering, and mathematics] have traditionally been dominated by males," says Ms Cook, who is the founder of Coachvox.ai, an app that allows business leaders to create AI clones of themselves.
"The current trend in the adoption of AI tools appears to mirror this disparity, as the skills required for AI are rooted in Stem disciplines."
In the UK, just 24% of the workforce across the Stem sectors are female, and as a consequence "women may feel less confident using AI tools", adds Ms Cook. "Even though many tools don't require technical proficiency, if more women don't view themselves as technically skilled, they might not experiment with them.
"And AI also still feels like science fiction. In the media and popular culture, science fiction tends to be marketed at men."
Ms Cook says that moving forward she wants to see more women both use AI and work in the sector. "As the industry grows, we definitely don't want to see a widening gap between the genders."
Yet psychologist Lee Chambers says that patterns of thinking and behaviour more common among women may be holding some back from embracing AI.
"It's the confidence gap - women tend to want to have a high level of competence in something before they start using it, " he says. "Whereas men tend to be happy to go into something without much competence."
Psychologist Lee Chambers says women fear that using AI might raise questions of competence
Mr Chambers also says that women may fear having their ability questioned, if they use AI tools.
"Women are more likely to be accused of not being competent, so they have to emphasise their credentials more to demonstrate their subject matter expertise in a particular field," he says. "There could be this feeling that if people know that you, as a woman, use AI, it's suggesting that you might not be as qualified as you are.
"Women are already discredited, and have their ideas taken by men and passed off as their own, so having people knowing that you use an AI might also play into that narrative that you're not qualified enough. It's just another thing that's debasing your skills, your competence, your value."
Or as Harriet Kelsall puts it: "I value authenticity and human creativity."
Excerpt from: