
Weekly News Digest

April 4, 2023

In addition to this week's NewsBreaks article and the monthly NewsLink Spotlight, Information Today, Inc. (ITI) offers Weekly News Digests that feature recent product news and company announcements. Watch for additional coverage to appear in the next print issue of Information Today. For other up-to-the-minute news, check out ITI's Twitter account: @ITINewsBreaks.


CCC Town Hall Explores the Current State of AI Tools

The speakers were:

Steven Brill, co-CEO of NewsGuard
L. Gordon Crovitz, co-CEO of NewsGuard
Tracey Brown, director of Sense about Science
Gina Chua, executive editor of Semafor
Mary Ellen Bates of Bates Information Services
Christopher Kenneally of CCC, who moderated the discussion

Each speaker provided introductory discussion points. Crovitz said that NewsGuard ran tests using ChatGPT, and it displayed false information, including the false claim that the children killed at Sandy Hook Elementary School were actually paid actors. With its latest release, ChatGPT repeated 100 out of 100 false narratives that NewsGuard fed it.

Brown asserted that we are underprepared to have a conversation about AI. Society is still at the level of wondering whether a customer service representative is a chatbot or a real person. She said we need to focus on who will take responsibility for each AI tool.

Chua called AI tools extremely good autocomplete machines. Semafor has been using them for basic proofreading and copy editing, which has been going well. They are language models and are not particularly smart on their own yet.

Brill said the key to moving forward with AI is accountability plus transparency. The newest version of ChatGPT is good at reading and mimicking language, and that makes it more persuasive in perpetrating hoaxes. He cited the example of cancer.org, the official site of the American Cancer Society, and cancer.news, a site rife with misinformation. ChatGPT reads the information on the .org site with the same regard as that on the .news site, without differentiating the veracity of the information on each.

Bates believes that the transition away from traditional information gathering isn't a bad thing; for example, she finds Google Maps to be much more effective at keeping her from getting lost than paper maps. She likened AI tools to teenagers: She wouldn't trust a 17-year-old to do her research for her, but they could give her a good start. AI tools will never be a substitute for a professional researcher, she said.

Brill noted that while ChatGPT can pass a legal bar exam, it isn't great at discerning misinformation. Crovitz talked about NewsGuard for AI, a new solution that provides data to train AI tools to recognize false information, thus minimizing the risk of spreading misinformation. He said that in the responses chatbots generate, there needs to be a way to access information about whether a given answer is likely to be true.

Brown's Sense about Science advocates for a culture of questioning: Ask where data comes from and whether it can bear the weight someone is putting on it. One of the key questions that gets missed with machine learning is: How is the machine doing the learning? Also, what is its accuracy rate? Does it push people toward extreme content? What kind of wrong information is tolerable to receive?

Kenneally reinforced these points, saying that there is no question that AI models are amazing, but we need to examine how well they perform.

Brown cited the Nature policy that AI language models will not be accepted as co-authors on any papers. She said more organizations need to declare that they won't accept AI authors, because AI can't be held accountable. There is a lack of maturity in AI discussions, she believes, and not enough thought is put into the real-world contexts these tools will be released into. There needs to be a clearer sense of who is signing off on what when it comes to AI developers.

Chua underscored her earlier point that AI tools are not actually question-and-answer machines; they're language machines. They don't have any sense of verification; they only mimic what they've been fed. She noted that they say what is plausible, not what is true or false. We can use them to help us formulate questions because of their attention to written style. She did an experiment with one of the AI tools: She created a news story and asked the tool to write it in the style of The New York Times, then The New York Post, then Fox News. Each time, it mimicked that outlet's style well. This type of usage is currently the best way to employ AI tools, she said.

Bates said researchers should keep in mind that the tools are doing simple text and data mining, looking for patterns. They can't infer something you're not asking; only real people can take context into account. A chatbot doesn't know what you're planning to do with your research, it doesn't ask follow-up questions, and it's not curious like a human researcher is. A chatbot is a helpful paraprofessional, but anything it provides needs to be reviewed by a professional, she said.

The presenters continued their discussion and addressed some comments from attendees. Access the recording of the town hall at youtube.com/watch?v=RF3Gs-BNOtM.
