
Texas House Passes Bill to Establish Artificial Intelligence Advisory … – The Texan

Austin, TX: A recent bill that passed the House would establish an advisory council to monitor the rise and adoption of artificial intelligence (AI), which has become an increasing concern for present and future generations of Texans.

Rep. Giovanni Capriglione introduced House Bill (HB) 2060 in an effort to study and monitor artificial intelligence systems developed, employed, or procured by state agencies.

The council would include seven members: one from each legislative chamber, an executive director, and four members appointed by the governor. Those four appointees would include an ethics professor, an AI systems professor, an expert in law enforcement, and an expert in constitutional and legal rights.

Additionally, the council would produce a report on whether an AI system has been used in a state capacity as an "automated final decision system," which makes final decisions, judgments, or conclusions without human intervention, or an "automated support decision system," which provides information to inform the final decision, judgment, or conclusion of a human decision maker.

The State Bar of Texas has provided insight into how AI is changing the way law practices and legal judgments are being decided, including how the use of AI for predictive analytics remains one of the biggest attractions for lawyers and their clients.

Even proponents of AI's use in analyzing vast amounts of data and predicting such things as likely verdict ranges or a judge's predispositions based on past rulings agree that the technology has its limitations.

The recent phenomenon of ChatGPT has taken hold of the cultural and political consciousness for its potential as a disruptive technology. ChatGPT is a large language model (LLM) that was developed by OpenAI to create human-like conversations through an AI chatbot.

OpenAI's founder Sam Altman recently said in an interview that the company is "a little bit scared" of ChatGPT's success and that more regulation is important to deter its possible downsides.

LLMs use a neural network of informational data to create a probabilistic model of patterned language. This means that when someone asks a question in ChatGPT, the AI model creates a coherent response one word at a time, based on the overall probability that the next word in the sentence is correct.
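The word-at-a-time process described above can be sketched with a toy model. Everything here, the miniature vocabulary and its probabilities, is an invented illustration, not OpenAI's actual system:

```python
import random

# Toy next-word model: maps a word to candidate next words with
# probabilities. (Illustrative only; a real LLM scores a huge
# vocabulary with a neural network, not a lookup table.)
model = {
    "the": [("cat", 0.5), ("dog", 0.3), ("car", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "sat": [("down", 1.0)],
}

def generate(start, max_words=4):
    words = [start]
    while words[-1] in model and len(words) < max_words:
        candidates, probs = zip(*model[words[-1]])
        # Sample the next word in proportion to its probability,
        # one word at a time, as the article describes.
        words.append(random.choices(candidates, weights=probs)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Each call extends the sentence only by picking whichever next word the probability table favors, which is why such models produce fluent text without any stored "answers."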

Neural networks are an adaptive method of AI learning, modeled on neurons in the human brain, that teach a computer system to process data. These systems create relationship models between information from inputs and outputs of data, utilizing a technique called deep learning to take unstructured information from inputs and make models of probable outputs.

ChatGPT, an LLM that utilizes a neural network, does not create new information but rather intuits it. It is thus not true AI because it does not think as human beings do, but instead uses algorithmic predictions to mimic human intelligence in its responses.

Another task of the council is to review the effect of automated decision systems on "the constitutional or legal rights, duties, or privileges of the residents of this state." This relates to AI alignment: the process of making sure an AI does what its human creators intend it to do.

In a recent interview with Tucker Carlson, Elon Musk sounded the alarm about what could happen with a misaligned AI system.

"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," Musk said, "in the sense that it has the potential of civilization destruction."

An important aspect of the Texas AI advisory council will be to assess the biases that might be present in AI systems. AI bias is a well-documented occurrence; for example, the Manhattan Institute notes, "OpenAI's content moderation system is more permissive of hateful comments made about conservatives than the exact same comments made about liberals."

The European Union has also expressed concerns related to AI and instituted its own research commission to study its potential benefits and pitfalls.

Despite warnings from those like Musk and Altman, independent creators and AI developers have been using OpenAI's models to create a plethora of unique tools. Everything from text-to-speech vocalization and photo editing to relationship matchmaking and recipe and meal plan creation can utilize AI, and the technology is still in just the beginning stage of what is possible.


Impacts of artificial intelligence on social interactions – CTV News

Published April 22, 2023 11:00 a.m. ET

Updated April 22, 2023 11:08 a.m. ET


A new study from Cornell University published in Scientific Reports has found that while generative artificial intelligence (AI) can improve efficiency and positivity, it can also impact the way that people express themselves and see others in conversations.

"Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension," said Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), in a press release. "We do not live and work in isolation, and the systems we use impact our interactions with others."

In a study where pairs were evaluated on their conversations, some of which used AI-generated responses, researchers found that those who used smart replies were perceived as less co-operative, and their partner felt less affiliation toward them.

A smart reply is a tool created to help users respond to messages faster and to make it easier to reply to messages on devices with limited input capabilities, according to Google Developers.

"While AI might be able to help you write, it's altering your language in ways you might not expect, especially by making you sound more positive," said postdoctoral researcher Jess Hohenstein in a press release. "This suggests that by using text-generating AI, you're sacrificing some of your own personal voice."

Here's how the study worked.

One of the study's researchers created a smart-reply platform, which the group called "Moshi" ("hello" in Japanese).

The study evaluated 219 pairs of participants, who were asked to discuss a policy issue. Each pair was assigned to one of three conditions: both participants could use smart replies, only one could, or neither could.

Smart replies made up 14.3 per cent of sent messages. Those who used them showed increased efficiency in communication and more positive emotional language, and their partners evaluated them more positively.

Although the results of the use of smart replies were largely positive, researchers noticed something else.

Partners who suspected that their counterpart was responding with smart replies evaluated them more negatively than those believed to have written their own replies. These findings align with common assumptions about the negative impacts of using AI, according to the researchers.

The researchers took things further and conducted a second experiment. This time, 299 pairs discussed a policy issue, but under four conditions: no smart replies, using default replies from Google, using smart replies that had a positive emotional tone, or using smart replies with an emotionally negative tone.

"I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you're using AI to help you compose text, regardless of whether you actually are," Hohenstein said, adding that this research demonstrates the overall suspicion that people seem to have around AI.

The researchers observed how some unintended social consequences can crop up as a result of AI.

"This suggests that whoever is in control of the algorithm may have influence on people's interactions, language and perceptions of each other," said Jung.


Woman’s bowel cancer spotted by artificial intelligence – BBC

21 April 2023

The Colo-Detect study uses AI to flag up areas of concern during colonoscopies

A woman who was part of a study using artificial intelligence (AI) to detect bowel cancer is free of the disease after it was found and removed.

Jean Tyler, 75, from South Shields, took part in a study called Colo-Detect as part of a trial at 10 NHS Trusts.

In the trial the AI flags up tissue potentially of concern to the medic carrying out the colonoscopy, which could be missed by the human eye.

About 2,000 patients from 10 NHS trusts have been recruited for the trial.

Jean Tyler - pictured with husband Derek - had surgery and has since recovered

The AI detected a number of polyps and an area of cancer on Mrs Tyler's colonoscopy about a year ago after she agreed to be part of the trial.

She then underwent surgery at South Tyneside District Hospital and has since recovered.

"I had fantastic support, it was unbelievable," she said.

"I had about seven or eight visits last year and I was so well looked after.

"I always say yes to these research projects because I know that they can make things a lot better for everybody."

Gastroenterology consultant Professor Colin Rees, based at Newcastle University, led the study alongside a team of colleagues working in South Tyneside and Sunderland NHS Trust.

The trial also includes North Tees and Hartlepool NHS Foundation Trust, South Tees NHS Foundation Trust, Northumbria NHS Foundation Trust and Newcastle Upon Tyne Hospitals NHS Foundation Trust.


Professor Rees described it as "world-leading" in improving detection, adding AI was likely to become "a major tool used by medicine in the coming years".

The findings will be studied to see how the technology can help save lives from bowel cancer, the UK's second biggest cancer killer, claiming around 16,800 lives a year.

The results are expected to be published in the autumn.


How artificial intelligence is matching drugs to patients – BBC

17 April 2023

Image source, Natalie Lisbona

Dr Talia Cohen Solal, left, is using AI to help her and her team find the best antidepressants for patients

Dr Talia Cohen Solal sits down at a microscope to look closely at human brain cells grown in a petri dish.

"The brain is very subtle, complex and beautiful," she says.

A neuroscientist, Dr Cohen Solal is the co-founder and chief executive of Israeli health-tech firm Genetika+.

Established in 2018, the company says its technology can best match antidepressants to patients, to avoid unwanted side effects, and make sure that the prescribed drug works as well as possible.

"We can characterise the right medication for each patient the first time," adds Dr Cohen Solal.

Genetika+ does this by combining the latest in stem cell technology - the growing of specific human cells - with artificial intelligence (AI) software.

From a patient's blood sample its technicians can generate brain cells. These are then exposed to several antidepressants, and recorded for cellular changes called "biomarkers".

This information, taken with a patient's medical history and genetic data, is then processed by an AI system to determine the best drug for a doctor to prescribe and the dosage.
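In outline, the final ranking step of such a pipeline might resemble the sketch below. Every drug name, score, and penalty here is a hypothetical placeholder for illustration; Genetika+'s actual model is not public:

```python
# Hypothetical sketch: rank candidate antidepressants by combining a
# cell-derived biomarker response score (higher = stronger cellular
# response) with a penalty for patient-history risk factors.
# All names, values, and the scoring rule are illustrative assumptions.

def rank_drugs(biomarker_scores, history_penalties):
    """Return drugs ordered best-first by biomarker score minus
    any penalty derived from the patient's medical history."""
    return sorted(
        biomarker_scores,
        key=lambda drug: biomarker_scores[drug] - history_penalties.get(drug, 0.0),
        reverse=True,
    )

biomarkers = {"drug_a": 0.82, "drug_b": 0.91, "drug_c": 0.64}
penalties = {"drug_b": 0.30}  # e.g. contraindicated by medical history
print(rank_drugs(biomarkers, penalties))  # ['drug_a', 'drug_c', 'drug_b']
```

Note how the penalty demotes drug_b below drug_c despite its stronger cellular response, which is the kind of trade-off the combined patient data is meant to capture.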

Although the technology is currently still in the development stage, Tel Aviv-based Genetika+ intends to launch commercially next year.

Image source, Getty Images

The global pharmaceutical sector had revenues of $1.4 trillion in 2021

The company, an example of how AI is increasingly being used in the pharmaceutical sector, has secured funding from the European Union's European Research Council and European Innovation Council. Genetika+ is also working with pharmaceutical firms to develop new precision drugs.

"We are in the right time to be able to marry the latest computer technology and biological technology advances," says Dr Cohen Solal.

Dr Heba Sailem, a senior lecturer in biomedical AI and data science at King's College London, says that AI has so far helped with everything "from identifying a potential target gene for treating a certain disease, and discovering a new drug, to improving patient treatment by predicting the best treatment strategy, discovering biomarkers for personalised patient treatment, or even prevention of the disease through early detection of signs for its occurrence".

New Tech Economy is a series exploring how technological innovation is set to shape the new emerging economic landscape.

Yet fellow AI expert Calum Chace says that the take-up of AI across the pharmaceutical sector remains "a slow process".

"Pharma companies are huge, and any significant change in the way they do research and development will affect many people in different divisions," says Mr Chace, who is the author of a number of books about AI.

"Getting all these people to agree to a dramatically new way of doing things is hard, partly because senior people got to where they are by doing things the old way.

"They are familiar with that, and they trust it. And they may fear becoming less valuable to the firm if what they know how to do suddenly becomes less valued."

However, Dr Sailem emphasises that the pharmaceutical sector shouldn't be tempted to race ahead with AI, and should employ strict measures before relying on its predictions.

"An AI model can learn the right answer for the wrong reasons, and it is the researchers' and developers' responsibility to ensure that various measures are employed to avoid biases, especially when trained on patients' data," she says.

Hong Kong-based Insilico Medicine is using AI to accelerate drug discovery.

"Our AI platform is capable of identifying existing drugs that can be re-purposed, designing new drugs for known disease targets, or finding brand new targets and designing brand new molecules," says co-founder and chief executive Alex Zhavoronkov.

Image source, Insilico Medicine

Alex Zhavoronkov says that using AI is helping his firm to develop new drugs more quickly than would otherwise be the case

Its most developed drug, a treatment for a lung condition called idiopathic pulmonary fibrosis, is now being clinically trialled.

Mr Zhavoronkov says it typically takes four years for a new drug to get to that stage, but that thanks to AI, Insilico Medicine achieved it "in under 18 months, for a fraction of the cost".

He adds that the firm has another 31 drugs in various stages of development.

Back in Israel, Dr Cohen Solal says AI can help "solve the mystery" of which drugs work.


NAB Wrap: Artificial Intelligence, Virtual Production and Future of Broadcasting in Focus – Hollywood Reporter

NAB's The Last of Us panel with (from left) THR's Carolyn Giardina, Craig Mazin, Ksenia Sereda, Timothy Good, Emily Mendez and Alex Wang.

The 2023 National Association of Broadcasters Show, which wrapped Wednesday in Las Vegas, attracted an estimated 65,000 delegates, according to show organizers, which many viewed as a healthy number for a post-pandemic show.

The attendance numbers marked a notable rise following NAB's return to an in-person event in 2022, which counted 52,468 delegates, though it was still well below its last show before the lockdown, which drew 91,000 attendees in 2019.

Artificial intelligence was arguably the most widespread topic this year, as NAB marked its centennial. As the potential of AI rapidly evolves, it's a topic that will clearly continue to cause significant anxiety, as well as to present staggering opportunities.

Generally, it was an evolutionary rather than a revolutionary year on the exhibition floor. From a tech standpoint, there was a large number of new and evolving tools for all sorts of cloud-based remote workflows. And while there's still a lot to understand before the promise of virtual production can be fully realized, attendees would have been hard-pressed to go anywhere in the vast exhibition halls, where 1,200 companies featured their latest technologies, and not see at least one LED wall or related demonstration.

NAB promoted the rollout of Next-Gen TV and turned a spotlight on sustainability with the launch of its Excellence in Sustainability Awards program, while participation from Hollywood took this event beyond broadcasting. Here's a look at some of the week's highlights and biggest trends.

AI was rampant at NAB, from the conference sessions to the exhibition floor. "This is an area where NAB will absolutely be active," said NAB president and CEO Curtis LeGeyt, who shared his views on the potential dangers, as well as benefits, of the tech during a state of the industry presentation. "It is just amazing how quickly the relevance of AI to our entire economy but, specifically, since we're in this room, the broadcast industry has gone from amorphous concept to real."

LeGeyt warned of several concerns that he has for local broadcasters where AI is concerned, among them how "big tech [uses] their platforms to access broadcast television and radio content. That, in our view, does not allow for fair compensation for our content despite the degree to which we drive tremendous traffic at their sites." He asserted that legislation is needed to "put some guardrails on it," especially at a time when AI "has the potential to put that on overdrive."

He warned of the additional diligence that will be needed to determine what is real and what is AI, as well as the caution that will be required when it comes to protecting one's likeness. He balanced these warnings with a discussion of potential opportunities, including the ability to speed up research at resource-constrained local stations.

Imax, which made its first appearance this year as an NAB exhibitor, was among many companies that showed AI-driven tech on the show floor. It demoed current and prototype technology from SSIMWAVE, the tech startup that it acquired for $21 million in 2022. This includes AI-driven tools aimed at bandwidth and image quality optimization, which may be used with the company's Imax Enhanced streaming format.

Other such exhibitors included Adobe, which showed a new beta version of Premiere Pro that includes an AI-driven, text-based editing tool developed to analyze and transcribe clips.

Sessions on HBO series The Last of Us and a conversation with Ted Lasso's Brett Goldstein attracted standing-room-only crowds to the NAB Show's main stage, while talent from the American Society of Cinematographers and American Cinema Editors presented master class sessions during the week.

Writer, producer and actor Goldstein, otherwise known as Ted Lasso footballer Roy Kent, was featured in a freewheeling conversation with fellow Ted Lasso writer Ashley Nicole Black.

"The life of just an actor, with all respect to actors, they're insane. I dunno why they would live that way," he admitted when asked about working as both a writer and actor. "It's fucking mental. Your life is a lottery. Every day you wait for a magical phone to ring, and you have zero control over it. I just didn't want to be an actor who sits around going, 'There aren't any good scripts.' You have to write yourself stuff, and then you can't complain."

He also talked about why collaboration makes the writers room work. Describing the teams on Shrinking and Ted Lasso as "some of the smartest people in the world, in this fucking room," he said, "You'd be mad not to take these ideas. And when you sort of allow this process of everyone joining in and taking this and taking that, it's 100 percent going to be a better show."

ACE presented The Last of Us, during which showrunner and exec producer Craig Mazin teased that the series would extend beyond its announced season two, generating cheers from the crowd. The session went behind the scenes of the production with Mazin, DP Ksenia Sereda, editors Timothy Good and Emily Mendez, VFX supervisor Alex Wang and sound supervisor Michael J. Benavente.

There was no shortage of exhibitors showing tech and workflows for the evolving area of virtual production, with potential applications from advertising and series work to features.

"Virtual production, to me, is an amazing tool in our arsenal of making stories come to life, but it's a tool, like all tools, that needs to be properly applied to get the best out of it," asserts two-time Oscar-nominated cinematographer Jeff Cronenweth (The Social Network, The Girl With the Dragon Tattoo). He is currently an adviser to SISU, which develops robotic arms that were demoed as part of a virtual production pipeline at NAB.

Cronenweth reports that his next project is Disney's Tron: Ares starring Jared Leto (which Joachim Rønning is set to direct for a 2025 release), and he's eyeing virtual production. "As you can imagine for a sci-fi film like this, we will embellish all of the technology available to bring it to life, including some virtual production. I'm anticipating SISU's robotic technology to play a key part in that emerging technology."

NAB used its annual confab to promote the voluntary rollout of the next generation of digital television, known as ATSC 3.0, which is based on internet protocol and may include new capabilities such as free, live broadcasting to mobile devices. A change of this magnitude has a long way to go before its potential can be realized.

At NAB, Federal Communications Commission chairwoman Jessica Rosenworcel launched the Future of Television Initiative, which she described as a public-private partnership among stakeholders to support a transition to ATSC 3.0.

"U.S. broadcasters delivered 26 new Next-Gen TV markets to reach 66 by year-end 2022," reported ATSC president Madeleine Noland. "We are looking ahead to another year of continued deployments across the U.S. and sales of new consumer receivers."


News coverage of artificial intelligence reflects business and government hype, not critical voices – The Conversation Indonesia

The news media plays a key role in shaping public perception about artificial intelligence. Since 2017, when Ottawa launched its Pan-Canadian Artificial Intelligence Strategy, AI has been hyped as a key resource for the Canadian economy.

With more than $1 billion in public funding committed, the federal government presents AI as having potential that must be harnessed. Publicly-funded initiatives, like Scale AI and Forum IA Qubec, exist to actively promote AI adoption across all sectors of the economy.

Over the last two years, our multi-national research team, Shaping AI, has analyzed how mainstream Canadian news media covers AI. We analyzed newspaper coverage of AI between 2012 and 2021 and conducted interviews with Canadian journalists who reported on AI during this time period.

Our report found news media closely reflects business and government interests in AI by praising its future capabilities and under-reporting the power dynamics behind these interests.

Our research found that tech journalists tend to interview the same pro-AI experts over and over again, especially computer scientists. As one journalist explained to us: "Who is the best person to talk about AI, other than the one who is actually making it?" When a small number of sources informs reporting, news stories are more likely to miss important pieces of information or be biased.

Canadian computer scientists and tech entrepreneurs Yoshua Bengio, Geoffrey Hinton, Jean-François Gagné and Joëlle Pineau are disproportionately used as sources in mainstream media. The name of Bengio, a leading expert in AI, a pioneer in deep learning and founder of the Mila AI Institute, turns up nearly 500 times in 344 different news articles.

Only a handful of politicians and tech leaders, like Elon Musk or Mark Zuckerberg, have appeared more often across AI news stories than these experts.

Few critical voices find their way into mainstream coverage of AI. The most-cited critical voice against AI is late physicist Stephen Hawking, with only 71 mentions. Social scientists are conspicuous in their absence.

Bengio, Hinton and Pineau are computer science authorities, but like other scientists they're not neutral and free of bias. When interviewed, they advocate for the development and deployment of AI. These experts have invested their professional lives in AI development and have a vested interest in its success.

Most AI scientists are not only researchers, but are also entrepreneurs. There is a distinction between these two roles. While a researcher produces knowledge, an entrepreneur uses research and development to attract investment and sell their innovations.

The lines between the state, the tech industry and academia are increasingly porous. Over the last decade in Canada, state agencies, private and public organizations, researchers and industrialists have worked to create a profitable AI ecosystem. AI researchers are firmly embedded in this tightly-knit network, sharing their time between publicly-funded labs and tech giants like Meta.

AI researchers occupy key positions of power in organizations that promote AI adoption across industries. Many hold, or have held, decision-making positions at the Canadian Institute for Advanced Research (CIFAR) an organization that channels public funding to AI Research Chairs across Canada.

When computer scientists make their way into the news cycle, they do so not only as AI experts, but also as spokespeople for this network. They bring credibility and legitimacy to AI coverage because of their celebrated expertise. But they are also in a position to promote their own expectations about the future of AI, with little to no accountability for the fulfilment of these visions.

The AI experts quoted in mainstream media rarely discussed the technicalities of AI research. Machine learning techniques, colloquially known as AI, were deemed too complex for a mainstream audience. "There's only room for so much depth about technical issues," one journalist told us.

Instead, AI researchers use media attention to shape public expectations and understandings of AI. The recent coverage of an open letter calling for a six-month ban on AI development is a good example. News reports centred on alarmist tropes about what AI could become, citing "profound risks to society."

Bengio, who signed the letter, warned that AI has the potential to destabilize democracy and the world order.

These interventions shaped the discourse about AI in two ways. First, they framed AI debates according to alarmist visions of a distant future. Coverage of the open letter overshadowed real and well-documented harms from AI, like worker exploitation, racism, sexism, disinformation and the concentration of power in the hands of tech giants.

Second, the open letter cast AI research into a Manichean dichotomy: the bad version that no one "can understand, predict, or reliably control," and the good one, so-called "responsible AI." The open letter was as much about shaping visions of the future of AI as it was about hyping up responsible AI.

But according to AI industry standards, what is framed as responsible AI to date has consisted of vague, voluntary and toothless principles that cannot be enforced in corporate contexts. Ethical AI is often just a marketing ploy for profit and does little to eliminate the systems of exploitation, oppression and violence that are already linked to AI.

Our report proposes five recommendations to encourage reflexive, critical and investigative journalism in science and technology, and pursue stories about the controversies of AI.

1. Promote and invest in technology journalism. Be wary of economic framings of AI and investigate other angles that are typically left out of business reporting, like inequalities and injustices caused by AI.

2. Avoid treating AI as a prophecy. The expected realizations of AI in the future must be distinguished from its real-world accomplishments.

3. Follow the money. Canadian legacy media has paid little attention to the significant amount of governmental funding that goes into AI research. We urge journalists to scrutinize the networks of people and organizations that work to construct and maintain the AI ecosystem in Canada.

4. Diversify your sources. Newsrooms and journalists should diversify their sources of information when it comes to AI coverage. Computer scientists and their research institutions are overwhelmingly present in AI coverage in Canada, while critical voices are severely lacking.

5. Encourage collaboration between journalists and newsrooms and data teams. Co-operation among different types of expertise helps to highlight the social and technical considerations of AI. Without one or the other, AI coverage is likely to be deterministic, inaccurate, naive or overly simplistic.

To be reflexive and critical of AI does not mean to be against the development and deployment of AI. Rather, it encourages the news media and its readers to question the underlying cultural, political and social dynamics that make AI possible, and examine the broader impact that technology has on society and vice versa.



GPT-4 Passes the Bar Exam: What That Means for Artificial … – Stanford Law School

CodeX, the Stanford Center for Legal Informatics, and the legal technology company Casetext recently announced what they called a watershed moment. Research collaborators had deployed GPT-4, the latest-generation Large Language Model (LLM), to take, and pass, the Uniform Bar Exam (UBE). GPT-4 didn't just squeak by. It passed the multiple-choice portion of the exam and both components of the written portion, exceeding not only all prior LLMs' scores but also the average score of real-life bar exam takers, scoring in the 90th percentile.

Casetext's Chief Innovation Officer and co-founder Pablo Arredondo, JD '05, who is a CodeX fellow, collaborated with CodeX-affiliated faculty Daniel Katz and Michael Bommarito to study GPT-4's performance on the UBE. In earlier work, Katz and Bommarito found that an LLM released in late 2022 was unable to pass the multiple-choice portion of the UBE. Their recently published paper, "GPT-4 Passes the Bar Exam," quickly caught national attention. Even The Late Show with Stephen Colbert had a bit of comedic fun with the notion of robo-lawyers running late-night TV ads looking for slip-and-fall clients.

However, for Arredondo and his collaborators, this is serious business. While GPT-4 alone isn't sufficient for professional use by lawyers, he says, it is the first large language model smart enough to power professional-grade AI products.

Here Arredondo discusses what this breakthrough in AI means for the legal profession and for the evolution of products like the ones Casetext is developing.

What technological strides account for the huge leap forward from GPT-3 to GPT-4 with regard to its ability to interpret text and its facility with the bar exam?

If you take a broad view, the technological strides behind this new generation of AI began 80 years ago, when the first computational models of neurons were created (the McCulloch-Pitts neuron). Recent advances, including GPT-4, have been powered by neural nets, a type of AI loosely based on biological neurons that underpins modern natural language processing. I would be remiss not to point you to the fantastic article by Stanford Professor Chris Manning, director of the Stanford Artificial Intelligence Laboratory; the first few pages provide an excellent history leading up to the current models.

You say that computational technologies have struggled with natural language processing and complex or domain-specific tasks like those in the law, but that with the advancing capabilities of large language models, and GPT-4, you sought to demonstrate their potential in law. Can you talk about language models and how they have improved, specifically for law? If it's a learning model, does that mean that the more this technology is used in the legal profession (or the more it takes the bar exam), the better it becomes and the more useful it is to the legal profession?

Large language models are advancing at a breathtaking rate. One vivid illustration is the result of the study I worked on with law professors and Stanford CodeX fellows Dan Katz and Michael Bommarito. We found that while GPT-3.5 failed the bar, scoring roughly in the bottom 10th percentile, GPT-4 not only passed but approached the 90th percentile. These gains are driven by the scale of the underlying models more than by any fine-tuning for law. That is, our experience has been that GPT-4 outperforms smaller models that have been fine-tuned on law. It is also critical from a security standpoint that the general model doesn't retain, much less learn from, the activity and information of attorneys.

What technologies are next and how will they impact the practice of law?

The rate of progress in this area is remarkable. Every day I see or hear about a new version or application. One of the most exciting areas is something called agentic AI, where the LLMs (large language models) are set up so that they can themselves strategize about how to carry out a task, and then execute on that strategy, evaluating things along the way. For example, you could ask an agent to arrange transportation for a conference and, without any specific prompting or engineering, it would handle getting a flight (checking multiple airlines if need be) and renting a car. You can imagine applying this to substantive legal tasks (e.g., "first I will gather supporting testimony from a deposition, then look through the discovery responses to find further support," and so on).
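The plan-then-execute loop Arredondo describes can be sketched in a few lines. This is a purely illustrative sketch: every name below is hypothetical, and a real agent would call an LLM and external tools where `plan()` and `execute()` are stubbed here.

```python
# Minimal sketch of an agentic loop: a model proposes steps toward a
# goal, then executes them one at a time. All names are hypothetical;
# a real system would call an LLM and external tools where plan() and
# execute() are stubbed out.

def plan(goal):
    # Stand-in for an LLM call that decomposes the goal into steps.
    return ["book_flight", "rent_car"]

def execute(step):
    # Stand-in for tool use (airline search, booking APIs, etc.).
    return f"done: {step}"

def run_agent(goal):
    results = []
    for step in plan(goal):
        results.append(execute(step))
        # A real agent would re-plan here, evaluating each result.
    return results

print(run_agent("arrange transportation for a conference"))
# → ['done: book_flight', 'done: rent_car']
```

The key design point is the loop itself: the model is consulted for a strategy, each step is carried out, and intermediate results can feed back into the next planning round.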

Another area of growth is multi-modal AI, where you go beyond text and fold in modalities like vision. This should enable things like an AI that can comprehend and describe patent figures or compare written testimony with video evidence.

Big law firms have certain advantages and I expect that they would want to maintain those advantages with this sort of evolutionary/learning technology. Do you expect AI to level the field?

Technology like this will definitely level the playing field; indeed, it already is. I expect this technology to at once level and elevate the profession.

So, AI-powered technology such as LLMs can help to close the access to justice gap?

Absolutely. In fact, this might be the most important thing LLMs do in the field of law. The first rule of the Federal Rules of Civil Procedure exhorts the "just, speedy and inexpensive" resolution of matters. But if you asked most people what three words come to mind when they think about the legal system, "speedy" and "inexpensive" are unlikely to be the most common responses. By making attorneys much more efficient, LLMs can help attorneys increase access to justice by empowering them to serve more clients.

We've read about AI's double-edged sword. Do you have any big concerns? Are we getting close to a RoboCop moment?

My view, and the view of Casetext, is that this technology, as powerful as it is, still requires attorney oversight. It is not a robot lawyer, but rather a very powerful tool that enables lawyers to better represent their clients. I think it is important to distinguish between the near-term and long-term questions in debates about AI.

The most dramatic commentary you hear (e.g., that AI will lead to utopia, or that AI will lead to human extinction) is about artificial general intelligence (AGI), which most believe to be decades away and not achievable simply by scaling up existing methods. The near-term discussion, about how to use the current technology responsibly, is generally more measured and is where I think the legal profession should be focused right now.

At a recent workshop we held at CodeX's FutureLaw conference, Professor Larry Lessig raised several near-term concerns around issues like control and access. Law firm managing partners have asked us what this means for associate training: how do you shape the next generation of attorneys in a world where a lot of attorney work can be delegated to AI? These kinds of questions, more than the apocalyptic prophecies, are what occupy my thinking. That said, I am glad we have some folks focused on the longer-term implications.

Pablo Arredondo is a fellow at CodeX, the Stanford Center for Legal Informatics, and the co-founder of Casetext, a legal AI company. Casetext's CoCounsel platform, powered by GPT-4, assists attorneys in document review, legal research memos, deposition preparation, and contract analysis, among other tasks. Arredondo's work at CodeX focuses on civil litigation, with an emphasis on how litigators access and assemble the law. He is a graduate of Stanford Law School, JD '05, and of the University of California at Berkeley.

Read more:
GPT-4 Passes the Bar Exam: What That Means for Artificial ... - Stanford Law School

Read More..

‘Gold Rush’ in Artificial Intelligence Expected To Drive Data Center … – CoStar Group

The rapid adoption of new artificial intelligence apps and an intensifying bid for dominance among tech giants Amazon, Google and Microsoft are expected to drive investment and double-digit growth for the data center industry in the next five years.

A "gold rush" in AI these days centers on the brisk development of tools such as ChatGPT, according to a new analysis from real estate services firm JLL. Voice- and text-generating AI apps could transform the speed and accuracy of customer service interactions and accelerate demand for computing power, as well as for the systems and networks connecting users that data centers provide, the real estate firm said.

The emergence of AI comes on the heels of increased usage of data centers in the past few years, as people spend more time online for work and entertainment, fueling the need for these digital information hubs, which provide the speed, memory and power to support those connections.

JLL projected that half of all data centers will be used to support AI programs by 2025. The new AI applications' need for enormous amounts of data capacity will require more power and expanded space for data center services, particularly colocation facilities, a type of data center that rents capacity to third-party companies and may serve dozens of them at one time. It's also a potential growth area for commercial property investors.

"We expect AI applications, and the machine learning processes that enable them, will drive significant demand for colocation capabilities like those we provide," Raul Martynek, CEO of Dallas-based DataBank, told CoStar News in an email. "Specifically, the demand will be for high-density colocation and data centers that provide significantly greater concentrations of power and cooling."

One kilowatt-hour of energy can power a 100-watt light bulb for 10 hours, and traditional data center workloads might require 15 kilowatts per typical cabinet, or server rack, Martynek said. But the high-performance computing nodes required to train large language models like ChatGPT can consume 80 kilowatts or more per cabinet.

"This requires more spacing between cabinets to maintain cooling, or specialized water-chilled doors to cool the cabinets," Martynek said.
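Martynek's figures can be sanity-checked with simple arithmetic (a back-of-the-envelope sketch using only the numbers quoted above, not vendor data):

```python
# Back-of-the-envelope check of the power figures quoted above:
# energy = power x time.

bulb_watts = 100
hours_per_kwh = 1000 / bulb_watts      # 1 kWh = 1,000 Wh -> 10 hours

traditional_kw = 15                    # typical cabinet, per the article
ai_training_kw = 80                    # high-performance training cabinet

density_ratio = ai_training_kw / traditional_kw

print(hours_per_kwh)                   # → 10.0
print(traditional_kw * 24)             # → 360 (kWh/day per cabinet)
print(ai_training_kw * 24)             # → 1920 (kWh/day per cabinet)
print(round(density_ratio, 1))         # → 5.3
```

A training cabinet at 80 kW draws more than five times the power of a traditional one, which is why the article stresses extra spacing and water-chilled cooling.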

In addition to the added energy and water needs, the growth in data centers faces other challenges. Credit-rating firm S&P Global Ratings noted that long-term industry risks include shifting technology, cloud service providers filling their own data center needs, and weaker pricing. The data center industry, with power-hungry facilities running 24 hours a day and 365 days a year, has also received criticism from environmentalists.

DataBank owns and operates more than 65 data centers in 27 metropolitan markets. This month, it secured $350 million in financing from TD Bank to fund its ongoing expansion.

It was DataBank's second successful financing this year, coming just weeks after it completed a $715 million net-lease securitization on March 1. Under net-lease offerings, issuers securitize their rent revenue streams into bonds. The sale of those bonds replenishes the issuer's capital to pay down debt and continue investments.

ChatGPT and other apps are bots that use machine learning to mimic human speech and writing. ChatGPT debuted in November and is arguably the most sophisticated to launch so far. AI software developer Tidio recently estimated that usage of such bots has already grown to 1.5 billion users worldwide.

In January, Microsoft announced a new multibillion-dollar investment in ChatGPT maker OpenAI. Google has recently improved its AI chatbot, Bard, in an effort to rival its competitors. And Amazon Web Services, the largest cloud computing provider, introduced a service last week called Bedrock aimed at helping other companies develop their own chatbots.

Amazon CEO Andy Jassy touted the e-commerce giants AI plans in his annual letter to shareholders.

"Most companies want to use these large language models, but the really good ones take billions of dollars to train and many years, and most companies don't want to go through that," Jassy said last week on CNBC. "So what they want to do is work off of a foundational model that's big and great already and then have the ability to customize it for their own purposes. And that's what Bedrock is."

The growth projections of AI have data center owners and operators at the forefront of the securitized bond market. Three data center providers have issued $1.3 billion in net-lease securitized offerings already this year, according to CoStar data. That's more than all of last year combined. In addition, two more providers have offerings in the wings.

The sector is a bright spot in an otherwise weakened market for other commercial real estate securitized bond offerings, down more than 70% from the same time last year.

"The data center space remains extremely attractive to capital sources looking for quality and stability versus other asset classes that have been challenged amidst uncertain economic conditions," Carl Beardsley, managing director and data centers lead at JLL Capital Markets, told CoStar News in an email.

JLL said data center financing comes from a variety of sources including debt funds, life insurance companies, banks and originators of commercial-mortgage backed securities.

"Although money center banks and some regional banks have become more conservative during this volatile interest rate period, there is still a large appetite from the lender universe to allocate funds toward data centers," Beardsley said.

JLL forecasts that the global data center market will grow 11.3% from 2021 through 2026.

Across its six primary data center markets (Chicago, Dallas-Fort Worth, New Jersey, Northern California, Northern Virginia and Phoenix), the United States has a strong appetite for data center property transactions compared to other countries, according to JLL, accounting for 52% of all deals from 2018 to 2022. These markets also have 1,939 megawatts of data center capacity under construction, JLL said. One megawatt is equal to 1,000 kilowatts.

The growth is expected to continue even heading into a potential recession, according to S&P, which has rated two of the three data center securitized bond offerings completed this year so far.

"Overall supply and demand is relatively balanced, as new data center development has been constrained in certain markets by site availability, lingering supply chain issues and, more recently, power capacity constraints," S&P noted in its reviews. "Although we expect data centers to see some growth deceleration in a recessionary environment, we believe it will be mitigated by the critical nature of data centers."

S&P added that market data suggests 2022 vacancy rates were low for key data center markets and rental rates increased year over year.

New net-lease securitized fundraisings this year have come from DataBank, Stack Infrastructure, and Vantage Data Centers.

Denver-based Vantage, a global provider of hyperscale data center campuses, saw unprecedented growth in 2022, outperforming its previous record set in 2021. The company began developing four new campuses internationally and opened 13 data centers. The company raised more than $3 billion last year to support that effort.

Last month, Vantage completed an additional securitized notes offering raising $370 million. The offering was backed by tenant lease payments on 13 completed and operating wholesale data centers located in California, Washington state and Canada.

Stack, a Denver-based developer and operator of data centers, issued $250 million in securitized notes last month.

Stacks growth is outpacing the industry with a portfolio of more than 1 gigawatt, or 1,000 megawatts, of built and under-development capacity, and more than 2 gigawatts of future development capacity planned across the globe. The company has more than 4 million square feet currently under development.

Stack most recently announced the expansion of a Northern Virginia campus to 250 megawatts, the groundbreaking for another 100-megawatt campus in Northern Virginia's Prince William County and the expansion of its 200-megawatt flagship campus in Portland, Oregon.

In addition, Dallas firm CyrusOne and Seattle-based Sabey Data Centers have filed preliminary notices of offerings in the works with the Securities and Exchange Commission.

Here is the original post:
'Gold Rush' in Artificial Intelligence Expected To Drive Data Center ... - CoStar Group

Read More..

Africa tipped on power of artificial intelligence – Monitor

Pulse Lab Kampala (PLK) has called on stakeholders to support the creation of an environment that fosters the use of data and artificial intelligence (AI).

Speaking at the Conference on the State of Artificial Intelligence in Africa (COSAA), held at Strathmore University in Nairobi, Kenya, last month, Ms Morine Amutorine, a data associate at PLK, highlighted the importance of collaboration in creating platforms and communities that can help scale up projects beyond Africa.

"When working alone, a project cannot scale beyond Uganda or even Africa at large," noted Ms Amutorine.

During the event, Ms Amutorine showcased a radio mining tool developed by PLK.

The AI-powered social listening tool can monitor multiple radio stations at the same time and filter out content based on specific keywords.

"When we wanted to assess public opinion about Covid-19 in Uganda, specific keywords would be fed in and the tool would retrieve audio clips of radio talk shows that were discussing that topic," she explained.

Prowess

The tool can listen to more than 20 radio stations simultaneously and comprehend three local languages in Uganda: Ugandan English, Acholi, and Luganda.
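The keyword-retrieval step Ms Amutorine describes amounts to filtering transcribed clips for matching terms. The sketch below is a hypothetical illustration only: the clip data and function names are invented, and the real PLK tool transcribes live radio audio in several languages before any filtering happens.

```python
# Hypothetical sketch of keyword retrieval over already-transcribed
# radio clips. The clip data and names here are invented; the real
# PLK tool first transcribes live audio in multiple languages.

clips = [
    {"station": "A", "text": "Callers discussed Covid-19 vaccination today"},
    {"station": "B", "text": "A talk show about football results"},
    {"station": "C", "text": "Health officials answered Covid-19 questions"},
]

def retrieve(clips, keywords):
    # Case-insensitive substring match, standing in for the tool's
    # keyword filter over talk-show transcripts.
    keywords = [k.lower() for k in keywords]
    return [c for c in clips
            if any(k in c["text"].lower() for k in keywords)]

matches = retrieve(clips, ["Covid-19"])
print([c["station"] for c in matches])  # → ['A', 'C']
```

Feeding in a keyword such as "Covid-19" returns only the clips whose transcripts mention it, which is the behavior described in the quote above.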

Mr Martin Gordon Mubangizi, a data scientist who doubles as the PLK lead, explained that the Automatic Voice Recognition tool must be trained on a vast dataset of content, including text, pairs of text and audio, and a list of all the words in a given language with their correct pronunciations.

Ms Amutorine told Monitor that AI does not dispense with human effort.

The latter is needed at the stage of transcribing and analysing the data in order to assess personal perceptions.

Mr Pius Kavuma Mugagga, a data engineer with PLK, consequently demonstrated another AI tool. Named the Pulse Satellite Tool, it is a product of a group of researchers at PLK that was birthed in 2015 in partnership with the United Nations Satellite Centre (UNOSAT). The goal then was to develop an AI tool that would estimate economic development of a region.

"We started by experimenting with satellite imagery," Mr Mugagga revealed, adding, "The AI would then be programmed to identify, highlight and count shelter rooftops, and later the results would be used to assess settlement mapping."

He went on to explain that by looking at the nature of rooftops, you may be able to make some inference regarding the economic transition of a place. This, according to Mugagga, led to the initial conception of the Pulse Satellite Tool, which was eventually taken on by UNOSAT. The tool proved handy in flood mapping.

"In January 2021, Mozambique experienced floods caused by Tropical Cyclone Eloise. They requested UNOSAT to do a rapid mapping of the area," Mr Mugagga revealed, adding, "Using the Pulse Satellite Tool, UNOSAT was able to identify the areas of interest that were assessed for service delivery."

Not ready to rest on its laurels, PLK is already looking to develop a platform where anyone can upload a satellite image whose results would be downloaded for use after the mapping process.

"However, there should be legal terms to it for it to be operational outside the UN systems," Mr Mubangizi told Saturday Monitor, adding that this is why Uganda has joined Nigeria as one of only two countries in Africa to embark on a national data strategy.

Mr Mubangizi further revealed: "PLK, in partnership with the Ugandan Ministry of ICT and National Guidance, is looking at achieving such goals by 2040, when big data and AI governance will be accessible to everyone for use, reuse and sharing."

The conference, the first of its kind in Africa, highlighted the potential for AI to transform the culture of the United Nations and deepen its impact. Experts say that by creating a supportive environment for data and AI, stakeholders can unlock opportunities for innovation and growth, driving positive change across Africa and beyond.

About artificial intelligence

Since the early days of computers, scientists have strived to create machines that can rival humans in their ability to think, reason and learn; in other words, artificial intelligence (AI). While today's AI systems still fall short of that goal, they are starting to perform as well as, and sometimes better than, their creators at certain tasks. Thanks to new techniques that allow machines to learn from enormous sets of data, AI has taken massive leaps forward.

AI is starting to move out of research labs and into the real world. It is having an impact on our lives. There can be little doubt that we are entering the age of AI.

As AI enters the real world by assessing loan applications, informing courtroom decisions or helping to identify patients who should receive treatment, so too does one of its most fundamental flaws: bias.

Algorithms are only as good as the code that governs them and the data used to teach them. Each can carry the watermark of our own preconceptions. Facial recognition software can misclassify black faces or fail to identify women, criminal profiling algorithms have ranked non-whites as higher risk, and recruitment tools have scored women lower than men. These failures have put mounting pressure on technology giants to fix them.

*Additional information source: BBC

Link:
Africa tipped on power of artificial intelligence - Monitor

Read More..

Simulations with a machine learning model predict a new phase of solid hydrogen – Phys.org


Hydrogen, the most abundant element in the universe, is found everywhere from the dust filling most of outer space to the cores of stars to many substances here on Earth. This would be reason enough to study hydrogen, but its individual atoms are also the simplest of any element with just one proton and one electron. For David Ceperley, a professor of physics at the University of Illinois Urbana-Champaign, this makes hydrogen the natural starting point for formulating and testing theories of matter.

Ceperley, also a member of the Illinois Quantum Information Science and Technology Center, uses computer simulations to study how hydrogen atoms interact and combine to form different phases of matter like solids, liquids, and gases. However, a true understanding of these phenomena requires quantum mechanics, and quantum mechanical simulations are costly. To simplify the task, Ceperley and his collaborators developed a machine learning technique that allows quantum mechanical simulations to be performed with an unprecedented number of atoms. They reported in Physical Review Letters that their method found a new kind of high-pressure solid hydrogen that past theory and experiments missed.

"Machine learning turned out to teach us a great deal," Ceperley said. "We had been seeing signs of new behavior in our previous simulations, but we didn't trust them because we could only accommodate small numbers of atoms. With our machine learning model, we could take full advantage of the most accurate methods and see what's really going on."

Hydrogen atoms form a quantum mechanical system, but capturing their full quantum behavior is very difficult even on computers. A state-of-the-art technique like quantum Monte Carlo (QMC) can feasibly simulate hundreds of atoms, while understanding large-scale phase behaviors requires simulating thousands of atoms over long periods of time.

To make QMC more versatile, two former graduate students, Hongwei Niu and Yubo Yang, developed a machine learning model trained with QMC simulations capable of accommodating many more atoms than QMC by itself. They then used the model with postdoctoral research associate Scott Jensen to study how the solid phase of hydrogen that forms at very high pressures melts.
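The surrogate strategy the paragraph describes, training a cheap model on a few expensive reference calculations and then evaluating the cheap model everywhere, can be illustrated with a one-dimensional toy. This is a sketch of the general technique only: the energy function, grid, and interpolation scheme below are invented stand-ins, while the actual work fits a machine-learned potential to diffusion QMC data for thousands of atoms.

```python
import bisect
import math

# Toy illustration of a surrogate model (not the paper's method):
# evaluate an "expensive" reference energy at a few points, then
# replace it with a cheap model that can be called many times.

def expensive_energy(r):
    # Stand-in for a costly QMC evaluation: a Morse-like pair energy.
    return (1.0 - math.exp(-(r - 1.0))) ** 2

# A handful of expensive reference calculations...
grid = [0.8 + 0.1 * i for i in range(9)]          # 0.8 .. 1.6
table = [expensive_energy(r) for r in grid]

# ...and a cheap surrogate: piecewise-linear interpolation.
def surrogate(r):
    i = min(max(bisect.bisect(grid, r) - 1, 0), len(grid) - 2)
    t = (r - grid[i]) / (grid[i + 1] - grid[i])
    return table[i] + t * (table[i + 1] - table[i])

# The surrogate is now cheap enough to call inside a large
# simulation; on this smooth toy its error stays small.
err = max(abs(surrogate(r) - expensive_energy(r))
          for r in (0.85, 1.05, 1.33, 1.55))
print(err < 0.01)  # → True
```

The payoff is the same as in the paper: a handful of expensive reference evaluations buys a model cheap enough to drive simulations at scales the reference method could never reach directly.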

The three of them were surveying different temperatures and pressures to form a complete picture when they noticed something unusual in the solid phase. While the molecules in solid hydrogen are normally close-to-spherical and form a configuration called hexagonal close packed (Ceperley compared it to stacked oranges), the researchers observed a phase where the molecules become oblong figures (Ceperley described them as egg-like).

"We started with the not-too-ambitious goal of refining the theory of something we know about," Jensen recalled. "Unfortunately, or perhaps fortunately, it was more interesting than that. There was this new behavior showing up. In fact, it was the dominant behavior at high temperatures and pressures, something there was no hint of in older theory."

To verify their results, the researchers trained their machine learning model with data from density functional theory, a widely used technique that is less accurate than QMC but can accommodate many more atoms. They found that the simplified machine learning model perfectly reproduced the results of standard theory. The researchers concluded that their large-scale, machine learning-assisted QMC simulations can account for effects and make predictions that standard techniques cannot.

This work has started a conversation between Ceperley's collaborators and some experimentalists. High-pressure measurements of hydrogen are difficult to perform, so experimental results are limited. The new prediction has inspired some groups to revisit the problem and more carefully explore hydrogen's behavior under extreme conditions.

Ceperley noted that understanding hydrogen under high temperatures and pressures will enhance our understanding of Jupiter and Saturn, gaseous planets primarily made of hydrogen. Jensen added that hydrogen's "simplicity" makes the substance important to study. "We want to understand everything, so we should start with systems that we can attack," he said. "Hydrogen is simple, so it's worth knowing that we can deal with it."

More information: Hongwei Niu et al, Stable Solid Molecular Hydrogen above 900 K from a Machine-Learned Potential Trained with Diffusion Quantum Monte Carlo, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.076102

Journal information: Physical Review Letters

The rest is here:
Simulations with a machine learning model predict a new phase of solid hydrogen - Phys.org

Read More..