Taking the Time to Implement Trust in AI

For years, but especially recently, the accelerating pace of machine learning development has caught the eye of researchers who also value security and privacy.

Vulnerabilities in these advancements and the AI applications built on them do, of course, leave users open to attack.

In response, Illinois Computer Science professor Bo Li has built her research career around trustworthy machine learning, with an emphasis on robustness, privacy, generalization, and the underlying connections among them.

"As we have become increasingly aware, machine learning has become ubiquitous in the technology world, across domains ranging from autonomous driving to large language models like ChatGPT," Li said. "It also benefits many different applications, like face recognition technology."

The troubling aspect is that we have also learned that these advancements are vulnerable to attack.

Earlier this month, Bo Li logged on to her computer and noticed several emails from colleagues and students congratulating her.

However, she wasn't sure what exactly for.

"I found out, eventually, the way we find out so much information these days: on Twitter," Li said with a laugh.

There she saw several notifications stemming from IEEE's announcement of its AI's 10 to Watch list for 2022, which included her name.

"I was so pleased to hear from current and past students and collaborators, who said such nice things to me. I'm also delighted to be a part of this meaningful list from IEEE," Li said.

The Illinois CS professor's early academic career has already become quite decorated, with honors including the IJCAI Computers and Thought Award, an Alfred P. Sloan Research Fellowship, an NSF CAREER Award, the AI's 10 to Watch list from IEEE, and the MIT Technology Review TR-35 Award.

Li's work has also earned research awards from tech companies such as Amazon, Meta, Google, Intel, Microsoft Research, eBay, and IBM, as well as best paper awards at several top machine learning and security conferences.

"Each recognition and award signifies a tremendous amount of support for my research, and each has given me confidence in the direction I'm working on," Li said. "I'm very glad and grateful to all the nominators and communities. Every recognition, including the IEEE AI's 10 to Watch list, provides a very interesting and important signal to me that my work is valuable in different ways to different people."

Calling it a golden age of AI, San Murugesan, editor in chief of IEEE Intelligent Systems, stressed the importance of this year's recipients, rising stars in a field that offers an incredible amount of opportunity.

Li thanked her mentor at Illinois CS, professor David Forsyth, as well as influences from her time at the University of California, Berkeley, such as Dawn Song and Stuart Russell.

Through their steady guidance, she has set her early academic career up for success. And Li is ready to return the favor for the next generation of influential AI academics.

"The first piece of advice I would give is to read a lot of good literature and talk with senior people you admire, and from there develop your own deep and unique research taste," Li said. "Great researchers provide insights that are both unique and profound. It's rare, and it takes hard work. But the work is worth it."

Building on an already successful start to a career focused on this problem, Li also earned $1 million to align her Secure Learning Lab with DARPA's Guaranteeing AI Robustness Against Deception (GARD) program.

The project, she said, is purely research-motivated. It separates those involved into different teams: the red team presents a vulnerability or attack, while the blue team attempts to defend against it.

Organizers believe the underlying vulnerabilities are too complex to solve within the duration of this project, but the value of the work goes well beyond simply solving them.

"For the students participating from my lab, this presents an opportunity to work on an ambitious project without the pressure of a leaderboard or a competitive end result," Li said. "It's ultimately an evaluation that can help us understand the algorithms involved. It's open source and structured with consistent meetings, so we can all work together to uncover advancements and understand them best."

The ultimate goal, for both her and her students, is to define this threat model in a more precise way.

"We cannot say our machine learning system is trustworthy against any arbitrary attack; that's almost impossible. So we have to characterize our threat model in a precise way," Li said. "And we must define trustworthiness requirements. For instance, given a task and a data set used to produce a model, we have different specific requirements."

"And then we can optimize an end-to-end system, which can give you guarantees for the metrics you care about. At the same time, hopefully, we can provide tighter guarantees by optimizing the algorithm, optimizing the data, and optimizing other components in this process."
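
In code terms, Li's point is that a guarantee only means something relative to an explicitly declared threat model. The minimal sketch below is an illustration, not code from Li's lab; the linear model and the l-infinity bound are assumptions chosen so the guarantee can be computed exactly, but it shows the kind of precise, metric-specific guarantee she describes.

```python
# Illustrative sketch only: certify a linear classifier against an explicit
# l-infinity threat model (perturbations delta with ||delta||_inf <= eps).
# Not code from Li's lab; the linear model is an assumption made so the
# worst case can be computed exactly in a few lines.
import numpy as np


def certify_linear(W: np.ndarray, b: np.ndarray, x: np.ndarray, eps: float) -> bool:
    """Return True if no allowed perturbation can change the prediction.

    For linear scores s = W @ x + b, the margin between the predicted class
    and any other class j can drop by at most eps * ||W[top] - W[j]||_1, so
    the prediction is provably unchanged if every clean margin exceeds that
    worst-case drop.
    """
    scores = W @ x + b
    top = int(np.argmax(scores))
    for j in range(len(scores)):
        if j == top:
            continue
        margin = scores[top] - scores[j]
        worst_case_drop = eps * np.abs(W[top] - W[j]).sum()
        if margin <= worst_case_drop:
            return False
    return True


# Toy usage: the certificate speaks only to the declared threat model
# (l-infinity, radius eps), not to "any arbitrary attack".
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 8)), rng.normal(size=3)
x = rng.normal(size=8)
print(certify_linear(W, b, x, eps=0.05))
```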

This continues work on the concept of trustworthy AI that Li has conducted with her students for years.

For example, a previous breakthrough addressed the persistent give-and-take between the components that make up trustworthy AI.

Researchers believed that certain tradeoffs had to occur between accuracy and robustness in systems combating machine learning vulnerabilities.

But Li said that she and her group proposed a framework called Learning-Reasoning, which integrated human reasoning into the equation to help mitigate such tradeoffs.
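
The full framework is beyond the scope of a news article, but the basic shape of pairing learning with reasoning can be sketched. The toy below is only an illustration of that shape, not the group's method; the weighted log-odds vote and the rule "sensors" are assumptions made for exposition. The idea is that a learned model's probabilities are adjusted by domain-knowledge rules, so a prediction that contradicts the knowledge can be corrected even when the main model is fooled.

```python
# Toy illustration of a learning-plus-reasoning pipeline, NOT the actual
# Learning-Reasoning framework from Li's group: a learned model's class
# probabilities are combined with domain-knowledge rules checked by
# auxiliary "sensors", so evidence that is hard to perturb (e.g., "a stop
# sign is octagonal and red") can overrule a fooled main model.
from typing import Callable, List, Tuple

import numpy as np


def reason_over_prediction(
    main_probs: np.ndarray,                        # main model's class probabilities
    rules: List[Tuple[int, Callable[[np.ndarray], bool], float]],  # (class, check, weight)
    x: np.ndarray,
) -> int:
    """Combine learned probabilities with rule-based evidence via a simple
    weighted log-odds vote (a stand-in for full probabilistic reasoning)."""
    scores = np.log(main_probs + 1e-12)
    for cls, check, weight in rules:
        satisfied = check(x)           # auxiliary attribute detector on the input
        scores[cls] += weight if satisfied else -weight
    return int(np.argmax(scores))


# Toy usage: a strongly weighted, satisfied rule corrects a borderline call.
probs = np.array([0.55, 0.45])                    # main model slightly prefers class 0
rules = [(1, lambda inp: inp.mean() > 0.5, 2.0)]  # knowledge evidence for class 1
print(reason_over_prediction(probs, rules, np.ones(4)))  # prints 1
```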

"What we're striving for is a scenario in which the people responsible for developments in AI understand that both robustness and accuracy, or safety, are important to prioritize at the same time," Li said. "Oftentimes, processes simply prioritize performance first, and then organizers worry about safeguarding it later. I think both concepts can go together, and that will help the proper development of new AI- and ML-based technologies."

Additional work from her students has led to progress in related areas.

For example, Ph.D. student Linyi Li has built a unified toolbox that provides certified robustness for deep neural networks.
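
Certified robustness means a mathematical guarantee that no perturbation within a stated threat model can flip a prediction. To give a rough sense of what such toolboxes cover, below is a minimal sketch of one standard technique, randomized smoothing (Cohen et al., 2019); the function is illustrative only and makes no claim about the actual API of the toolbox mentioned above.

```python
# Minimal sketch of randomized smoothing (Cohen et al., 2019), one common
# certification technique; illustrative only, not the API of the unified
# toolbox mentioned above.
import torch
from scipy.stats import norm


def smoothed_certify(model: torch.nn.Module, x: torch.Tensor,
                     sigma: float, n_samples: int = 1000):
    """Predict with the Gaussian-smoothed version of `model` and return
    (predicted class, certified l2 radius).

    For brevity the empirical top-class frequency is used directly; a real
    implementation lower-bounds it with a confidence interval (e.g.,
    Clopper-Pearson) before converting it into a radius.
    """
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape)
        preds = model(noisy).argmax(dim=1)             # labels under Gaussian noise
    counts = torch.bincount(preds)
    top_class = int(counts.argmax())
    p_a = counts[top_class].item() / n_samples         # empirical top-class frequency
    if p_a <= 0.5:
        return top_class, 0.0                          # abstain: nothing certified
    return top_class, sigma * float(norm.ppf(p_a))     # certified l2 radius
```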

Also, Ph.D. student Chejian Xu and master's student Jiawei Zhang have generated different safety-critical scenarios for autonomous vehicles. They will host a CVPR workshop on the topic in June.

Finally, Zhang and Ph.D. student Mintong Kang built the scalable learning-reasoning framework together.

These sorts of developments have also led to Li's involvement in the newly formed NSF AI Institute for Agent-based Cyber Threat Intelligence and OperatioN (ACTION).

Led by the University of California, Santa Barbara, the NSF ACTION Institute also aims to revolutionize protection for mission-critical systems against sophisticated cyber threats.

"The most impactful potential outcomes from the ACTION Institute include a range of fundamental algorithms for AI and cybersecurity, and large-scale AI-enabled systems for cybersecurity tasks with formal security guarantees, realized not only by purely data-driven models but also by logic reasoning based on domain knowledge, weak human supervision, and instructions. Such security protection and guarantees will hold against unforeseen attacks as well," Li said.

It's clear that, despite the speed at which AI and machine learning developments occur, Li and her students are a presence dedicated to stabilizing and securing this process moving forward.
