Computer Science Professor Cynthia Dwork Delivers Annual Ding … – Harvard Crimson

Harvard Computer Science professor Cynthia Dwork discussed the shortcomings of risk prediction algorithms at the Center of Mathematical Sciences and Applications' annual Ding Shum lecture Tuesday evening.

During the talk, titled "Measuring Our Chances: Risk Prediction in This World and its Betters," Dwork presented her research to a crowd of nearly 100 attendees. The Ding Shum lecture, funded by Ding Lei and Harry Shum, covers an active area of research in applied mathematics. For the previous three years, the series had been canceled due to the Covid-19 pandemic.

Tuesday's discussion, which was also livestreamed online, was moderated by CMSA postdoctoral fellow Faidra Monachou.

Dwork began her talk by presenting what she described as a fundamental problem in how algorithms are applied.

"I'm claiming that risk prediction is the defining problem of AI, meaning that there's a huge definitional problem associated with what we commonly call risk prediction," Dwork said.

She said that although predictions may assign a numerical probability to an event, such predictions are very difficult to interpret for one-time events, which either happen or do not.

"You have, maybe, intuitive senses of what these mean. But in fact, it's quite mysterious. What is the meaning of the probability of an unrepeatable event?" Dwork asked.

In addition, it can be difficult to tell whether a prediction function is accurate based on observing binary outcomes, Dwork said.

"If I predict something with three-quarters probability, both yes and no are possible outcomes. And when we see one, we don't know whether I was right about that three quarters," Dwork said.

"How do we say: is that a good function, or is that a bad function? We don't even know what it's supposed to be doing," she added.
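Dwork's point is that a single binary outcome can never confirm or refute a probabilistic prediction; only aggregate checks across many predictions can. A standard way to make this concrete is a calibration check: among all cases assigned probability 0.75, did roughly 75 percent actually occur? The sketch below is purely illustrative and not drawn from the lecture; the helper name and tolerance are assumptions.

```python
import random

def calibration_check(predictions, outcomes, bucket=0.75, tol=0.05):
    """Hypothetical helper: among outcomes whose predicted probability
    equals `bucket`, test whether the empirical rate is within `tol`."""
    hits = [o for p, o in zip(predictions, outcomes) if abs(p - bucket) < 1e-9]
    if not hits:
        return None
    rate = sum(hits) / len(hits)
    return abs(rate - bucket) <= tol

random.seed(0)
# A predictor that always says 0.75, applied to events that truly
# occur with probability 0.75.
preds = [0.75] * 10_000
outs = [1 if random.random() < 0.75 else 0 for _ in preds]

# A single outcome (just a 0 or a 1) says nothing about the 0.75 claim,
# but across 10,000 predictions the empirical rate should sit near 0.75.
print(calibration_check(preds, outs))  # True for this well-calibrated predictor
```

The check says nothing about any individual prediction, which is exactly the interpretive gap Dwork describes for unrepeatable events.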

To illustrate the complexity of the issue, Dwork asked the audience to consider two worlds. In one, there is no uncertainty in what's going to happen in the future, but current predictors may lack enough information to make an accurate guess. In the other, there's real inherent uncertainty, meaning that outcomes may change even if prediction processes are perfect.

The issue, Dwork said, is that these two worlds are "really indistinguishable."

Since all algorithms take in inputs to predict a yes-or-no output, Dwork said the predictions will not reveal whether real life is arbitrary and real-valued or binary.

But Dwork said that in her research, she has been able to draw from the field of pseudorandom numbers to help detect the difference between these two situations.

"So now, instead of trying to determine whether a sequence is random or pseudorandom, our distinguishers are trying to determine whether an outcome is drawn from real life," she explained.
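In the pseudorandomness setting Dwork is borrowing from, a "distinguisher" is any efficient statistical test that tries to tell two sources of data apart. A toy example, not Dwork's construction, is a frequency test: flag a bit sequence whose proportion of ones strays too far from one half. The function name and threshold below are illustrative assumptions.

```python
import random

def frequency_distinguisher(bits, tol=0.02):
    """Toy test in the spirit of pseudorandomness distinguishers:
    a uniform source should produce ones at a rate near 1/2.
    (Illustrative only; not the construction from the lecture.)"""
    rate = sum(bits) / len(bits)
    return "plausibly random" if abs(rate - 0.5) <= tol else "distinguishable"

random.seed(2)
fair = [random.randint(0, 1) for _ in range(10_000)]                  # uniform bits
biased = [1 if random.random() < 0.6 else 0 for _ in range(10_000)]   # skewed source

print(frequency_distinguisher(fair))    # plausibly random
print(frequency_distinguisher(biased))  # distinguishable
```

Dwork's analogy swaps the two sources: instead of random versus pseudorandom bits, the distinguisher sees outcomes and asks whether they came from real life or from a prediction model.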

Dwork, who is also a faculty affiliate at Harvard Law School and a distinguished scientist at Microsoft Research, is known for her contributions to the areas of differential privacy and algorithmic fairness. In 2020, she received the Institute of Electrical and Electronics Engineers Hamming Medal for her work in privacy, cryptography, and distributed computing, and for leadership in developing differential privacy.

To conclude her talk, Dwork presented a roadmap of her future research.

"The next step is to try to develop an understanding of what it is that transformations that satisfy this technical property can actually accomplish," Dwork said.
