Aristotle, AI and the Data Scientist | The ILR School – Cornell University

Nearly two and a half millennia after his time, Aristotle and his "virtue ethics" (in short, the call to live a life of good character) are every bit as relevant to budding statisticians as the technical skills they learn to build AI models, according to Elizabeth Karns, senior lecturer of statistics and data science at Cornell Bowers CIS and at the ILR School.

An epidemiologist and lawyer, Karns launched Integrated Ethics in Data Science (STSCI 3600) several years ago in response to what she viewed as a disconnect between statisticians and the high-powered, high-consequence statistical models they were being asked to build. The seven-week course is offered twice each spring semester.

"I started thinking more about algorithms and how we are not preparing students sufficiently to confront workplace pressures to just get the model done: put in the data, don't question it, and just use it," she said.

The problem, as she sees it, is that these models are largely unregulated, have no governing body, and thus skirt rigorous scientific testing and evaluation. Lacking such oversight, ethics and fairness become a matter of discretion on the part of the statisticians developing the models; personal values and virtues are brought into the equation, and this is where Aristotle's wisdom proves vital, she said.

"At this point in our lack of regulation, we need to depend on ethical people," Karns said. "I want students to learn to pause and reflect before making decisions, and to ask: How well does this align with my values? Is this a situation that could lead to problems for the company or users? Is this something I want to be associated with? That's the core of the class."

For the course, Karns, with the help of Cornell's Center for Teaching Innovation (CTI), developed an immersive video, "Nobody's Fault: An Interactive Experience in Data Science Practice," which challenges students to consider a moral conflict brought about by a bad model.

"I tell my students that we're going to be in situations in this class where there's not a clear right or wrong answer," she said. "And that's the point: to struggle with that ambiguity now and get some comfort in that gray space. That way, when they get out into the workplace, they can be more effective."

To read more about the work Bowers CIS is doing to develop responsible AI, click here.

Louis DiPietro is a public relations and content specialist for Cornell Bowers CIS.
