AI is not yet perfect, but it’s on the rise and getting better with computer vision – TechRepublic

It can beat anyone at chess but can't recognize fountains. One professor talks about the promise of AI and computer vision.

TechRepublic's Karen Roby spoke with David Crandall, assistant professor of computer science at Indiana University, about artificial intelligence (AI), computer vision, and the effects of the pandemic on higher education. The following is an edited transcript of their conversation.

David Crandall: I'm a computer scientist, and I work on the algorithms and technologies underneath AI. I work, specifically, in machine learning and computer vision. Computer vision is the area that tries to get cameras to see the world the way people do, and that could then power a lot of different AI technologies, from robotics to autonomous vehicles, to many other things.
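To make that a bit more concrete, here is a minimal, hypothetical sketch (not something Crandall describes) of the kind of task computer vision systems handle: labeling the main object in a photo with a pretrained ImageNet classifier from PyTorch's torchvision library. The file name photo.jpg is a placeholder.

```python
# Illustrative sketch only: label the main object in a photo with a
# pretrained ImageNet classifier. Assumes torch, torchvision, and Pillow
# are installed; "photo.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT          # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()    # inference mode
preprocess = weights.transforms()                  # matching resize/crop/normalize

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top_prob, top_idx = probs.max(dim=0)
label = weights.meta["categories"][top_idx.item()]
print(f"{label}: {top_prob.item():.1%}")
```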

SEE: Natural language processing: A cheat sheet (TechRepublic)

Karen Roby: Where are we right now, with AI? Are we where you thought that we would be, at this point? Are we moving past it? A lot of companies, we're learning now, are having to put AI projects on fast-forward, because of the pandemic, and really accelerate their use of it. Are you seeing that?

David Crandall: I think it's a really exciting time, in general, for the field, but I'd also say it's kind of a confusing time. Because, even in my lab, we often encounter problems that we think are going to be very hard, and then it turns out that they're very easy for AI. And then, on the other hand, we also encounter problems that we think are going to be easy, no problem, and then they turn out to be extremely difficult to solve.

I think that's kind of an interesting place that we're in right now, with AI. We have programs that can play chess better than any human who has ever lived, but we have robots that still make simple mistakes. Like, a year or two ago, there was a case of a security robot in a mall that didn't see a fountain in front of it, and just ran right into it and drowned itself. So, it's super confusing how we can have machines that are so powerful, on one hand, and so limited, on the other.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

In terms of the pandemic specifically, I think there's still a lot of interest in AI, as there was before, and the fact that we're doing most meetings online now, and so much of our life has become virtual, has just increased interest in AI and in technology in general.

Karen Roby: We hear a lot about the ethical use of AI. What does that mean to you? And where are we, when it comes to ethics and AI, and how people perceive this technology?

David Crandall: I've been working in AI for maybe 20 years now, again, as a computer scientist. To me, speaking personally, it's a little surprising that, suddenly, we're having to deal with the ethical issues in AI. Because for some reason, for most of those 20 years, none of the technology, at least that I was involved in, really seemed to be working well enough to impact actual people's lives. In the last few years, we've made substantial progress in AI. That means people now want to use it in regular products, in day-to-day life, and that means we really need to confront some of these issues. So there's good news, and there are also things we need to work on.

In terms of the research community that I'm involved in, I think the good news is that ethics and AI are now really at the forefront of people's minds. When you submit a paper to many conferences these days, the top conferences in AI, they're asking researchers to explicitly state the potential ethical implications of the paper or work you're doing. And your paper very well may be rejected if you can't make that argument.

SEE: AI and machine learning are making insurance more predictable (TechRepublic)

There are a lot of issues that I think computer scientists, and we as a community, need to think about in terms of AI. They range from, say, how we deal with the fact that AI is not perfect, so it's going to make errors. Those errors are going to impact people's lives, or they could impact people's lives, if we're not careful. AI is not very good at explaining its reasoning, so even when it makes an error, it's hard to figure out why it made that error.

There are concerns about bias. There's recent work, for example, showing that face recognition tends to work much better for white, middle-aged men than it does for older Black women. That's a significant concern. And, just like any new technology, that ties into concerns about how AI might increase inequality, how it might affect some types of jobs, and how it might affect employment, or unemployment, in the future.
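As a rough illustration of how such a disparity can be surfaced (a hypothetical sketch, not a method discussed in the interview), one could break a face-recognition system's error rate out by demographic group on a labeled evaluation set. The records below are invented placeholders, not real results.

```python
# Hypothetical sketch: compare a recognition system's error rate across
# demographic groups on a labeled evaluation set. The records are invented
# placeholders, not real measurements.
from collections import defaultdict

# Each record: (demographic group, was the system's prediction correct?)
results = [
    ("white middle-aged men", True), ("white middle-aged men", True),
    ("white middle-aged men", False),
    ("older Black women", True), ("older Black women", False),
    ("older Black women", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, total in totals.items():
    rate = errors[group] / total
    print(f"{group}: error rate {rate:.0%} over {total} samples")
```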

I think it also raises questions about how AI will influence us. How are we going to let AI influence us? Not in the sense that robots might gain sentience and turn into killer agents, or something like that, but just, what does it look like in a world where, maybe, people are relying on AI to interact with one another, as we do by communicating on Facebook right now, and other technologies like that?

Karen Roby: Expand a little bit on computer vision and what excites you about this.

David Crandall: I think the most exciting and interesting thing about computer vision is that vision, being able to see, seems like such a simple thing to us, right? From a very early age, we're able to recognize faces, recognize the objects that we see around us. We can give names to objects, right? It's one of the very first things that we learn to do in the first few months and years of life. And yet, how to replicate that ability has confounded computer scientists for close to 60 years.

SEE: Hiring kit: Computer Vision Engineer (TechRepublic Premium)

As a person, I just look around and I see things, and there's no conscious effort to it. And then, I've spent 20 years, and others have spent many more decades, trying to program computers to do it. It's just such a difficult, really difficult, thing to do.

That's why it's exciting for me. I think it's a really interesting technical challenge. Also, as we confront that technical challenge, I think it potentially helps us understand how people are able to solve this problem. And if we can do that, if we can understand how kids learn to see the world, maybe using computer vision as a lens, then maybe we can help kids who have learning difficulties, for example, figure out strategies around them, by reverse-engineering what's going on and figuring out how to account for that.

In terms of a research problem in our community, I also like that it touches on so many different fields. We have computer scientists, but it also touches on optics, and statistics, and cognitive science, and lots of different fields. So, it's a great thing for students or faculty who are interested in many things, to bring them together in this relatively interdisciplinary field that's trying to solve a big challenge.

Karen Roby: 2020 has thrown a lot at us, in the education space, obviously, and in higher ed. How have things changed, besides the obvious? Do you think this pandemic, in the minds of some of your students, has really changed how they look at their future careers?

David Crandall: I think, in computer science, we're probably luckier than people in other fields, like chemistry and biology. All of the work that we tend to do is portable on our laptops, and we can log into servers from wherever we are in the world. And the kinds of skills we're working on, like programming skills, are things that adapt very naturally to the online world. So, in that way, we're lucky.

SEE: Artificial intelligence can take banks to the next level (TechRepublic)

As an educator, I feel like, and I've heard this from other colleagues, also, it's really been a mixed bag. Some things that we expected we would not enjoy about online teaching have actually been great. For example, I routinely, these days, get 20, 30 students coming to my online office hours. Whereas, there's no way I could have crammed 20 or 30 students into my office hours in the real world.

So, that's an example of where the technology is actually increasing the amount of one-on-one engagement that we have. But, in terms of computer science, I think the pandemic has, in some ways, been good for technology. And there still seems to be a pretty good job market in computer science, for example.

But, like everybody else, I worry about the long-term impact on students' health, and our own mental health, from just not having those social connections that I think are really important. The more that I work in AI, the more impressed I am with people, because people have been able to do all of these things for thousands of years. Being able to interact with other people as social creatures is super important, and that's something I don't think we should try to replace with AI anytime soon.


Image: iStock/PhonlamaiPhoto
