Building bots to benefit all

Jan 7, 2021 | Big Ideas, Winter 2021

During a recent lesson on pedestrian detection, Tom Williams’ computer vision students spent the first 15 minutes of class hashing out a major design issue. The problem wasn’t a mechanical or a coding dilemma but a moral one: How will the robots be used?

“If you’re a robot, you need to be able to tell where people are so you don’t run into them,” said Williams, an assistant professor of computer science whose research is focused on human-robot interaction. The problem sounds simple. “But there are a lot of morally fraught ways you can look at pedestrian traffic.” Depending on the design, could the robots be used to discriminate against some pedestrians? What implicit biases might designers bring to how the robot detects pedestrians? Would the robots gather data and create privacy concerns? Could they be used to cause harm, physical or otherwise, intended or not?
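To make those questions concrete, here is a minimal sketch of the kind of detector the class might be weighing. It is not code from Williams’ course: it assumes OpenCV’s stock HOG people detector, and the names `MIN_CONFIDENCE`, `STORE_FRAMES` and the input file are illustrative. The point is that even in a few lines, the designer is making value-laden choices about who gets detected and what data gets kept.

```python
# Sketch only: where value-laden design choices surface in a pedestrian detector.
# Assumes OpenCV's built-in HOG + linear-SVM people detector; MIN_CONFIDENCE
# and STORE_FRAMES are illustrative knobs, not anything from Williams' course.
import cv2
import numpy as np

MIN_CONFIDENCE = 0.5   # raising this misses more people -- but which people?
STORE_FRAMES = False   # persisting images of pedestrians is a privacy decision

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame):
    """Return bounding boxes for people scored above the confidence threshold."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    weights = np.asarray(weights).reshape(-1)
    # The threshold itself encodes a judgment: a detector trained on a
    # narrow dataset may systematically score some pedestrians lower.
    return [box for box, w in zip(boxes, weights) if w > MIN_CONFIDENCE]

frame = cv2.imread("street.jpg")          # hypothetical input frame
for (x, y, w, h) in detect_pedestrians(frame):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
if STORE_FRAMES:
    cv2.imwrite("detections.jpg", frame)  # defensible only with a consent policy
```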

Ethical concerns aren’t new to robot and artificial intelligence developers. But as technology evolves, so do the moral questions about how it’s used, who uses it and who it’s used on. Though trained in the technical aspects of this work, the designers of these technologies are also actively engaged in social and psychological conversations about race, class and gender equity and about how responsible they are for the technology they create. The ethical considerations are wide-ranging, from questions of jobs and economic security to privacy, bias and discrimination.

“In engineering design, we often feel hesitant to talk much about the personal experience the designer has brought to the product,” said Qin Zhu, an assistant professor of ethics and engineering education who co-teaches a robot ethics course at Mines with Williams. “I think that’s a missing point. In engineering, design often assumes a linear model, where if you design a good technology, society will choose it. But ‘good technology’ is often subjective. Often, it doesn’t address the concerns of society early in the design.”

Zhu, who is trained as an ethicist, noted that one way designers can address moral questions is through the team itself. “A diverse team is a very important condition for building inclusive technologies,” he said. “I think there are many different reasons why many people from different cultural backgrounds can generate a lot of great ideas. If you want to tackle a complicated issue, it often calls for morally creative solutions.”

The workforce itself is where inclusive technology starts, said Dagny Stahl, a senior computer science major who is the head of Mines’ Association for Computing Machinery-Women chapter. “With inclusivity, I think the main thing that needs to happen is that the environment needs to change. Tech needs to be less of this exclusive bubble where if you’re not already inside it, you can’t contribute,” she said. “I think to break that bubble, we need people from more backgrounds coming in.”

‘A more friendly future’

When anti-racism protests broke out around the globe this summer, Williams and other Colorado-based robotics professors co-authored an “open letter from robotics researchers” addressing the role designers play in creating technologies, such as non-consensual facial recognition and predictive policing models, that can be used by law enforcement against marginalized communities.

The open letter also calls for “requiring state and local governments to acquire informed public consent before acquiring robotic technologies.” That demand is one slice of a larger conversation about consent happening right now. Big tech companies wield pervasive public power, Zhu said, but they didn’t consult the public before deploying these technologies, some of which are changing people themselves.

“Artificial intelligence and robotics are so pervasive in your everyday life,” Zhu said. “If you live with a Google Home for 10 years, this kind of influence is very nuanced but sometimes significant. If you look much longer-term, you may become a different person without even thinking, in your moral reasoning.”

The next generation of engineers is already thinking about these and other moral questions, though, Zhu said. “I have a lot of hope in Generation Z. When I talk to them, they are worried about privacy, not just in robotics and AI but in technology in general, and about how it can build a more friendly future.”