Tech x Society: AI and Human Rights

AI was created to improve processes and make life easier overall — but what do we do when the lines between “helping” and “hurting” become blurred?

--

Everywhere you turn, people seem to be talking about AI, and with good reason. Artificial Intelligence has become an integral part of our everyday lives, powering everything from news feeds to insurance claim approvals to military drones. AI is pervasive across industries as well, from healthcare and manufacturing to finance and agriculture. Even if we don’t realize its scale, it’s playing an ever-larger role in our daily routines. Ask Alexa a question? AI. Use Google Maps to find your destination? AI. Call a help line? AI.

Despite its ubiquitous nature, there’s no consensus on exactly what AI means. Google the term and you’ll be inundated with varying definitions of the technology. A few examples:

  • Artificial Intelligence (AI) is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans.
  • AI refers to technology that enables machines to think like human beings and robots to work the way humans do.
  • Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
  • AI makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks.

Generally speaking, AI means machine learning algorithms: algorithms that learn from data and use what they learn to make decisions or predictions, from recognizing a face to estimating the likelihood that someone will default on a loan. The advantages of AI are clear, but there are also concerns about unintended consequences and the potential for abuse.
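To make that concrete, here is a minimal sketch in Python (using scikit-learn, with fabricated toy data; the features, numbers, and resulting probability are illustrative, not drawn from any real lending model) of an algorithm learning from labeled examples and then estimating the likelihood of default for a new applicant:

    # Toy illustration of "learning from data": fit a model on labeled
    # examples, then estimate default risk for an unseen applicant.
    # All numbers below are fabricated for illustration only.
    from sklearn.linear_model import LogisticRegression

    # Each row: [annual income in $1000s, debt-to-income ratio]
    X_train = [
        [30, 0.60], [85, 0.10], [45, 0.45],
        [120, 0.05], [25, 0.70], [95, 0.20],
    ]
    y_train = [1, 0, 1, 0, 1, 0]  # 1 = defaulted, 0 = repaid

    model = LogisticRegression()
    model.fit(X_train, y_train)  # the "learning" step

    # Estimated probability of default for a new applicant
    applicant = [[55, 0.35]]
    print(model.predict_proba(applicant)[0][1])

The same pattern, a model fit on historical examples and then applied to new inputs, underpins everything from face recognition to feed ranking; only the data and the model’s complexity change.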

BCG Digital Ventures chose this fascinating subject for the latest installment in our Tech x Society series. Lisa Lehmann, Director of Bioethics & Health Trust at Google, was joined by Aaina Agarwal, Producer and Host of The Indivisible AI Podcast, for an insightful discussion moderated by Anthony Koithra, Managing Director and Partner at BCG Digital Ventures and Founder of bluewhitered.org.

These renowned experts came together to share their insights into the world of AI, what this revolutionary technology means for businesses and consumers, and how it’s being used to promote — and undermine — fundamental human rights around the globe. A number of key themes emerged:

AI relies heavily on humans. Mimicking the abilities of the human mind lies at the heart of AI, but that doesn’t mean humans have been eliminated altogether. On the contrary, humans are intimately involved in the process: framing the problem, preparing the data, and determining which data set to use to train the models. Humans also play a key role in keeping bias out of the equation, starting with checks like the sketch below. “We have an obligation to ensure there isn’t bias in the training data so we get an outcome that is generalizable to the entire population and not enforcing bias,” said Lehmann. “It’s an iterative process and one that requires a lot of careful thought about what we’re trying to achieve.”
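As a deliberately simplified illustration of the kind of check Lehmann describes, the Python sketch below compares historical outcome rates across groups in a training set before any model is fit; the column names and values are hypothetical:

    # Hypothetical pre-training check: do historical labels encode very
    # different outcome rates for different groups? A large gap is not
    # proof of bias, but it is a prompt for humans to investigate.
    import pandas as pd

    training_data = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   0,   1,   0,   0],
    })

    rates = training_data.groupby("group")["approved"].mean()
    print(rates)                              # A: 0.75, B: 0.25
    print("gap:", rates.max() - rates.min())  # 0.5

In practice this is only one of many possible checks, and, as Lehmann notes, the process is iterative: rebalancing or relabeling the data changes the model, which changes what needs to be examined next.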

The potential for abuse is enormous. Algorithmic profiling is a serious problem, and how serious often depends on the context in which the technology is used. Using facial recognition to unlock a phone carries a very different risk profile than using it at an airport, for example, where there is transparency about its use and people opt in. A tension also exists between legitimate uses of biometric data and facial recognition for law enforcement and national security, such as identifying criminal suspects and people who have been reported missing, and the potential for overreach. When governments employ the technology without the public’s awareness and consent, it becomes problematic. “The more we put the technology out there, the more people can subvert it,” said Lehmann. “We have to be so cautious and thoughtful about the potential for misuse, abuse, and harm.”

Algorithmic profiling impacts self-determination. Personalized algorithmic media delivery, such as through Facebook, Twitter, Instagram, or other social media outlets, exposes people to a narrow stream of content that not only reinforces their existing identity but also insulates them from the alternative viewpoints they would need to critically examine the ideas that identity is built on. “When you look to a lot of the modern politics in the U.S. and abroad, those dynamics are a contributing factor to a lot of the political dynamics you see playing out,” said Agarwal. “Algorithmic control of the media impacts us personally and, therefore, the free character of society in many ways.”

The future of AI is filled with hope. Despite their concerns about the potential abuse of AI, our experts expressed great hope for the technology over the next decade. Beyond its potential to improve health outcomes for billions of people, AI can also be leveraged to improve well-being around the world. “We should be taking a step back and thinking about where we can leverage AI to meet the greatest needs of society on a global scale to address things like poverty and climate change,” said Lehmann. “Optimizing clean energy development, exploring how to avoid waste using autonomous vehicles to make transportation more efficient, or monitoring the environment in terms of weather predictions to better predict wildfires and floods — AI can do so much good in the world.”

Interested in joining BCGDV? See our current vacancies.

Want to find out more? Start the conversation with BCGDV.

Find us on Twitter @BCGDV, LinkedIn, and Instagram.

--

BCG Digital Ventures, part of BCG X, builds and scales innovative businesses with the world’s most influential companies.