Machine Learning and Human Values
The Alignment Problem by Brian Christian delves into the ethical and technical challenges of aligning artificial intelligence with human values. It explores the potential risks and offers insights into creating AI that serves our best interests.
In 2015, a young web developer named Jacky Alciné got a notification that a friend had shared a photo with him on Google Photos. When he opened the app, he saw that a new user interface (or UI) had been rolled out: Google's AI was now grouping photos into categories like "graduation" or "the beach."
Alciné noticed a selfie of himself and his best friend, both of whom are black. The caption under it read "gorillas." When he opened the folder, he found dozens of pictures of him and his friend – and nothing else.
He immediately took to Twitter to call out Google Photos. Google responded within two hours and went to work on the issue. Yet as of 2023, its best solution was simply to remove the "gorillas" category from the UI; it still hasn't found a way to make the program identify gorillas without mis-categorizing black people.
To understand how this happened, we have to go back to the most photographed person of the nineteenth century – Frederick Douglass.
The famous abolitionist was happy when the technology of photography began to be accessible to the public. Until then, the only representations of black people had been in drawings done by white people. These drawings largely exaggerated black people’s features, making them appear more like animals than humans. Douglass believed photography would give black people better representation. He was happy to pose for photos for this reason, and he encouraged black people to take up photography.
This was a good start, but the problem wasn't only one of representation – it went right down to the technology itself. Essentially, cameras themselves were racist: photographic film relied on a chemical coating, and that coating was calibrated according to how the person or object in front of the camera looked under various lights.
To get good pictures on film, Hollywood took to having a white woman (the first was named Shirley) sit for the camera while the film was optimized. The coating was developed to make Shirley look her best – a process that completely disregarded people with darker skin tones.
Fortunately, Kodak resolved this issue in the '70s – not because of the civil rights movement, but because furniture and candy companies wanted better photographic representations of their products in the media. So Kodak began optimizing film to render darker tones accurately. A happy side effect (for Kodak) was that this opened up a whole new demographic of customers. On the downside, decades of photography and film were missing accurate, clear representations of anyone who wasn't white.
Fast forward to Alciné’s time and we see instances of AI not recognizing black people as human – or not recognizing their faces at all.
So now that we understand how intertwined racism is with technology, in the next section we’ll talk about why today’s AI can’t overcome this problem – and what is being done about it.
The Alignment Problem (2020) is both a history of the development of AI and a prophetic warning about what is to come. From the inherent bias in training data to the extreme speed of progress, Brian Christian details the potential dangers of the alignment problem and the solutions being attempted.
What is the main message of The Alignment Problem?
The main message of The Alignment Problem concerns the challenges and consequences of aligning artificial intelligence with human values.
How long does it take to read The Alignment Problem?
The reading time for The Alignment Problem varies by reader, but it typically takes several hours. The Blinkist summary can be read in a few minutes.
Is The Alignment Problem a good book? Is it worth reading?
The Alignment Problem is a thought-provoking book worth reading. It explores the ethical implications of AI and encourages critical reflection.
Who is the author of The Alignment Problem?
The author of The Alignment Problem is Brian Christian.