Student Proves Algorithm 'Bias' Toward Lighter, Younger Faces; Twitter Pays $3,500

An investigation into Twitter's algorithmic bias has found that its image cropping algorithm prefers younger, slimmer faces with lighter skin.

Image Credit: Unsplash


Twitter faced an inquiry over allegations of algorithmic bias, and an investigation found that the company's image cropping algorithm favoured younger, thinner faces with lighter complexions. While the discovery is embarrassing for Twitter, which has previously apologised to users over allegations of bias, it also represents the successful conclusion of the company's first-ever "algorithmic bug bounty."

Twitter recently awarded $3,500 to Bogdan Kulynych, a graduate student at Switzerland's EPFL, who revealed the bias as part of a competition held at the DEF CON security conference in Las Vegas. The algorithm in question was designed to crop image previews to the most interesting sections of photos.

Kulynych proves the algorithmic bias

To demonstrate the bias, Kulynych generated artificial faces and ran them through Twitter's cropping algorithm to determine which qualities the program prioritised. Because the faces were artificially generated, it was possible to produce near-identical faces that differed only in skin tone, width, gender presentation, or age. The results showed that the algorithm prioritised younger, thinner, and lighter faces over older, wider, and darker ones.
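The shape of this test can be sketched in a few lines. Note that the saliency function below is a deliberately crude stand-in (mean brightness), not Twitter's actual model, and the "faces" are toy pixel grids rather than generated images; the point is only to show the comparison method of running near-identical variants through a scorer and seeing which one the cropper would favour:

```python
def saliency_score(image):
    # Stand-in for a learned saliency model: Twitter's cropper centres the
    # preview on the region its model finds most "interesting". Here we use
    # mean brightness purely as a placeholder, NOT the real model.
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def preferred_variant(variants):
    # Run each near-identical variant through the (stand-in) model and
    # return the name of the one the cropper would favour.
    return max(variants, key=lambda name: saliency_score(variants[name]))

# Toy "faces": identical 4x4 grids except for overall tone.
base = [[0.5, 0.5, 0.5, 0.5] for _ in range(4)]
variants = {
    "lighter": [[min(p + 0.2, 1.0) for p in row] for row in base],
    "darker": [[max(p - 0.2, 0.0) for p in row] for row in base],
}
print(preferred_variant(variants))  # a brightness-biased scorer favours "lighter"
```

With a real saliency model plugged in, the same comparison over many attribute pairs (skin tone, width, age) is what allowed Kulynych to quantify which faces the cropper preferred.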

Earlier, in 2020, Twitter was criticised over its picture cropping algorithm when users observed that it favoured white faces over Black faces and, even more strikingly, white dogs over black ones. However, a later investigation by Twitter's own researchers found only a slight bias in favour of white faces and women's faces.

In response, Twitter said that its team had analysed the algorithm for bias before shipping the model and did not find evidence of racial or gender bias in testing. After these findings, however, the company acknowledged that it clearly needed to undertake additional research, said it would share its findings and the actions it would take, and pledged to open-source its analysis so that others could review and reproduce it.

The firm later followed up by launching a bug bounty for algorithmic harms, offering thousands of dollars in rewards to researchers who could show how its picture cropping algorithm caused harm. Kulynych tried his luck and won the competition by uncovering the bias.

Kulynych, one of several entrants who won prizes, expressed mixed feelings about the competition, saying that algorithmic harms encompass more than just bugs.

