
Algorithms Can Reduce Discrimination, but Only with Proper Data

IAPP Privacy Perspectives

19 Nov 2018

Since the advent of artificial intelligence technology, there have been countless instances of machine learning algorithms yielding discriminatory outcomes. For instance, crime prediction tools frequently assign disproportionately high risk scores to ethnic minorities. This is not due to an error in the algorithm, but because the historical data used to train the algorithm are “biased”: because police stopped and searched ethnic minorities more often, this group, by extension, also shows more convictions in the data. To address this, group indicators such as race, gender, and religion are often removed from the training data. The idea is that if the algorithm cannot “see” these attributes, its outcomes will not be discriminatory.
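
One well-documented reason this “blindness” approach falls short is that other variables can act as proxies for the removed indicators. The minimal Python sketch below illustrates the mechanism on synthetic data (the variables group, postcode, and income are illustrative assumptions, not taken from the op-ed): a model trained without the protected attribute still produces disparate risk scores, because a feature correlated with group membership carries the same signal.

```python
# Illustrative sketch on synthetic data: removing the protected attribute
# does not remove bias when a correlated proxy remains in the features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (never shown to the model).
group = rng.integers(0, 2, size=n)

# "Postcode" proxy: matches group membership 85% of the time.
postcode = np.where(rng.random(n) < 0.85, group, 1 - group)

# A legitimate feature, independent of group.
income = rng.normal(size=n)

# Historically biased labels: group 1 was flagged far more often.
label = (rng.random(n) < 0.2 + 0.5 * group).astype(int)

# Train WITHOUT the group column ("fairness through unawareness").
X = np.column_stack([postcode, income])
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

# The disparity survives, because the proxy encodes group membership.
for g in (0, 1):
    print(f"group {g}: mean predicted risk score = {scores[group == g].mean():.2f}")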

In her op-ed for IAPP Privacy Perspectives, Morrison & Foerster Senior Of Counsel Lokke Moerel explains why this approach is ineffective in combatting algorithmic bias. She argues that only if we know which data subjects belong to vulnerable groups can biases in the historical data be made transparent and algorithms trained properly. The taboo against collecting such sensitive group indicators must be broken if we ever hope to eliminate future discrimination.
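
To make that argument concrete, the follow-on sketch below keeps the sensitive attribute out of the model's features but uses it to expose the disparity in the historical labels and then to reweight the training data. The data are again synthetic, the variable names are assumptions, and the simple reweighting scheme is chosen purely for illustration; the op-ed does not prescribe a specific mitigation technique.

```python
# Illustrative sketch: the group label is used for auditing and correction,
# not as a model feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)                       # sensitive attribute, audit only
feature = rng.normal(size=n) + 0.8 * group               # proxy-laden feature
label = (rng.random(n) < 0.2 + 0.5 * group).astype(int)  # historically biased labels

X = feature.reshape(-1, 1)

# 1. Transparency: measure the disparity in the historical data.
for g in (0, 1):
    print(f"historical flag rate, group {g}: {label[group == g].mean():.2f}")

# 2. Mitigation (one simple option): reweight samples so every
#    group/label combination carries equal total weight, then retrain.
weights = np.ones(n)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        weights[mask] = n / (4 * mask.sum())

model = LogisticRegression().fit(X, label, sample_weight=weights)
scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"mean risk score after reweighting, group {g}: {scores[group == g].mean():.2f}")
```

Neither step is possible without knowing who belongs to which group, which is the point of the op-ed.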

Read Lokke's op-ed.
