Grants and Contributions:

Title:
Developing robust methods for the measurement of online social media populations
Agreement number:
RGPIN
Agreement value:
$170,000.00
Agreement date:
May 10, 2017 -
Organization:
Natural Sciences and Engineering Research Council of Canada
Location:
Quebec, Other, CA
Reference number:
GC-2017-Q1-02482
Agreement type:
grant
Report type:
Grants and Contributions
Additional information:

Grant or award applies to more than one fiscal year. (2017-2018 to 2022-2023)

Recipient's legal name:
Ruths, Derek (McGill University)
Program:
Discovery Grants Program - Individual
Program purpose:

In June 2016, troubling evidence surfaced showing Google’s search engine to be “racist.” At issue were the radically different images returned by the search terms “three white teenagers” and “three black teenagers.” The former search returned wholesome pictures of teenagers playing sports and hanging out. The latter returned mugshots. Google defended the search results, indicating that its search algorithms were informed by web content and the frequency with which it was accessed and linked. Google, the company that runs software used daily by millions of people to find information on the internet, asserted that any such bias in its search results was “fundamentally a societal problem.”
And there, in essence, is the conundrum. We look to Google search, along with myriad other online services from Yelp to OkCupid to Facebook, to reflect the world. In doing so, we give it the power to reinforce the social structures within that world. When machine learning algorithms become vehicles for propagating undesirable stereotypes and marginalizing minority views, a phenomenon we call “algorithmic prejudice,” we have a problem.
Addressing algorithmic prejudice presents two challenges. First, we must detect when a prejudice exists. Second, where a prejudice exists and correction is deemed necessary, we require methods for adjusting the algorithm’s behavior. These are the central tasks we consider in this proposal.
We will tackle these problems in three phases. First, we will develop methods to aid in the detection of algorithmic prejudice. Second, in collaboration with domain experts, we will use our detection methods to produce a detailed characterization of the prejudice present in algorithms for detecting hate speech and identifying political orientation. Using these findings, we will approach the final topic of this proposal: methods for correcting algorithmic prejudice, either by modifying the algorithms themselves or by adjusting the data on which they are trained.
As increasing numbers of services migrate online, we must ensure that these systems reflect the aspirations of society, not its worst and most predatory elements. Our hope is that this work will provide viable solutions for companies to ensure prejudice-free systems and that our findings will stimulate further work on this important topic.