Project
FairCV
Computer vision has been employed in an increasingly wide range of applications, some of which have serious consequences for our daily lives. However, research has shown that learned models can produce biased results for minorities and socially vulnerable groups. Prominent examples include Google's algorithm tagging an African American's photo album under "Gorilla" and commercial gender classification systems performing worse for women and people with darker skin tones.
Researchers have proposed various definitions of fairness as desirable forms of equality, often relying on sensitive attributes such as race, gender, and age. Unfortunately, current bias detection approaches transfer poorly to the image domain for two main reasons: (i) higher-level concepts are often inferred, for example from a collection of pixels, rather than stated explicitly as features, and (ii) since the causes of bias are not limited to sensitive attributes, brute-force attempts to generate finely labeled data tend to be computationally infeasible or may even replicate existing bias.
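To make the role of sensitive attributes concrete, below is a minimal, hypothetical sketch of one common attribute-based fairness definition, demographic parity, which compares positive-prediction rates across groups. The function name and example data are illustrative assumptions and are not part of FairCV.

# Illustrative sketch (hypothetical, not FairCV code): demographic parity
# difference for a binary classifier, given its predictions and a binary
# sensitive attribute.
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    # Absolute gap in positive-prediction rates between the two groups
    # encoded in `sensitive` (0/1).
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # P(prediction = 1 | group 0)
    rate_b = y_pred[sensitive == 1].mean()  # P(prediction = 1 | group 1)
    return abs(rate_a - rate_b)

# Example: predictions for six samples, three per group.
print(demographic_parity_difference([1, 0, 1, 0, 0, 1], [0, 0, 0, 1, 1, 1]))
# -> 0.333..., i.e. a 33-point gap in positive-prediction rates

The difficulty highlighted above is that in images such group labels are usually not given as explicit features, so even this simple measurement requires inferring the attribute from pixels.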
We are investigating non-traditional mechanisms for understanding the causes of bias in the image domain, as well as tools and definitions of fairness for mitigating it.
Contact us
If you would like to contact us about our work, please refer to our members below and reach out to one of the group leads directly.