INDIVIDUAL PROJECT - FALL 2019 ROLE: Designer, Researcher, Technologist
Machined Data

Machined Data presents two speculative scenarios around civic data collection for machine learning, illustrating different ways bias is introduced in the process of data collection and imagining how we might intervene within those systems.
Full case study coming soon.
Scenario 1: Corporate Data Collection

The first speculative scenario is an extrapolation of current consumer surveillance tools like the Ring doorbell. This scenario imagines a major tech company creating a machine learning product that allows users to collect images to train image classification algorithms personalized to their neighborhood.
The system would pull data from neighborhood social media apps like Nextdoor to create labels that end up serving as codewords for xenophobia and racism, such as “suspicious activity” and “loitering.”
What would happen if the categories were relabeled to reflect structural and systemic causes rather than the symptoms of those issues?
Could the infrastructure be reappropriated to identify McMansions or other symbols of inequity, rather than automate oppression and prejudice?
Scenario 2: Municipal Data Collection

The second scenario imagines a local smart city initiative in which residents become data surveyors, creating a dataset that maps residents’ sense of fear, safety, and security within the city in an effort to determine where city funds should be spent.
This scenario is meant to facilitate conversations around how smart city tech initiatives typically sample only the most well-off, property-owning residents. It also asks why the main solution proposed to correct this “sampling bias” is to put already vulnerable people of color under further surveillance.