APR 1 & 8 | 12:00 PM — 1:15 PM ET (both days)
Bracketing the commonly used categories of surveillance and privacy within discussions of technological creep into all domains of life, we instead ask participants to consider the multiple, contested, and potentially hopeful axes of seeing and being seen within, through, and against software, algorithms, automated systems, and platforms. We ask not only how these axes of seeing work within and against racial-colonial and gendered scopic regimes that police and manage labor and populations, but also how they draw the boundary between “the private” and “the public.” While accounting for various technological objects and platforms as part of digital surveillance capitalism, we also ask: what is not captured about care and vision within liberalism’s binary rubrics of surveillance and privacy, which reproduce the primacy of the free, self-possessed individual as the political ideal? How are the boundaries of inside and outside, family and stranger, and subjects/objects worthy of being seen, watched, or monitored drawn through the design and intended uses of particular caring technologies? What is it that remains unseen, as in unrecognized, unnoticed, or otherwise unworthy of our attention?
APRIL 1 REGISTRATION LINK
APRIL 8 REGISTRATION LINK
ORGANIZERS
Nassim Parvin, Associate Professor, School of Literature, Media, and Communication, Georgia Tech
Neda Atanasoski, Professor and Chair of Feminist Studies, University of California, Santa Cruz
STUDENT COLLABORATORS
Aditya Anupam, PhD Candidate, Digital Media, Georgia Tech
Pooja Casula, PhD Student, Digital Media, Georgia Tech
Shubhangi Gupta, PhD Student, Digital Media, Georgia Tech
FUNDING AND SPONSORS
Digital Integrative Liberal Arts Center, Georgia Tech
Ethics, Technology, and Human Interaction Center (ETHICx), Georgia Tech
Center for Racial Justice, UC Santa Cruz
PARTICIPANTS
Claudio Celis Bueno, Research Fellow, Sant’Anna School of Advanced Studies
Katie Keliiaa, Assistant Professor, University of California, Santa Cruz
Tamara Kneese, Assistant Professor, University of San Francisco
Iván Chaar López, Assistant Professor, University of Texas at Austin
Erin McElroy, Postdoctoral Researcher, AI Now Institute
Renee Shelby, Postdoctoral Fellow, Northwestern University
Luke Stark, Assistant Professor, University of Western Ontario
Mitali Thakor, Assistant Professor, Wesleyan University
SPEAKERS
Claudio Celis Bueno
Research Fellow, Sant'Anna School of Advanced Studies
Beyond Labour: Reflections on Technology, Capitalism and Resistance
The Covid crisis has brought a radical massification and naturalisation of technologies, in particular those related to digital telecommunications (online teaching, videoconferencing, working from home, telemedicine, etc.). For many, these technologies have been the only way to safeguard some sort of continuity with pre-Covid conditions of existence (labour, social relations, entertainment, shopping, etc.). At the same time, however, these technologies have also entailed a process of precarisation for many, intensifying mechanisms of exploitation, surveillance, and social injustice that were already at play in pre-Covid times. To a large extent, the implementation of technology during the Covid crisis has responded to the need to ensure the reproduction of capitalist (neoliberal) social relations. Furthermore, the intensification of neoliberal forms of precarisation has made the contradictions of this mode of production even more evident. Nevertheless, it has also created new spaces and strategies of resistance.
Following these general considerations, this paper will develop some critical reflections on the question of ‘domestic labour’, both in the sense of traditional household labour
as well as in the new sense of teleworking. On the one hand, teleworking has normalised the expansion of labour time to what Jonathan Crary has called ‘24/7 capitalism’. This seems
to reinforce the thesis of the ‘social factory’ put forth by Italian post-operaist thinkers. On the other hand, the merging of the domestic space and the working space under one single
screen has helped to unveil the sexual division of labour that structures many households around the world (and that feminist authors have been denouncing for decades). In response to this,
many feminist groups have called for an appropriation of this screen to render visible the asymmetric material conditions and power relations that shape this division of labour.
To develop these critical reflections on the relation between labour and technology under Covid conditions of existence, this paper will compare two theoretical strands.
Following Silvia Federici and Amaia Pérez Orozco, the first part will examine the issue from an intersectional approach that highlights the crossroads of gender and class relations. Complementarily, the second section will follow Rosi Braidotti’s recent work to explore the connection between technology, labour, and power relations from a critical posthumanist perspective. The final aim of the paper is to use this comparison to highlight the strengths and limitations of each of these approaches.
Claudio Celis Bueno is a Research Fellow at the Sant’Anna School of Advanced Studies (Pisa, Italy). He holds a PhD in Critical and Cultural Theory from Cardiff University. He has developed post-doctoral research on the relationship between technology, images and power. He is the author of the book The Attention Economy: Labour, Time and Power in Cognitive Capitalism (2017) and several journal articles. He has taught courses in Film Studies, Political Philosophy, Critical Theory, and Visual Studies.
Katie Keliiaa
Assistant Professor, University of California, Santa Cruz
Scales of Containment, Sexual Surveillance, and Native Women’s Bodily Regulation in the 20th-Century Bay Area
This talk demonstrates how bodily regulation unfolded for Native women domestic workers in the 20th-century Bay Area. Scholars have examined Indian health in the context of boarding schools and in specific tribal communities, but none have considered urban health in the early 20th century, much less through the lens of Native women. This talk addresses the overarching question: how did sexual surveillance in the Bay Area Outing Program affect Native women?
To this end, I examine the state of Indian health throughout the 19th and early 20th centuries, providing a context of federal negligence and rampant disease. I pay close attention to the prevalence of sexually transmitted infections. While gonorrhea and other diseases were common, in the early 20th-century U.S., syphilis was by far the most deadly and difficult to treat. In my analysis of this highly stigmatized disease, I highlight how anti-venereal disease efforts targeted the “immoral” woman. These ideologies fused the notion of female delinquency and sexually transmitted diseases, leading to widespread policing of young women’s bodies. I connect this history to sexual surveillance within the Bay Area Outing Program and the criminalization of Native women. Through close analysis of case files, I trace the various “scales of containment” Native women experienced: first in boarding schools, then in outing homes, and finally in various Bay Area institutions such as the Salvation Army Home and juvenile detention centers. Through these intimate stories I consider the “care,” and particularly the violence of care, that fiercely controlled and contained Native women. Thus, I demonstrate the ways in which the settler state attempted to, and at times succeeded in, managing and controlling Native bodies.
Caitlin “Katie” Keliiaa is an Assistant Professor of Feminist Studies at UC Santa Cruz. She is an interdisciplinary feminist historian specializing in 20th-century Native experiences in the West. Her scholarship engages Indian labor exploitation, dispossession and surveillance of Native bodies especially in Native Californian contexts. Her book project examines how Native women domestic workers negotiated and challenged an early 20th-century Indian labor program based in the San Francisco Bay Area. In this work Dr. Keliiaa centers Native women’s voices uncovered from federal archives.
Tamara Kneese
Assistant Professor, University of San Francisco
Tracking for Two: Surveillance and Self-Care
A seeming explosion of pregnancy tracking apps, the majority of which are backed by venture capitalists and founded by men, promises to optimize reproduction. Pregnancy, especially the uncertain first trimester, can be an isolating experience. Unlike doctors, apps provide around-the-clock support. Their automated updates
and scheduled pings, however, are embedded with ads and the values of app designers, and thus often make heteronormative, classist, and racist assumptions. Apps also promise
community through their message boards, enabling users to crowdsource answers to intimate questions and provide support for one another through anonymous posting infrastructures.
In this paper, I focus on the problems with outsourcing community and healthcare to apps, relating fertility startups to feminist discourses around neoliberal productivity
and self-tracking as self-care in the United States (Dow Schüll 2017; Gregg 2018; Hobart and Kneese 2020). Feminist technology scholars have discussed pregnancy in relation to
surveillance and privacy, as users’ sexual habits and bodily functions are monitored (Lupton 2015) while creating digital footprints for children who are not yet born (Barassi 2017):
the supposed sins of mothers may even be used against their future children, affecting their insurance premiums and job opportunities (Glabau 2018). Apps track pregnant people alongside
their fetuses and social networks, leveraging data-based surveillance as a form of care. What is the relationship between structural inequalities in healthcare and the specter of care
offered by venture-backed technologies?
Drawing on histories of 1970s feminist praxis positioning self-knowledge as self-care—a way of wresting control of reproductive technologies and expertise away from a racist,
colonialist, and sexist medical establishment (Nelson 2011; Murphy 2012), as well as longer histories of self-care as an instrument of oppression, productivity enhancement, and eugenics
within the life insurance industry (Brine and Poovey 2013; Bouk 2015)—and through a textual and socio-technical analysis of contemporary pregnancy tracking apps, including my personal use
of such apps during my own pregnancy, I argue that apps replace institutional and kinship-based forms of care work in the US, particularly for those without access to adequate medical
services. Unlike historical forms of self-knowledge as community care, app-based health interventions for pregnant people are tied to medical boards, insurance companies, and practitioners,
as medical schools partner with or even sponsor corporate apps.
The Covid-19 pandemic lays bare the kinds of social inequalities that leave some oppressed groups more likely to die than others, while also intertwining sacred embodied care
rituals with corporate platforms and apps like FaceTime and Zoom. Doulas comfort their clients and celebrants conduct sacred ceremonies through digital means. In this moment when
corporate platforms have more reach and power than ever, the importance of what I call platform temporality becomes abundantly clear. Tensions within care, and the relationship between
self-care and collective movements, are being intensified by the current crisis. Can apps intended for productivity and surveillance give way to more radical social movements around caring
for the self and for others?
Tamara Kneese is Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco. Prior to her appointment at USF, Kneese received a PhD from the Department of Media, Culture, and Communication at New York University, an MA in Social Sciences and Anthropology from the University of Chicago, and a BA in Anthropology from Kenyon College. Her work has been supported by the Mellon Foundation, the American Council of Learned Societies, the Data & Society Research Institute, the Intel Science & Technology Center for Social Computing, the Consortium for History of Science, Technology, and Medicine, and the Social Sciences and Humanities Research Council. Her first book on digital death care practices is under advance contract with Yale University Press. Her next major project looks at tech industry labor organizing and worker resistance, from subcontracted workers at tech campuses to the gig economy and platformized retail and service labor. In general, Kneese writes and teaches about the intersection of digital media and material culture. Her work has been published in academic journals such as Social Media + Society, Cultural Studies, and Social Text and in popular outlets including The New Inquiry, Logic, Real Life, Slate, and The Atlantic.
Iván Chaar López
Assistant Professor, University of Texas at Austin
Un-Civil Technoscience: Anti-Immigration and Citizen Science in Boundary Making
Citizen science is often framed through debates about the expansion of democratic participation in the
production of scientific knowledge (Irwin; Irwin & Horst). Scholars have shown the important role of activists in defending the articulation
of “lay expertise” as a means to contest yet expand the normative politics of science and state making (Epstein). Wider participation has meant an
expansion of who gets to be included within the bounds of citizenship.
The trope of a “flooded” border overwhelming United States government officials is a recurring theme in debates about border and immigration enforcement,
with particular intensity since the Cold War. Framing the border as “out of control,” actors maneuver to justify renewed investments and commitments to
treating some populations as “intruders” and “enemies” of the nation. In the early 2000s, anti-immigrant paramilitary organizations like the American
Border Patrol (ABP) pushed the U.S. government to expand and innovate its security operations. These organizations used small unmanned aerial systems,
streaming cameras, and Internet user-generated content to augment the reach of border enforcement. ABP should be understood not only as a paramilitary
organization but as a citizen technoscience venture.
This paper traces the collaborations between ABP, defense contractors, and the federal government in the articulation of what I call the border technopolitical regime: a sociotechnical arrangement of entities enrolled in producing and governing the material boundaries of the US nation. The paper asks: what kind of citizenship is enacted in the technoscientific projects of paramilitary organizations such as ABP? Answering this question sheds light on the limitations of citizen science and calls into question its wider acceptance as a democratizing endeavor. The paper builds on the work of scholars who have shown the relation between citizen science and neoliberal governmentality (Kimura), and the capacity of civil society, generally seen as central to citizen science, to be uncivil (Beittinger-Lee; Chambers). In this sense, it argues that, in the context of the US, citizen technoscience has been central to the articulation of a White supremacist sovereignty.
Iván Chaar López is an assistant professor in Digital Studies in the Department of American Studies at the University of Texas at Austin, where he leads the Border Tech Lab. His research and teaching examine the politics of digital technologies. He is especially interested in the place of Latina/o/xs as targets, users, and developers of digital lifeworlds.
Erin McElroy
Postdoctoral Researcher, AI Now Institute
Landlord Technologies of Surveillance Capitalism in the Pandemic Past and Present
As many renters know too well, Covid-19 has dramatically exacerbated racialized housing injustice globally.
Evictions, houselessness, and corollary health impacts are on the rise, landing upon deeply entrenched geographies of racial capitalism. In response,
there has been a growing movement to cancel rent and evictions, facilitating numerous housing court blockades, rent strikes, and mutual aid networks. Yet
landlords often claim to feel “threatened” by this tenant justice movement, and in response have opened the floodgates of technological solutionism, using novel
technologies to surveil and evict tenants. As I continue to explore, landlord tech, while alive and well prior to the pandemic, has a long history of exploiting crises to augment its scope. Further, while contemporary landlord tech employs algorithms and artificial intelligence, it also galvanizes a deeper history of private property itself functioning as a technology of racial surveillance and dispossession. More often than not, today’s landlord tech is paternalistically implemented under the auspices of caring for tenants, yet it privileges care for buildings and their value over those living within them.
In this paper I explore landlord tech’s pandemic expansion as it implements novel surveillance and tracking technologies into intimate and domestic space.
From biometric cameras controlling building access to AI tenant screening processes, I outline the new modes of seeing that landlord tech implements. By framing
this within a longer trajectory of crisis and surveillance capitalism, I also look to continuities, discontinuities, and novel conjunctures of the pandemic present.
At the same time, I explore housing justice collectives’ utilization of maps and software to flip the gaze back upon landlords themselves. New tools such as the Anti-Eviction
Mapping Project’s Evictorbook are enabling tenants to study landlord property portfolios, eviction histories, and webs of shell companies. Tools such as this support
multi-building organizing work and the broader movement to abolish rent.
Erin McElroy is a postdoctoral researcher at New York University’s AI Now Institute, researching the digital platforms and technologies used by landlords in order to surveil, evict, and racialize tenants. Erin is founder of the Anti-Eviction Mapping Project and coeditor of its forthcoming atlas, Counterpoints: A San Francisco Bay Area Atlas of Displacement and Resistance. Erin earned a doctoral degree in Feminist Studies from the University of California, Santa Cruz, with a focus on the politics of space, race, and technology in and between postsocialist Romania and post-Cold War Silicon Valley. Erin is also founding editor of the Radical Housing Journal.
Renee Shelby
Postdoctoral Fellow, Northwestern University
Hesitancy, Solidarity, and Whiteness: The Limits and Possibilities of Rape Reporting Apps
One of the mainstream public lessons of #MeToo is that authorities often disregard reports of assault and deny survivors accountability.
In response, technologists and advocates have championed “rape reporting apps” as a way to address the gendered power dynamics of reporting violence, and confront survivors’ hesitancy in
reporting violence alone. Rape reporting apps are promoted as offering survivors an equitable means to communicate safely and anonymously with other victims and to archive encrypted records
of their assaults to allow them to come forward at a time and place they feel safe. Promoted for “survivors” broadly, the mobilization of reporting apps seeks to increase institutional
accountability by reducing the risk of coming forward alone. While framed as a way to generate survivor solidarity and collective action, the popularization of reporting apps raises
urgent questions about what justice paradigms, forms of surveillance, and social and data relations are enabled through these systems.
Broadly, two research questions guide this paper: (1) What social relations and issues of inequality are reporting software responding to?
And (2) How can practitioners engage in intersectional design practices in order to mobilize against a wider range of survivor injustices?
Using insights from feminist Science and Technology Studies and whiteness studies, I answer these questions through a discursive analysis of three
popular sexual violence reporting apps (Callisto, Spot AI, and JDoe), examining their interfaces and protocols, public advertisements and published reports
on user data, and media coverage of the apps between 2015 and 2021.
Despite efforts to collectivize the reporting experience, I find that reporting apps contribute to an inclusion-through-technology paradigm that positions anti-sexual violence technology as an efficient risk-management solution and techno-fix, despite its insufficient attention to racialization and class hierarchy. Without accounting for how whiteness operates in cultural constructions of vulnerability and survivorship, it is not possible to dismantle the rape myths that negatively constitute the hegemonic reporting experience, to the detriment of all survivors. As a result, rape reporting apps challenge only certain oppressive practices while maintaining existing racial and ethnic hierarchies, thus limiting the potential transformative outcomes of these resistive technologies.
Renee Shelby is a Postdoctoral Fellow with the Sexualities Project at Northwestern (SPAN). Her work examines the relationships between surveillance technologies and social inequalities, with particular attention to the politics of sexuality, gender, race, and nation. In her current project, she studies how technology has become a political strategy of social movements and examines the potentials and limits of anti-sexual violence technology as a mediator of justice and its role in upholding racial and gendered inequality. Her research has received support from the Carnegie Mellon/ACLS foundation and the American Sociological Foundation.
Luke Stark
Assistant Professor, University of Western Ontario
Situating London’s AI Homelessness Model: Considering Historical, Social, Political, Legal and Policy Contexts
In collaboration with Joanna Redden, Dan Lizotte, Alissa Centivany, Melissa Adler
In August 2020, the city of London, Ontario announced that it had developed a Chronic Homelessness Artificial Intelligence (CHAI) model to help officials predict which individuals are likely to become chronically homeless. London’s use of AI to tackle homelessness reflects a larger shift in governance, as governments around the world increasingly turn to AI and automated prediction systems to better target services, increase efficiency, and lower costs in the social provision of care. The promise of AI carries with it significant risks, given the range of harms people are experiencing as a result of this datafied turn by governments. This paper will describe exploratory research on the context for and conditions under which this AI model has been developed, situating London’s turn to AI to tackle homelessness within its historical, social, political, legal, and policy context in order to better understand the social justice implications of this system and what can be learned from the design and implementation approaches taken by the City. In doing so, we seek to better understand the risk/benefit trade-offs of CHAI as understood by city planners, and to understand how CHAI could inform similar projects elsewhere.
Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University.
Mitali Thakor
Assistant Professor, Wesleyan University
Digital Abolitionism: Caring for the Future Child
Amidst the QAnon frenzy that led up to the 2021 far-right insurrection at the US Capitol were calls to combat the “epidemic” of child sex trafficking taking over digital platforms. Moral panics about child trafficking are not new, but this latest iteration has taken on a peculiar form with calls for platform regulation and the dismantling of Big Tech alongside QAnon’s more outlandish claims of the rise of Satanic, cannibalistic pedophile rings. In this paper, I describe the contemporary anti-trafficking movement as a counter-network to the supposedly networked trafficking that it seeks to address: under the promise of rescuing child victims online, new actors with digital expertise are becoming enrolled into this counter-network and at times co-opting the language of abolitionist organizing to justify their fight against ‘modern-day slavery.’ The contemporary digital sex panic motivating child protection fuses child pornography legislation (communications law) with anti-trafficking legislation (antiviolence law). Victims of child pornography are virtualized as imagined innocents constantly under threat by abstract ‘pedophiles’ and would-be sexual exploiters. The virtual child acts as a securitizing, disciplining force justifying the deployment of increasingly invasive digital policing technologies, as exemplified in the US with the passage of recent legislation against trafficking, sexual services sites, and content moderation. This paper examines the ontologies of insecurity within the constitution of the political: in order to care for a virtual, future child, digital architecture must require the insecurity and policing of imagined virtual predators. I highlight the ongoing tension between control and freedom that characterizes calls for the regulation and algorithmic maintenance of the virtual public sphere, and consider how liberal demands for privacy intersect with attachments to the perpetually imperiled child in need of protection.
The construction and care of the “virtual child” is critical to understanding how child protection has moved beyond the criminological domains of pedophilia and trafficking and into broader political concerns fetishizing individual privacy whilst supporting big data accumulation.
Mitali Thakor is an Assistant Professor of Science in Society at Wesleyan University, with affiliations in Anthropology and Feminist, Gender, and Sexuality Studies. She is currently working on a book project, provisionally titled Encoding the Child (MIT Press), on new technologies deployed in the global policing of child pornography. Her research and teaching interests include critical race studies of surveillance, the anthropology of digital technologies, and feminist studies of AI, robotics, and sexuality.