Publications

Algorithm

Forthcoming in Political Concepts: A Critical Lexicon
2024

Pre-Print PDF

The question “What is an algorithm?” quietly presupposes another, one rarely asked outright but nonetheless presumed in anticipation of a particular answer: “What is its essence, its logic, its real object?” This essay considers various historical accounts of the algorithm circulated among computer programmers in the 1960s and 1970s as they sought to locate its “mathematical essence” across time and place—from ancient algorithms to snippets of modern code—only to dislocate the concept from its social and historical specificity. It argues that many efforts to historicize the algorithm simultaneously involve thinking it ahistorically. As such, definitional work surrounding the “algorithm” can be neither descriptively neutral nor scientifically autonomous. In its timeless and ideal form, the algorithm is made to do exactly the work of politics: to sanitize its material connection to industrial and feminized labor, to supply an origin for modern logics of economization and optimization, and, above all, to lend discursive support to the legibility of a capitalist social formation. To think the algorithm as a political concept is to acknowledge that it has become a cyborg concept. It invokes, at once, a technical artifact, a drive towards ever-more efficient production, an aspiration for a world reducible to math, and a metaphor for the neoclassical economic subject. To answer the question “What is an algorithm?” indeed surfaces an essence—but rather than one that unifies origins and objects in the actually existing past, this is an essence that crystallizes capital’s practically-existing metaphysics in the historical present.


‘Good Tech’ and Technologies of Elite Capture

Under review at Catalyst: Feminism, Theory, and Technoscience
2025


This paper examines the utopian fantasies of technologies developed in the service of social good, or “good tech,” and situates their increasing purchase within the technology industry in the broader context of a global crisis of care. We explore how aspirations towards greater empathy, global connectivity, and diversity are captured by elite tech entrepreneurs in a “progressive neoliberal” strategy to raise capital in the name of disaffected and exhausted workers. Through an analysis of emergent AI-enabled accent modification technologies, which promise to relieve call center workers of accent-based discrimination by artificially modifying the sound of their voices, we locate the affective lures operating in their futuristic fantasies and marketing strategies. In a peculiar alliance where entrepreneurs, venture capital, and modes of labor-discipline conspire toward making globalization “feel good,” we trace the ideological conditions that allow the exploitation of offshore workers to be re-coded as the employment of diverse workers. Thus understood, good tech rhetorics are productive discourses that function both as a mechanism of value accumulation and as a counterinsurgency tactic—they constitute concrete “structures of feeling” that sustain attachments to the social reproduction of racial capitalism and the continuation of postindustrial, colonial dispossession.

(co-authored with Juana C. Becerra)


On Emplotment: Phantom Islands, Synthetic Data, and the Coloniality of Algorithmic Space

Abstract accepted to Social Text
2025

While digital platforms have been consistently analogized to historical capitalist and colonial apparatuses of dispossession—whether through the conceptualization of data as something stolen, extracted, mined, or otherwise expropriated (Myers West 2017, Couldry and Mejias 2024)—conceptualizing data within the idiom of property often presumes precisely that which is to be explained: that data must always already be owned before it can be alienated. This paper focuses on the coloniality of data, on what makes data appear in forms of property that can be subject to expropriation in the first place, and argues that these conditions prevail even in what has been characterized as the post-extractivist era of digital platforms. Towards this end, we extend Harney and Moten’s (2021) work on the “emplotment of time and space” to describe a key phase in data production: algorithmic spatialization. Where emplotment, for Harney and Moten, is a play on two meanings—the enclosure of land into plots and the structuring of historical events into a narratological plot—in this paper, we put into play a third (dis)ambiguation: emplotment as the spatialization of data on the vector plot. This historiography approaches techniques for plotting data across computational grids, vector planes, and embedding spaces (Siegert 2014, Gabrys 2016, Beller 2021, Dhaliwal 2022), as co-located with the rise of logistics (Hockenberry 2021), and as rooted fundamentally in colonial logics of developmentalist improvement (Bhandar 2018). Through emplotment, data is transformed into an abstract commodity form, obliterating its history and social relations such that it is made capable of being valued and possessed (Toscano 2008).

(co-authored with Juana C. Becerra and Ranjodh Singh Dhaliwal)


Voice/Noise and the Contested Terrain of the Human in the Age of AI

Abstract accepted to Big Data and Society
2025

Deployed for everything from noise cancellation and accent clarification to wildlife detection, borderscape surveillance, and protecting private property against trespassers, present-day machine listening systems rely on distinctions between voice and noise that have historically underpinned hierarchical constructions of the human. Indeed, the sensory acts of locating the human voice in a cacophony of noise, isolating and picking up voices from their surrounding environment, or discarding certain vocal sounds as noise or noisy have long been embroiled in attempts to define the bounds of personhood, autonomy, and reason.

Once this function of separating voice from noise is given to data-driven machines, these systems appear to exemplify modes of auto-essentialization that re-inscribe normative differences established during historical regimes of domination. This commentary forwards the provocation that such re-inscriptions are never clean, and never complete. Drawing on Sylvia Wynter and Jacques Rancière, as well as critical scholarship on the non/human, the commentary explores how datafied distinctions between voice and noise come to be contested, interrupted, and exceeded.

What counts as ‘the human voice’ here emerges as a boundary that must be actively encoded and persistently enforced. Practices such as bandwidth cutoffs, data labeling, and predictive algorithmic outputs try to pass normative sonic distinctions between voice and noise through economic and material limits that never fully cooperate with sensory orders. Excavating how such technical negotiations play out within the logics of capital, this commentary thus illustrates how the datafied mediation of sound not only reflects but actively produces a shifting, contested terrain of “the human” in the age of AI.

(co-authored with Juana C. Becerra)


Designing for Agonism: 12 Workers’ Perspectives on Contesting Technology Futures

ACM Conference on Computer-Supported Cooperative Work and Social Computing
Peer Reviewed Proceedings
2024

Open Access PDF

In this paper, we gather 12 workers from a large technology company, as recent participants in a research initiative on the social impact of emerging technologies, to present a collaborative analysis of the opportunities and limitations of dissensus-based approaches to technology research and design. We introduce a series of speculative and deconstructive probes and present findings from their use in four collaborative design sessions. We then draw on the theoretical tradition of Agonism to identify moments of friction, refusal, and disagreement over the course of these sessions. We contend that this approach offers a politically important alternative to consensus-based collaborative design methods and can even surface new rhetorics of contestation within discourses on technology futures. We conclude with a discussion of the importance of worker-authored research and an initial set of opportunities, challenges, and paradoxes as a resource for future efforts to "Design for Agonism."


Towards Labor Transparency in Situated Computational Systems Impact Research


ACM Conference on Fairness, Accountability, and Transparency
Peer Reviewed Proceedings
2023

Open Access PDF

Researchers seeking to examine and prevent technology-mediated harms have emphasized the importance of directly engaging with community stakeholders through participatory approaches to computational systems research. However, recent transformations in strategies of corporate capture within the tech industry pose significant challenges to established participatory practices. In this paper we extend existing critical participatory design scholarship to highlight the exploitative potential of labor relationships in community collaborations between researchers and participants. Drawing on a reflexive approach to our own experiences conducting agonistic participatory research on emerging technologies at a large technology company, we highlight the limitations of doing participatory work within such contexts by empirically illustrating how and when these relationships threaten to appropriate and alienate participant labor. We argue that a labor-conscious approach to computational systems impact research is critical for countering the commodification of inclusion and invite fellow researchers to more actively investigate such dynamics. To this end, we provide (1) a framework for documenting divisions of labor within participatory research, design, and data practices, and (2) a series of short provocations that help locate and inventory sites of extraction within participatory engagements.


   * Nominated for 2023 Most Impactful Research Paper by the RAI Institute


Politics without Privacy


Techné: Research in Philosophy and Technology
2022

Open Access PDF

Digital platforms stand to open new spaces for political assembly and enable social movements to materialize at unprecedented speed and scale. Yet this promise has largely gone unfulfilled, as networked movements have thus far failed to produce the sustainable modes of collective action that early- and mid-twentieth-century labor and civil rights movements delivered. Why can we not muster digital communities with the same power and contestational force?

Answers to this question arrive one after the other and, often before the ink can dry, new political ruptures emerge, demanding our ever-renewed analysis. Amidst this flurry, one particular answer demands pause, if only at first for its unexpectedness: collective action has indeed been disarmed in the digital agora, but it is our fixation on privacy that is to blame. This provocation, delivered quietly in the closing chapters of Firmin DeBrabander’s Life After Privacy, follows a broader meditation on the historical emergence of our modern entitlement to privacy. It is written against common liberal democratic narratives that tell us that privacy is an essential condition of political autonomy and self-determination—that it forms the basic foundation of our democracy. On this story, it is no wonder that, in our age of mounting digital surveillance, we lack the protected spaces necessary to nurture the independent spirit that drove democratic engagement and political organization before the emergence of digital media. DeBrabander’s position on the matter, however, flies in the face of our apparently deep historical relationship to privacy.


Dissertation Project Summary

My dissertation project, titled “Untimely Algorithms: Technology, Political Thought, and Futurity,” is a methodological contribution to the critical history of algorithms. It traces an unconventional genealogy of the algorithm from the technological imaginaries of classical political economic thought to the neoliberal thought project. Through this historiography, I propose to locate the algorithm avant la lettre, as a particular site of contestation over the boundary between the economic and the political, the role of the human in political life, and the production of revolutionary desire. To this end, this dissertation surfaces the “political unconscious” of our technological imagination in the 21st century and asks how this inheritance imposes limits on our capacity to imagine a different future.


Sample Chapter: 

“Algorithms & Revolutionary Desire: Utopia After the Socialist Calculation Debate”

This chapter begins from the premise that ‘the algorithm’ is a collective speculation toward utopia. More than a set of technoscientific practices, material infrastructures, or forces of production, the algorithm is also a collection of desires that has mediated political imagination since at least the socialist calculation debates. Tracing an unconventional genealogy of the concept, I depart from typical narrations of its history as a progression of scientific inventions from the 1950s to the present. I instead center a technological imaginary of the 1910s-1940s, one forged between socialists and neoliberals to answer the question of central planning. That the promise of AI rings so hollow today, I suggest, owes to our inheritance of the political unconscious of these debates. The concept of the algorithm was forged within a fundamentally elite discourse, one that imagined utopia as a rational economic order (whether realized by technocratic management or by spontaneous coordination) that would always disqualify popular and proletarian modes of economic planning, that is, a technological revolution without a political or social revolution.


Research Sites

   Google Scholar

   ACM


Contact

    fjing1 [at] jhu.edu

    felicia.jing [at] ibm.com


Felicia Jing
New York, NY



A political theorist by training, I approach the politics of algorithms and computing from radical democratic, STS, Marxist, and Marxian traditions of critique. My dissertation project, titled “Untimely Algorithms: Technology, Political Thought, and Futurity,” is a methodological contribution to the critical history of algorithms.

I am currently completing my PhD in Political Science at Johns Hopkins University; before that, I received my BA in Philosophy from Reed College. Since 2022, I have also worked in the tech industry as a full-time researcher at IBM. There, I interact daily with algorithmic systems, observe their design and development, and conduct AI audits using methods from the humanities.

My work in political theory has been published in Techné, is forthcoming in Political Concepts, and is in preparation for special issues of Social Text and Big Data & Society. My empirical research conducted at IBM has appeared in venues like FAccT and CSCW. From 2024 to 2026, my work will be supported by grants from the Notre Dame-IBM Tech Ethics Lab and the National Endowment for the Humanities.


Selected Publications



2025 “Algorithm” in Political Concepts
Article


2025 “Technologies of Elite Capture”
Article


2026 “On Emplotment, Synthetic Data, and Coloniality” in Social Text
Article

2026 “Voice/Noise and the Human in the Age of AI” in BD&S
Article

2024 “Designing for Agonism” in ACM CSCW
Proceeding

2023 “Towards Labor Transparency” in ACM FAccT
Proceeding

2022 “Politics without Privacy” in Techné
Book Review


Read more

       CV
       Dissertation Project Summary
       How to get in contact