Treating technology as if it has will or intention is nothing new. Maybe before Arthur arrived, many knights talked to the sword in the stone, asking in frustration, “Why are you so stubbornly set on remaining lodged there? Come out, I say! Yield to me!”
That’s a silly example, really. I should instead use Google Search, which speaks to me all the time, interrupting my search to suggest alternatives. Relentlessly correcting me, as if I didn’t know in the first place what I was searching for. Autocorrect feels like an invasion of my own thought process. Every time. If we analyzed Google Search as a person, what sort of person would it be? I mean the sort of person with whom you are in some kind of interpersonal relationship. How might the oh-so-helpful but interruptive, auto-completing Google Search be influencing my sense of self-identity?
Far beyond shaping what I’m searching for, how is this entity influencing my life? On the surface, it seems so banal: helpful suggestions. Of course, we should critically examine this ‘helpfulness’, since it can promote racism and sexism. And the suggestions might be hacked outright to embarrass public officials. At the level of interpersonal communication and relationships, how does Google, as a conversational partner, make the human counterpart feel about themselves? Put slightly differently, how are these conversational interactions influencing the identity formation of the human participant?
Exploring these interactions from the perspective of symbolic interactionism and interpersonal communication opens an opportunity to examine what counts as an utterance and how relations develop in conversational interactions between humans and their devices or software like Google. I wonder: how does it happen when it happens? If features or defaults of interfaces invite users to respond in particular ways, how are these invitations phrased, especially when they are not phrased directly? How do they appear, communicatively?
If we want to understand algorithmic identity from a symbolic interactionist perspective, these interactional instances are usefully reconstructed as utterances, responses, and responses to responses, which the field of symbolic interactionism (SI) has long held to be the fundamental processes of identity formation as well as of social structure.
After all, as I noted in a recent publication on digital identity, our actions become data that, when entering algorithmic systems, develop social lives of their own. These data function ‘on a global range of stages’, meaning that if, at first, we were ‘performing’ identity within a limited sphere of social roles that we might understand in context, this performance might later get passed through various networks, becoming a script or stage for other actors (actants, agents) to use in their own performances. The interactions continue long past the moment when my body produced the data, and the force of the performance might ripple out in quite unexpected ways (you can read more about this in the chapter I wrote: Markham, 2013, p. 287).
Data in networked culture transition back and forth: they shift from distinct data points to floating signifiers to fragments of a user’s identity. They function on my behalf; they are flattened and equalized into standardized metrics, yet they can be essential identity markers. I notice this again every time I get an advertisement designed specifically for me. Where is agency in this situation? As Adele Clarke (2003/2005) asks, what human and nonhuman elements are influencing this situation? We can take this one step further and ask: how does an ‘agent’ like Google Search become a Significant Other?
Anything we consider to be “self-identity” will be an outcome of a deeply intertwined relation of the human and the machinic. Datafication processes add more complexity to what counts as interaction in this process, since many elements exert powerful influence on sensemaking about one’s identity.
Whether or not there is intention or motive in the response of Google Search as I type in the URL search bar, there is agential force. This power to construct or reify particular frameworks for meaning emerges in the smallest moments of automated interfaces. It need not be alive in the traditional sense to have liveness, or be acting with independent will to have agency. Agency is the outcome of a continuous interactive process, as well as an assignment of the attribute of intention, will, or control. The specific conceptualization will therefore shift back and forth in different types of interactions, or over time.
All of this is to say that algorithmic identity is a critical consideration in the age of big data and datafication, at the level of the interpersonal.
________
Nov 14, 2014. Thinking Aloud.
References:
Markham, A. N. (2013). Dramaturgy of digital experience. In C. Edgley (Ed.), The drama of social life: A dramaturgical handbook (pp. 279–294). Ashgate Press.
Also see the related post on “figuring control,” a paper recently presented by Annette Markham and Claus Bossen at the 2014 AoIR conference. And of course, Cheney-Lippold’s influential piece on algorithmic identity: Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of control. Theory, Culture & Society, 28(6), 164–181. As well as Adele Clarke’s work on situational analysis: Clarke, A. E. (2005). Situational analysis: Grounded theory after the postmodern turn. Sage, where she talks in detail about mapping the human and nonhuman agents in situations.