Twitter Bot as Analytical Prototype (Final Project)

Posted in bot, twitter.

Results

The process of making a functioning Twitter bot followed a path similar to learning to code using p5.js: watching instruction, mimicking code, and slowly building a knowledge base from which I could experiment. I began watching Daniel Shiffman’s video series on making a Twitter bot, viewing and working to comprehend much of the information but also pausing at times to follow specific coding or setup instructions. Compared to previous projects, I applied the information I was presented with more immediately, though the balance of my time on initial learning was weighted toward comprehension. In other words, application beyond comprehension was present throughout the project, but coding and setup of the Twitter bot began slowly, in small stages.

First, I worked to comprehend the purpose and function of Node.js as a runtime environment for server-side JavaScript. The process of building and working with Node was both eased and complicated by learning about NPM and package managers. At first, I encountered the concept of a package manager as an extra layer of complication in the process of coding in general and building a Twitter bot in particular. While it was an extra layer that introduced a bit of complexity, along with the requisite learning and navigation of a new interface and repository, once I became comfortable with NPM, I saw clearly how it fit into, and made easier, the workflow and production of a bot.

Next, I had to become more comfortable working with terminal commands. Again, while the content and instruction of the semester prepared me a bit for this part of the project, it was the getting-in-there and trying-it-out that made the space more accessible to me conceptually. I would perhaps have benefitted from seeking out additional video instruction specific to terminal commands and usage; given time constraints, however, the ability to at least crudely operate terminal commands sufficed for this execution of the project.
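The NPM and terminal work described here amounted to a handful of commands, sketched below (the project directory name is illustrative; `twit` is the Twitter client package used in Shiffman’s tutorials):

```shell
mkdir sadprinceadam && cd sadprinceadam   # a working directory for the bot
npm init -y                               # create package.json
npm install twit --save                   # install the Twitter API client
node bot.js                               # run the bot locally from the terminal
```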

Next, an additional layer of complexity followed with the process of deploying the bot to Heroku. I had anticipated this step of the process because, having made a simple bot that tweeted random numbers, I realized it stopped tweeting when I closed my laptop. I realized, in other words, that I needed to get the software off of my local machine and into the cloud. Again, with the help of video instruction, I was able to successfully mimic the process of deploying the bot code to a cloud service while not entirely comprehending the underlying systems that I was using.
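The deployment the tutorials walk through looks roughly like the commands below (the app name is illustrative, and the default git branch at the time was `master`). A one-line Procfile tells Heroku to run the bot as a worker process:

```shell
echo "worker: node bot.js" > Procfile         # tell Heroku how to run the bot
git init && git add . && git commit -m "bot"  # commit the code locally
heroku login
heroku create sadprinceadam                   # provision the app (name illustrative)
git push heroku master                        # deploy the committed code
heroku ps:scale worker=1                      # keep the bot running off my laptop
```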

Finally, I began to think about how the code and technical systems I had been learning would manifest through the mediated interface of Twitter. That process began with an image that recalled my youth and prompted me to think about the ways in which we remember childhood. I will discuss this in more detail below in the rationale and variation sections.

Rationale

First, I will briefly describe the Twitter bot. An image of the He-Man character Prince Adam is used as the avatar for the Twitter account I created. The handle for this character, who sullenly looks down and toward the bank of Tweets posted on his Twitter feed, is @SadPrinceAdam. The image, along with variations of programmed Twitter activity, is oriented toward an exploration of what it means to remember one’s own childhood and life experiences. The notions of nostalgia and remembering drive the project. These concerns stem from a personal source, namely my own childhood playing with He-Man and Masters of the Universe toys. The bot seeks to wonder at different dimensions of nostalgia and remembering that are not limited to the kinds of mildly happy memories typically afforded by social media platforms. My hope is that the prototype, as a piece of software programmed to interact in a social media webspace, begins to explore, uncover, or reclaim a depth or breadth of emotional territory that often seems lacking in public exhibitions of personal memory, particularly in the networked spaces of social media. The orientation of this project might be considered as a variation of the type of work Scott Richmond seeks to explore in the boundaries and topographies of “networked boredom.” Richmond begins to trace the affective resonances, or lack thereof, in relation to or as indexed by social media applications such as Grindr, which, Richmond notes, is but “one small part of a much, much larger field of networked boredom” (24). In a similar vein, this prototype is oriented toward an initial, small exploration of much larger affective domains that are assembled by notions of nostalgia and remembering.

This project began with simple parameters on the Twitter bot: the bot was programmed to Tweet a random number at the end of a set string of words, “In thinking of my childhood at age …”, with the age being a random number between zero and seventeen. I expanded the number set from zero to thirty-nine when contemplating “life.” I had the string with the random number set to tweet at regular intervals, initially every four hours. With these parameters, I was hoping to explore the affective dimensions of what it means to remember in general and to be nostalgic for childhood experiences in particular. The programmed regularity of the tweets, alongside the sullen moniker and cartoon image of the avatar, produced a kind of gentle yet insistent dwelling on the less happy experiences of childhood. This programmed approach to exploring the affective dimensions of nostalgia seeks to parallel the kinds of protocological analyses performed by Alexander Galloway. Galloway’s protocological approach is defined through the terms of algorithms, sets of rules that are followed computationally. Galloway sees protocological analysis as focusing “not on the science of meaning (representation/interpretation/reading), but rather on the sciences of possibility (physics or logic)” (52). My Twitter bot code, alongside the infrastructure it utilizes to operate autonomously, is rooted in a set of instructional, protocological commands, written in JavaScript and executed through Node.js. Of course, the bot operates in a network of much larger technical infrastructures that invoke algorithmic operations at various scales as well, among them the Twitter platform, the cloud application resources of Heroku, and the global network and protocols of internet communication.
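A minimal sketch of this initial version, assuming the `twit` npm package used in Shiffman’s tutorials (with API keys kept in a local `config.js`); the helper name and exact wiring here are my own illustration of the parameters described above:

```javascript
// Pure helper: the stock phrase with a random age appended at the end.
// The range is inclusive of zero and of maxAge (0..17, later 0..39).
function makeTweet(maxAge) {
  const age = Math.floor(Math.random() * (maxAge + 1));
  return `In thinking of my childhood at age ${age}`;
}

// Posting side, shown in comments so the sketch stays self-contained:
//   const Twit = require('twit');
//   const T = new Twit(require('./config.js'));
//   const tweetIt = () =>
//     T.post('statuses/update', { status: makeTweet(17) }, (err) => {
//       if (err) console.error(err);
//     });
//   tweetIt();                                // tweet once at startup
//   setInterval(tweetIt, 4 * 60 * 60 * 1000); // then every four hours
```

The `setInterval` call is what gives the bot its programmed regularity, and changing one number is what later moved the interval from hours to minutes.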

On making and tweaking this prototype, I have learned a number of things. First, to name technical processes, I have learned how Node.js functions as a runtime for server-side JavaScript. I have learned about how coding and execution of code happens locally in various environments, from the files and applications rooted in my Mac OS X to the commands I enter in Terminal through the UNIX-based operating system. I have been able to learn how to functionally operate these systems and processes without having a working, extensible knowledge of them; in other words, my knowledge of the systems and processes is limited and rather static for the time being. The process of learning to deploy a Twitter bot has, once again, reiterated to me what Bernard Stiegler calls the “deep opacity” of technological systems (21). Despite having a functional understanding of how various technologies and media interfaces operate, I find myself unable to grasp their machinations at conceptual levels, let alone able to efficiently troubleshoot them when they begin to behave abnormally. For instance, I found myself confused and at a loss for a solution when my Twitter bot seemed to be alternately tweeting two different messages. I thought I had changed the code in one place and expected the bot to register the change. It did; however, I soon discovered that a local version of my code existed alongside a cloud-deployed version of my code. In moments like this, I realize anew that my grasp of the systems I am programming and interacting with, even after hours of practice and learning how to use them, is profoundly limited.

Finally, I learned a bit about how the act of programming may begin with intention or desired effect, only to morph into something else once the code is deployed. For instance, in one of the variations I tried for this bot (described more below), I instructed the bot to retweet the most recent instances of #childhood. I had intended for the bot, with its glum demeanor, to be juxtaposed against cheery recollections of Twitter users’ youth. While this did happen, the bot also captured other affective instantiations of #childhood. My feed, framed by a sad cartoon character, retweeted mentions of childhood cancer, war, and child poverty. The feed became a confluence of emotional responses to childhood that dramatically subverted my initial intentions in the code that was being algorithmically deployed. In other experiences with unintended coding consequences, programs, for the most part, simply stopped functioning. To put it another way, my only way of understanding bugs in code had been through the code’s lack of functionality, and not through different, unintended functionality. With this prototype, I discovered a new way for deployed code to slip away from the initial intention of the coder.

Variation

In this section, I will describe and explore actual or potential variations to the code and deployment of the Twitter bot.

Variation 1: Retweeting

The first variation of this prototype has been and is currently being deployed by the bot. I changed the code of the bot to retweet a hashtag at periodic intervals, rather than tweet a stock phrase with a random number. As discussed above, the effect of this change was, in part, related to affect. In other words, the result of the change led to a blurring of affect in terms of the initial intention of the bot. While the code calling forth a #childhood post every fifteen minutes was intended to resist overly optimistic nostalgia related to childhood, the feed of the Twitter bot soon ebbed toward despair, with the cartoon avatar contemplating childhood cancer and poverty alongside rosy moments of nostalgia. The bot’s avatar, in other words, oscillated between the mildly sarcastic and the profoundly tragic.
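A sketch of this variation, again assuming the `twit` package; the parameters follow the description above (the most recent #childhood post, every fifteen minutes), while the helper name is my own:

```javascript
// Pure helper: parameters for fetching the single most recent match
// for a given hashtag from Twitter's search endpoint.
function searchParams(hashtag) {
  return { q: hashtag, count: 1, result_type: 'recent' };
}

// Runtime wiring, shown in comments (T is the configured twit client):
//   function retweetLatest() {
//     T.get('search/tweets', searchParams('#childhood'), (err, data) => {
//       if (err || !data.statuses.length) return;
//       T.post('statuses/retweet/:id', { id: data.statuses[0].id_str });
//     });
//   }
//   retweetLatest();
//   setInterval(retweetLatest, 15 * 60 * 1000); // every fifteen minutes
```

Notably, nothing in these parameters filters by sentiment, which is exactly why the feed drifted from sarcasm to tragedy: the code fetches whatever is most recent.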

In another vein, the variation of automatically retweeting recent hashtags produced what James Hodge might call a kind of “phatic call” (5). Hodge explains that the devices we use, and the coded programs that run them, produce a deep, affective connection at the periphery of our attention that always beckons. He writes that “We’re beholden to our devices, and our networked indebtedness demonstrates the thin yet persistently libidinal attachment to devices because they call out” (5). During this variation of retweeting, and especially after coding for #beyonce during crits, I began to notice a little number appearing at the top of my bot’s feed page, a number next to the icon of a bell signaling new notifications. The appeal of these notifications grew (and continues to grow) as I see more and more people, and bots, following He-Meh, or assigning him to a list. The bot is likely having a similar effect on other users. The notification for a retweet is flattering, a kind of address that may feel personal. I felt this in the first follower I discovered for the bot, even though that follower was likely another bot. This is strange, as the bot that followed my bot is devoid of personal connection; it’s a bit of code executing commands and procedures in protocologically defined ways. As Hodge explains, “Nonhuman forms of address may hail us ‘personally’—Jim, we recommend X for you—but they do so algorithmically” (5). Yet the call of notifications can’t be explained solely through recourse to the impersonal machinations of code on a network. That code has a source, and that source is human, even should it be necessary to trace the code back to the compiled language it’s structured by. The siren call of the notification, in other words, is hybrid, or, echoing Donna Haraway, a cyborg. My bot, and the bots that follow it, are, as Haraway describes, “a hybrid of machine and organism, a creature of social reality as well as a creature of fiction” (291). The bots on Twitter retain the trace of organism through the human-authored code that instructs their operation. Yet the bots act autonomously, too, according to the machine-readable parameters that dictate their operation.

Variation 2: Replies

In a second, potential variation, the bot could be coded to reply to other users on Twitter. In this variation, a bit of code would be written to instruct the bot to send a reply to any user who follows @SadPrinceAdam. The code might include a stock reply in recognition and gratitude of the follow. Or, more interestingly, the code could offer a unique reply to users. One way to accomplish a unique reply would be to obtain a Wordnik API key, which would give the code access to a collection of resources, including words as defined by part of speech. I could, for instance, have He-Meh (a so-called Master of the Universe) reply to followers with a phrase that suits his sullen demeanor. He-Meh could suggest in reply that followers choose only one of two items (drawn from nouns in the Wordnik API) to keep company through the eternal heat death of the universe.
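Since this variation remains potential, the sketch below is hypothetical throughout: the exact phrasing, the helper name, and the idea of drawing two nouns from Wordnik are my assumptions, not deployed code:

```javascript
// Pure helper: He-Meh's reply, offering a new follower a grim either/or.
// nounA and nounB would come from a Wordnik random-words request.
function buildReply(screenName, nounA, nounB) {
  return (
    `@${screenName} Choose one to keep you company through the ` +
    `eternal heat death of the universe: ${nounA} or ${nounB}?`
  );
}

// Runtime wiring, in comments (twit user stream plus a Wordnik request):
//   const stream = T.stream('user');
//   stream.on('follow', (event) => {
//     // fetch two random nouns from the Wordnik API, then:
//     T.post('statuses/update', {
//       status: buildReply(event.source.screen_name, noun1, noun2),
//     });
//   });
```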

With this variation, the direct contact of the bot would qualitatively deepen the allure of an interaction with an impersonal network that Hodge explores. Drawing on Tung-Hui Hu, a bot coded in this way might also serve to explore how the cloud acts as “a topography or architecture of our own desire” (xvii). The bot, deployed and active in the cloud via Heroku, becomes a kind of sounding board in this variation, echoing back to the user who follows it a variation on the theme of their desire. The reply to the follow, in other words, is at first an immediate recognition of the follower’s desire; their desire to follow a bot is immediately recorded and made public through the bot’s reply; the cloud indexes the user’s desire and simultaneously broadcasts that desire. The desire of the user to follow the bot is then also engaged through the bot’s reply. The bot uses the desire of the follower to suggest an interaction; the desire of the user who follows the bot is not only indexed by the cloud, but operationalized as well in the imperative reply that is triggered by the code of the bot.

Variation 3: Bot-on-Bot Action

In a third variation of this prototype, the @SadPrinceAdam bot could be programmed to interact with a second bot, such as @MonsignorMerde. More sophisticated programming would be required to successfully deploy these bots in a form of bot-driven relationship that generates interaction (see this Atlantic article for an account of a time two bots interacted autonomously). The end of this type of variation would move toward an exploration of the affective resonances possible when witnessing the interactions between two coded bots. This variation, in some sense, shifts the human from active participant to passive viewer, though not completely, as I will explore in a moment. Despite the more passive role as viewer, however, affective resonances would not be vacated, especially if explicit reference to the Twitter bots’ bot-ness is made (in a Twitter bio description, for example). The personality established by avatars, coupled with the code that informs the bots, would provide a framework for interaction. But the potentialities of the interactions between the bots would actively shape the affective resonances that reverberate from their intermingling. In other words, the human viewer (and potential interlocutor) of such interactions may try to make meaning or simply experience some affective response based on the interactions between the bots. Yet the complexities of code and algorithmic responses of the bots could simultaneously resist meaning and produce syncopated or discordant affective resonances. I draw from Anthony Dunne and Fiona Raby’s work concerning speculative design to ground this potential variation. Dunne and Raby suggest that “intentional fictional objects” carve out an imaginative space for invention. The Twitter bots in a variation such as this seek to provide human viewers/interactors with a fictional arena wherein an imaginative universe, however small it may be, unfolds.
Users who witness the interactions between these two self-identifying bots, then, become more imaginatively active as they consider the interactions between the bots. Dunne and Raby see experiences like this as integral to speculative design projects: “this process of mental interaction is important for encouraging the viewer to actively engage with the design rather than passively consuming it” (90). The best design projects and the props they employ, Dunne and Raby argue, “are functional and skillfully designed; they facilitate imagining and help us entertain ideas about everyday life that might not be obvious” (90). In a variation like the one described here, the purpose would be to explore and surface affective resonances which might otherwise remain dormant or tacit in the audiences that stumble onto the conversations propelled by a pair of Twitter bots.
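One way the more sophisticated programming for this variation might begin is sketched hypothetically below; the handles and reply logic are illustrative, and each bot would run as its own process:

```javascript
// Pure helper: compose a reply addressed to the other bot, quoting a
// truncated slice of its tweet so the exchange stays under the length limit.
function composeReply(otherHandle, theirText) {
  return `@${otherHandle} And yet: ${theirText.slice(0, 100)}`;
}

// Runtime wiring, in comments: each bot listens for the other's tweets
// via a filtered stream and replies in turn.
//   const stream = T.stream('statuses/filter', { follow: OTHER_BOT_ID });
//   stream.on('tweet', (tweet) => {
//     T.post('statuses/update', {
//       status: composeReply(tweet.user.screen_name, tweet.text),
//       in_reply_to_status_id: tweet.id_str,
//     });
//   });
```

In practice a delay or turn-count guard would be needed, since two bots coded this way would otherwise reply to each other in an unbounded loop.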

Works Cited

Dunne, Anthony and Fiona Raby. Speculative Everything: Design, Fiction, and Social Dreaming. Cambridge: MIT Press, 2013. Print.

Galloway, Alexander R. Protocol: How Control Exists after Decentralization. Cambridge: MIT Press, 2004. Print.

Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century.” Springer Netherlands, 2006. Print.

Hodge, James J. “Sociable Media: Phatic Connection in Digital Art.” 2015. TS. Collection of James J. Hodge, Evanston, Illinois.

Hu, Tung-Hui. A Prehistory of the Cloud. Cambridge: MIT Press, 2015. Print.

Stiegler, Bernard. Technics and Time: The Fault of Epimetheus. Trans. Richard Beardsworth and George Collins. Stanford: Stanford University Press, 1998. Print.

Richmond, Scott C. “Networked Boredom: Grindr, Norm, and Protocol.” 2016. TS. Collection of Scott C. Richmond, Detroit, Michigan.