We’ve all been there. Scrolling through our Twitter feeds, casually skimming the snippets of text, we stumble across one tweet that seems a bit off. It takes a second to register: it’s a bot. This isn’t the product of a human mind: it’s algorithmically-authored content.
While no one’s written a history of Twitterbots (yet!), these little minions have gradually been asserting themselves in the mainstream. Bots like InspiroBot, for example, have wriggled their way into the hearts of Twitter users around the world. InspiroBot combines generated quotes with soothing photos to create inspirational posters with humorous – and often quite dark – twists for its 13,000-some followers. ‘Stay extremely screwed’, one poster advises. ‘Keep burning things’, another encourages. One InspiroBot poster bluntly declares what we’ve all been thinking lately: ‘Face it, participating in society is like medieval torture.’
InspiroBot’s content has generally been well-received. ‘Don’t try to find yourself in the vacuous regurgitations of an algorithm. Just let the unintelligible wash over you. The future is meaningless, baby’, one article reviewing the bot suggests. Another article argues that ‘this AI is so bad at its job that it turns out to be uplifting in the most inadvertent way possible.’
Of course, readers don’t always see the positive side of AI gone awry. Recently the Web anxiously quivered over Facebook’s decision to shut down a chatbot conversation system wherein the chatbots developed their own coded language for communication (specifically negotiation) with one another. ‘Facebook manages to shut down chatbot just before it could become evil’, read the title of one article covering the story. ‘The incident has echoes of the storyline to movie The Terminator, in which robots become self-aware and wage war against humans’, reported another article. The Twitter hashtag #shutdowntheAI promptly emerged, and AI researchers from around the world broadcast their experiences developing faulty systems. Janelle Shane tweeted about her joke-generating neural network that became obsessed with the cow with no lips. After his lyric-generating neural network channeled its inner Swiftie by generating nothing but ‘I shake it off, I shake it off’, Josh Miller decided to call it a day. And then there’s Brian Kelly’s meta tweet: ‘My twitter bot started tweeting about #shutdowntheAI so I #shutdowntheAI.’
To be sure, Facebook’s chatbot system and Twitterbots like InspiroBot need not be evaluated using the same criteria. They’re different systems, made for different purposes. Although the term ‘bot’ remains largely undefined, it helps to delineate the various kinds of bots, many of which appear to defy categorisation due to a general unawareness of their potential and limits. At its most basic, a bot could simply be considered a particular algorithm, or series of algorithms, that processes mass data, or that performs routine, monotonous tasks that can be satisfied with impersonal (or pseudo-personal) responses. Some of the more common types of bots include:
- Chatbots (also called conversational bots or chatterbots)
Conversational bots interact with users using natural language through phonetic or written means. They may serve to signpost users to appropriate information across the Web, to entertain users through a ‘social’ interaction, or to facilitate technological or financial transactions. While users are generally humans, conversational bots may be put into conversation with one another.
Examples: SmarterChild (AOL/MSN Messenger); Mitsuku (2013 Loebner Prize winner); TacoBot (Taco Bell); Siri (Apple)/Cortana (Microsoft)/Google Now (Google)
- Art bots
Art bots pull from a predetermined set of texts, code, or images to produce output that combines these sources in unique ways. Art bots may also draw from collections to display textual and visual content within that collection at random. These bots are typically used for entertainment or aesthetic purposes.
- Web crawlers (also called web spiders)
Web crawlers systematically browse the Internet, generally to index pages automatically and thereby improve search engine functionality. They can also be used for other non-malicious purposes, such as validating hyperlinks and code, or facilitating longitudinal documentation of the Web.
Examples: Googlebot (Google); Bingbot (Microsoft); the Wayback Machine (Archive.org)
- Spambots
Spambots are programmed to create email accounts, harvest email addresses, and send spam email messages.
Examples: Unsolicited advance-fee email requests (ever receive an email from a Nigerian prince?); unsolicited emails peddling Viagra and other taboo medications
Bot categories aren’t mutually exclusive, and some bots fit comfortably into multiple categories. A bot can be interactive (like a chatbot), or it can simply execute assigned tasks (like a web crawler). While most bots run on Web-based platforms, and/or use the Web as grounds for full functionality (for example, by linking users to search engine results), some bots function – even if only in a limited way – on offline platforms. When offline, Apple’s Siri can navigate through a user’s Apple device; following voice commands, Siri can call a specified contact or open a specified application. When connected to the Web, however, Siri’s full capabilities are accessible, and Siri can scour the Web to provide its most comprehensive service possible.
Of all of the types of bots, chatbots have garnered the most scholarly and public attention lately. The literature is rife with documented attempts – both successful and unsuccessful – to implement chatbots for referential and educational purposes, citing a bot’s personality, language and grammar usage, and even racial identity as issues for consideration. The Loebner Prize is a popular contest that has been run annually since 1991; the developer who creates the most human-like computer program in that year’s contest is awarded a bronze medal and a cash prize. The contest uses restricted Turing tests, wherein human judges must determine whether responses to a set of questions are being generated by a human or a computer. Popular audiences also have access to Chatbots Magazine, an online community focused on chatbot development and usage, particularly in business contexts.
Bots can also contribute to conversations about timely events. The Political Bots project, for example, is exploring how bots influence public opinion over major social networking applications, especially on Twitter. Need fake news, quick? Employ a bot, Political Bots argues.
The Political Bots project has also shown that automatically-generated content likely influenced the opinions of the voting public in such high-stakes elections as the latest US presidential election and Brexit. Bots have political power.
This political power can also be exercised through more humorous means. Meet the honeybot – a bot that posts deliberately controversial statements to bait Internet trolls into arguing with it. Nora Reed’s now-defunct @christianmom18 – which produced bigoted content parodying radical Christianity – and Sarah Nyberg’s @Arguetron – which produces highly liberal statements about controversial topics – are just two honeybot examples. Presumably, trolls are initially unaware that the posts to which they are replying were computer-generated; therefore, when ‘carol’ (@christianmom18) posts that ‘brightly painted bedrooms made my daughter a lesbian’, unknowing Twitter users such as David Stewart (@djsfw) respond as though this were a genuine belief broadcast by a radical individual. When the honeybot promptly replies to a Twitter mention, as many honeybots are programmed to do, it becomes increasingly obvious that a human is not directly responsible for the content in question. Yet some Twitter users nevertheless appear unfazed, and attempt to combat the bots (image: conversation occurred 30 October; screen captured 1 November 2016).
Twitterbots regularly inject humour into their followers’ timelines. Chris and Ali Rodley’s Magic Realism Bot, for one, generates short summaries of happenings within a magic-infused world that harkens back to those teenage years spent in the basement playing Dungeons and Dragons. ‘A YouTube star is riding a bicycle inside an infinitely large maze. The maze is located in New Zealand’, reads one tweet. ‘Just before sunrise all the clocks stop, and a hymen appears in the sky’, reads another. Each tweet is a story in itself, challenging readers to imagine the world it describes. Magic Realism Bot produces some funny one-liners, sure, but the content also encourages deeper thought about the world’s potentialities through wonky combinations. Dungeons and Dragons taught us to negotiate what we know about the world and what we imagine the world could be: Magic Realism Bot does something similar, albeit without a Dungeon Master to respond.
Twitterbots are all created by humans, but run autonomously – without human intervention – save for any necessary maintenance. Developers create with intention, and include preferences and constraints within their code that ensure task completion and streamline output.
At this time, no program is capable of producing truly random output: a bot’s ‘random’ choices are pseudorandom, stemming directly from deterministic mathematical methods that map its inputs to its outputs.
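This determinism is easy to demonstrate. The sketch below is purely illustrative (it isn’t any particular bot’s code): a pseudorandom generator given the same seed will always make the same ‘random’ choices.

```python
import random

# Illustrative only: "random" output is fully determined by the seed.
def pick_words(seed, wordbank, n=3):
    rng = random.Random(seed)  # deterministic pseudorandom generator
    return [rng.choice(wordbank) for _ in range(n)]

bank = ["moon", "maze", "clock", "bicycle", "hymn"]

first = pick_words(42, bank)
second = pick_words(42, bank)
assert first == second  # same seed, identical "random" choices
```

In practice, bot developers seed their generators from something that varies (the clock, say), which is why the output feels unpredictable even though the underlying process is entirely mechanical.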
Developers use a range of methods to develop their Twitterbots. Magic Realism Bot, plus other bots like Harry Giles’s autoflâneur (‘a guide to getting lost’), functions like a digital game of Mad Libs, pulling from word banks to fill in template blanks. The simplest bots pull directly from other online sources. John Emerson’s MoMA Bot, for example, produces tweets highlighting pieces from the Museum of Modern Art’s collections. Parker Higgins’s Old Fruit Pictures tweets random images from the Pomological Watercolor Collection at the USDA National Agricultural Library. And then some bots combine these methods: Liam Cooke’s poem.exe, for one, generates haiku-like poems by juxtaposing randomly-selected verses from a corpus of English translations of haiku by Kobayashi Issa. ‘give me a homeland/out of the water/catching the moon.’ Pretty poetic, eh?
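The Mad Libs method can be sketched in a few lines. The templates and word banks below are invented for illustration – they are not taken from Magic Realism Bot or autoflâneur – but the mechanism is the same: pick a template, then fill each blank from the matching word bank.

```python
import random

# Hypothetical templates and word banks, invented for illustration.
TEMPLATES = [
    "A {person} is riding a {vehicle} inside an infinitely large {place}.",
    "Just before sunrise all the {objects} stop, and a {thing} appears in the sky.",
]
WORDBANKS = {
    "person": ["YouTube star", "librarian", "cartographer"],
    "vehicle": ["bicycle", "hot-air balloon"],
    "place": ["maze", "cathedral"],
    "objects": ["clocks", "fountains"],
    "thing": ["hymn", "staircase"],
}

def generate_tweet(rng=random):
    template = rng.choice(TEMPLATES)
    # Fill every {blank} with a random entry from the matching word bank;
    # str.format simply ignores any banks the chosen template doesn't use.
    return template.format(**{k: rng.choice(v) for k, v in WORDBANKS.items()})

print(generate_tweet())
```

A real bot would wrap `generate_tweet()` in a scheduled job that posts the result via Twitter’s API, but the creative core really is this small.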
Of course, template-based bots needn’t be limited to linguistic realizations. Some bots generate images using characters and emojis. Eli Brody’s Tiny Astronaut soars through starscapes generated by Katie Rose Pipkin’s Tiny Star Fields. Hugo VK’s Tiny Bus Stop shows emoji people waiting at a bus stop. Poor people – the bus never seems to arrive.
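A character-based ‘image’ of this kind is just text laid out on a grid. The sketch below is an assumption about how a bot like Tiny Star Fields might work – the character set, dimensions, and density are all invented – but it shows the general approach: scatter star glyphs across lines of whitespace.

```python
import random

# Assumed star glyphs and grid size; not Tiny Star Fields' actual code.
STARS = "✦✧・.*"

def star_field(width=20, height=5, density=0.08, rng=random):
    lines = []
    for _ in range(height):
        # Each cell is a star with probability `density`, else a space.
        line = "".join(
            rng.choice(STARS) if rng.random() < density else " "
            for _ in range(width)
        )
        lines.append(line)
    return "\n".join(lines)

print(star_field())
```

Tweeted on its own, a block like this reads as a tiny picture; composed with another bot’s output (as Tiny Astronaut does with Tiny Star Fields), it becomes a scene.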
There are easy ways for people to make their own Twitterbots, even with little to no prior programming experience. Many of the bots mentioned here have been created using George Buckenham’s Cheap Bots, Done Quick!, which offers an intuitive platform that doesn’t require fluency in Python. The more complex bots, however, are not so easy to create. As Internet artist Darius Kazemi declares, ‘the reason I am able to make Twitter bots is because I have been programming computers in a shitty, haphazard way for 15 years, followed by maybe 5 years of less-shitty programming.’
Moreover, bots are always created with intention, with a desire to either explore or push the limits of what we consider creative thought. Ranjit Bhatnagar explains that his Twitterbot Pentametron, which pairs public tweets written in iambic pentameter to create rhyming couplets, was born from his interest ‘in surrealist techniques, like exquisite corpse, and in the Oulipo — using games and relatively simple mechanical techniques — algorithms, really — for making interesting and weird art and poetry.’ For Bhatnagar, Pentametron represents a manifestation of creativity through mechanical processes: a détournement of language to create poetry through unexpected – and, for those whose tweets actually get used, unintended – juxtaposition.
From 2007 to 2014, Allison Parrish’s everyword tweeted 109,157 words from the English language, in alphabetical order. Parrish describes this project as an exercise in imaginative potentialities, much as Magic Realism Bot was described above. Parrish notes that it’s ‘a cultural practice of ours to consider individual words in the abstract’. everyword, however, juxtaposes single English words with the rest of whatever appears on a follower’s timeline: ‘nuns’ may appear alongside a tweet about local politics; ‘éclairs’ might appear next to a tweet about Katy Perry. ‘When you see these words juxtaposed like this’, Parrish argues, ‘you can’t help but try to find some connection between them.’ And what about everyword’s successor, fuck every word? It’s still going, hatin’ on all the things.
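Mechanically, everyword is about as simple as a bot gets: walk a wordlist in alphabetical order, posting one entry at a time. The sketch below is a minimal assumption of that mechanism – the wordlist and the `post` callback are placeholders, not Parrish’s actual code – and note that a naive sort by Unicode code point puts accented words like ‘éclairs’ after the ASCII alphabet.

```python
# Placeholder sketch of everyword's mechanism; not Parrish's code.
def everyword(wordlist, post):
    for word in sorted(wordlist):  # alphabetical (code-point) order
        post(word)                 # the real bot tweeted roughly twice an hour

posted = []
everyword(["nuns", "éclairs", "algorithm"], posted.append)
# posted is now ["algorithm", "nuns", "éclairs"]
```

The real bot spread those posts out over seven years; the juxtapositions Parrish describes come not from the code but from whatever else happens to share the follower’s timeline.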
Twitterbots capitalise on their medium’s form. Many bots pull from or respond to public tweets, repurposing them for humour or aesthetics. Other bots depend on the personalised timeline form: a steady stream of emoji people waiting at a bus stop may not sound too exciting, but the occasional appearance of the emoji bus stop on one’s own timeline, amongst ramblings about an upcoming solar eclipse or general world devastation, may offer much-needed intellectual relief. Bots may, in a sense, be Twitter’s cat videos.
The Twitterbot knows nothing of current events. It knows nothing of its place within the world. Indeed, it knows nothing of what it creates. It simply does what it has been programmed to do. Mindlessly generating tweets every hour or so, the abovementioned Twitterbots pepper feeds otherwise dominated by disparaging news with little bits of art. And there’s beauty in what these Twitterbots create – beauty that is discerned by each individual who may stumble across these accounts. Even more, there’s beauty in these Twitterbots’ naivety. These bots are not Skynet: they are children posting their drawings on the fridge.
Leah Henrickson is a doctoral student at Loughborough University, researching the social and literary implications of natural language generation and computer-generated texts. Follow her on Twitter