In an economy where data is the new oil, where tech companies compete aggressively to extract value from your Internet behavior via AI, and where their products are largely designed to foster addiction (possibly even at the expense of your cognitive faculties), one begins to wonder about things like the role of government intervention, the nuances of technological determinism, and even human nature itself. 1
These speculations are partly rooted in the social construct of “man versus machine,” an archetypal theme of Western thought as deeply ingrained as technology itself.
Although the archetype surfaces throughout history, from the industrial revolution (from Marx to the Luddites) to the social dynamics of the bike boom, since the mid-20th century it has been shaped primarily by AI technologies. The theme is manifest most glaringly in movie franchises like Terminator and The Matrix; more recently, TV shows like Westworld and best-sellers such as Martin Ford’s Rise of the Robots further extend this line of thinking throughout our culture. (And of course, there’s no shortage of think pieces fluttering across the internet that popularize this latent socio-cultural narrative of “man versus machine.”)
It’s tempting to interpret these ideas as representations of society’s concerns about our AI-enabled future, wherein humans risk falling victim to systematic ‘outsmarting’ by artificially intelligent machines — but this is the stuff of science fiction and easy news angles.
Rather, it’s important that we recognize that this “man versus machine” concept is essentially a symbolic framework through which we can understand the trade-offs between technological progress and authentic human experience. Technology isn’t a binary variable, a boon or a backslide for the human condition; as with religion, we cannot know for certain whether technological progress (at the expense of the status quo) is ultimately a positive or negative thing. In this light, the Luddites (who, importantly, were concerned with their jobs and standard of living rather than the blind or naive destruction of machines) are comparable to the technologists of the Bay Area (who, notably, stand to materially profit from their products, even if their widespread adoption generates negative effects on society, such as filter bubbles and social isolation). 2
At any rate, all this is just to lay the groundwork for the point that “man versus machine” is a deeply ingrained archetype of our modern condition: a common narrative that we all understand at some base level in order to make sense of the world around us.
Transhumanists, techno-skeptics, anarcho-libertarians, and anyone else whose ideology takes some position on the merits of technological progress build their arguments around this vague concept.
And it’s this idea that forms the crux of my thesis: the cultural concept of “man versus machine.”
The now-ubiquitous emergence of the man versus machine concept can be at least partly explained by a communications theory called symbolic convergence theory. This theory seeks to explain how large groups of people communicate; it posits that we do so, at least in part, by enacting or entertaining common fantasies to convey meaning, emotions, and values rooted in shared experience (e.g., social, political, and cultural interactions).
For my master’s thesis, I used this theory to look at popular media coverage surrounding three major events:
- The May 11, 1997 defeat of world chess champion Garry Kasparov by IBM’s Deep Blue supercomputer.
- The February 16, 2011 defeat of Jeopardy! champions Ken Jennings and Brad Rutter by IBM’s Watson supercomputer.
- The March 15, 2016 defeat of Go world champion Lee Sedol by Google DeepMind’s AlphaGo program.
My rationale is that by analyzing how these events are communicated, we can deepen our understanding of how the man versus machine concept is evolving (and, accordingly, of cultural perceptions of AI). In turn, we’re afforded a schema through which to understand our intellectual and socio-psychological orientations within the concept, allowing for richer discussions of technology across a wide range of professional domains.
To be perfectly honest, I hadn’t heard of symbolic convergence theory before getting nudged in that direction by my graduate faculty, but it certainly proved an appropriate framework for this study. Using fantasy theme-based content analysis, I essentially coded a ton of articles for the fantasy themes they expressed.
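For a rough sense of what that coding step looks like in practice, here’s a minimal sketch in Python. The theme labels and records are entirely hypothetical, not the actual codebook or data from the thesis; the point is only to show how coded articles might be tallied by event:

```python
from collections import Counter

# Hypothetical records: one (event, fantasy_theme) pair per coded unit.
# Theme labels here are illustrative, not the thesis's actual codebook.
coded_units = [
    ("Deep Blue 1997", "machine as conqueror"),
    ("Deep Blue 1997", "human dignity under threat"),
    ("Watson 2011", "machine as conqueror"),
    ("Watson 2011", "machine as tool/partner"),
    ("AlphaGo 2016", "machine as tool/partner"),
    ("AlphaGo 2016", "machine surpassing its makers"),
]

# Cross-tabulate theme frequencies by event, to see how the
# "man versus machine" fantasy shifts across the three matches.
tally = Counter(coded_units)

for (event, theme), count in sorted(tally.items()):
    print(f"{event:15s} | {theme:30s} | {count}")
```

In the real analysis, of course, the tallying is trivial; the hard, interpretive work is the coding itself, deciding which fantasy theme an article is actually enacting.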
While it’s perhaps unremarkable that the themes existed at all, reading all those articles in such depth provided an invaluable perspective on the evolution of technology and society. What’s more, symbolic convergence theory, though a relatively under-theorized conceptual framework, proved useful in explicating such a muddy and nebulous concept. Seeing how SCT has been applied to more concrete fantasies, ranging from the Cold War to the Knights of Columbus, suggests there’s room for methodological creativity in exploring new ways of researching communications phenomena.
The impetus for this research was wide-ranging and complex. On a broad level, I believe the existential nature and new technological potency of artificial intelligence make it an important area of research; with regard to communications theory, I believe conceptual projects such as SCT will only grow in importance as we build logic laden with domain-centric, ontological, and morally relativistic biases into machines.
Moreover, I believe that in an age of limitless information and artificial intelligence, these relatively under-theorized communications and cybernetics theories can provide novel modes of communication, helping unpack ideas that escape plaintext discussions of things like AI, blockchain, and the Anthropocene.