Recently, I had the pleasure of reading Joy Lisi Rankin’s 2018 book, A People’s History of Computing in the United States. As someone who thinks a lot and writes a bit about technology, this book resonated with me, and I doubt that I am alone on that in the library world. Rankin does an excellent job of presenting the history of computing’s development from the perspective of the marginalized and cuts through the dominant Silicon Valley narrative of how personal computing came to be. There is a lot in this book that would appeal to most current and future library professionals.
Computers have become ubiquitous in almost every profession and in our personal lives. There are reasons to be both hopeful and cynical about this. On one hand, it opens avenues for collaboration, communication, and scholarly investigation previously unavailable to us. On the other, as Rankin makes quite clear, digital technology has had a long and storied history of being co-opted by profit-driven enterprises.
Rankin shows us how the Silicon Valley narrative of technology's development is not much more than myth. The book from which Rankin takes her title, Howard Zinn's A People's History of the United States, shows how the mythology of the founding fathers emerged and how, in general, the "Great Man" theory of history shrouds the actual lived experience of the birth and growth of the United States; Rankin works in the same spirit. As the Silicon Valley narrative goes, computing was restricted to huge mainframes, forcing users to pay for access or to leverage institutional affiliation for tightly regulated terminal use. That was the case until a couple of quirky geniuses, tinkering in their garages, invented the personal computer and freed people from their slavish dependence on mainframe and timeshare computing. The Silicon Valley myth is compelling and, thanks to the general acceptance of the mythos of individual genius in the US, widely believed. It has the effect of erasing the many contributions to computing that don't fit neatly into a capitalist model of invention and innovation.
Much of the infrastructure of modern computing, the internet, microchips, programming languages, operating systems, and so on, was developed prior to the mass marketing of personal computers. These technologies emerged and matured in universities, high schools, libraries, and other collective spaces. Software was freely shared among social groups and through computer networks as early as the 1960s. The founding of Microsoft in 1975 and of Apple in 1976 was not as disruptive to the computing industry as we often imagine. In fact, I was shocked to learn that these companies were founded so early. It was a long, slow grind to convince people to give up social and shared computing and to turn the computer into a consumer item rather than a public utility.
Most telling, I think, is the open letter Bill Gates wrote in 1976 aimed at computer "hobbyists." The letter accused amateur computer enthusiasts (read: members of computing communities whose collective work benefited everyone) of removing any incentive for people to continue producing programs and applications by "stealing" software. The thing is, no one in the computing world at the time saw software sharing as stealing. Software was freely and openly shared; that was simply the norm. Kemeny and Kurtz, the creators of BASIC, one of the first accessible programming languages, fully intended their language to be used in this way. They made everything about the language freely available, even the compiler, to encourage widespread use. It is more than a little hypocritical of Gates to decry the creators, users, and builders of free software as "parasites" when he went on to profit from proprietary BASIC dialects. The pushback against this attitude was intense, and it was clearly unpopular in the computing community, so much so that Apple, seeing a marketing opportunity, declared that its software would be provided free or very inexpensively (its real tactic was to aggressively market its hardware, of course, but that's a story for another day).
Gates' argument that software authors, just like authors of novels and writers of songs, should be due royalties for their work found little resonance. Developing software was a group effort, continuously built on the work of others. Collecting royalties on software made about as much sense as asking someone to pay you back for a birthday present. Gates' letter marks a distinct shift away from the "computing citizen" and toward the "computing consumer," as Rankin describes it.
I want to connect this with the digital humanities, as that's kind of been my jam lately. The digital humanities (DH) is a loose collection of practices and methods defined by the use of digital technology in the analysis of cultural objects. From the perspective of some traditional humanists, it is an invasion of an academic discipline by profit-motivated technocrats. This is not totally unfounded; indeed, the humanities have been burned by Massive Open Online Courses (MOOCs) and for-profit educational platforms like Coursera and Udacity. These services capitalize on the most popular, and thus most profitable, classes taught by humanities departments and leave universities the task of teaching the less popular courses. This could very well be one cause of student attrition in the humanities.
At any rate, these digital learning tools have done irreparable damage to humanities departments at universities across America, and it's hard to blame humanists for being skeptical of further introductions of technology into their discipline. But that skepticism may miss the point of what DH can offer the humanities.
DH, I think, is not here to turn humanities programs into ersatz STEM degrees. While skepticism toward tech in the humanities may be well founded, tech's use in these departments should be seen as a necessary step in the development of the discipline. After all, if the purpose of the humanities is the critical analysis of the production, dissemination, and consumption of culture, then the discipline must be willing to engage with the digital nature of our contemporary world.
DH offers the humanities an escape from the hegemony of single-authored papers and monographs. It presents opportunities for collaborative, cross-disciplinary work. It is no panacea for everything afflicting the humanities, but it offers ways to bring the discipline into the 21st century, and it provides an avenue for young scholars to investigate the digital cultural milieu in which they find themselves. Perhaps, in some ways, DH can help us find our way back to the pre-marketized, collaborative version of computing that existed before the corporate domination of software and hardware. Maybe DH can be a path toward a computing for the people.