The Battle over Source Code

Aram Sinnreich—

Professional prognosticators and commentators often treat the development of media technology as a one-way street, an inevitable series of magical innovations leading to an equally inevitable set of disruptions in business, culture, and society. In reality, of course, the process is far more complex. Technology developers and media programmers must constantly navigate a dense and contradictory web of expectations and obstacles presented by consumers, suppliers, partners, competitors, investors, legislators, regulators, creators, and their own human and material resources. As a result, what ends up being offered to the public is often very far from what a given creator, designer, or marketer may have planned at the outset of the process.

Even once they’ve introduced their new products and services to the marketplace, businesses can never fully anticipate how interested consumers will be, which set of consumers might show an interest, or what uses those consumers will ultimately find for the things they choose to spend their money on. Why did the VHS videocassette format win out over the technically superior Betamax? Why did FM radio gain dominance over AM technology half a century after its invention, decades after it had been written off by many as a waste of the electromagnetic spectrum? Why did the television cartoon franchise My Little Pony, initially targeted at young girls, gain a massive fan base among adult men? Plenty of ink has been spilled explaining such phenomena in retrospect, but few people, if any, anticipated these developments beforehand.

The coevolution of media, technology, and markets rarely follows a straight line, let alone a clear trajectory toward some vision of progress or perfection. One of the ways in which this process frequently defies conventional wisdom, mystifying and bedeviling many creators and consumers, is when media technology becomes less useful over time, making it more difficult for people to access, edit, store, and share information with one another. An excellent case in point is computer software, which has become much more difficult to copy and build upon than it was a few decades ago.

A nonprogrammer might be excused for being unaware of this trend or, if he is aware of it, for considering it to be an inevitable consequence of Moore’s law and the geometric growth in computer capacity and complexity over the years. Yet there is nothing remotely inevitable about it; the truth is that computer programs have become more difficult to access and edit because software publishers have engineered it that way, mostly for the purposes of forestalling competition and preventing unlicensed sharing.

In the early days of computers, there were no screens or keyboards as we now know them, and programs existed in paper form as stacks of cardboard cards with holes punched in them reflecting the 1s and 0s of binary computer code. A programmer would create a sequence on paper, then very carefully feed it, card by card, through the computer to run the program itself. At this point, copying programs was still logistically difficult—it meant reproducing the thousands of cards in a stack flawlessly and in the correct order, or re-creating the stack of cards based on detailed instructions.

By the 1960s and ’70s, computer programming languages, and the programs themselves, had evolved considerably. Instead of keying 1s and 0s directly into a machine, software developers could use a text-based screen interface, writing code that would tell the machine itself where to put the 1s and 0s. This is how most programming is done to this day, with the programmer writing source code (legible to humans) and special software converting it into machine code (also known as binary code; these are the 1s and 0s legible to computers).
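To make the distinction concrete, here is a minimal sketch in C (the language, the file name, and the compiler command are illustrative choices, not drawn from the excerpt): a few lines of human-readable source code, followed by a comment describing how a compiler might turn them into machine code.

    /* hello.c -- human-readable source code */
    #include <stdio.h>

    int main(void)
    {
        /* An instruction a person can read, edit, and adapt. */
        printf("Hello, world\n");
        return 0;
    }

    /* A compiler translates this file into machine code, for example:
     *
     *     cc hello.c -o hello
     *
     * The resulting file "hello" contains only the 1s and 0s the machine
     * executes; given just that binary, a user can run the program but
     * cannot easily read or modify it. */

Shipping the source file alongside the binary is what allowed early users to tweak programs for their own machines; shipping only the binary is the practice described in the paragraphs that follow.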

Even though computer programs became copyrightable as literary works in the U.S. beginning in the late 1970s, there was no technical reason they couldn’t be copied or edited by an end user, either for legal or illegal purposes. That’s because every software publisher, from individual hobbyists to big businesses, included the source code along with the machine code, allowing anyone to tweak and tinker with the programs they ran, optimizing them for their individual needs and idiosyncratic machines.

This began to change drastically in 1983, when IBM—one of the biggest names in the computer industry at that time—started distributing machine code without source code attached to it, so programs could be run by machines but not edited or copied by humans. This move was seen as an affront to the computer programming community, both because it prevented users from accessing software they had bought and rightfully owned and also because untold numbers of programmers within the community had contributed to the open commons of shared code that was now being privatized and severed from its source. Furthermore, the move made it harder for third-party companies to develop innovative products and services based on IBM’s hardware and software; now, any such efforts would require permission and participation by IBM.

The move was also seen as a massive business risk; even by the late 1980s, mainstream tech publications like Computerworld openly wondered whether IBM’s anticompetitive strategy would be worth the ill will it engendered among the company’s customer base and the impediments to innovation it erected for the industry at large. Yet, within a few more years, IBM’s closed-source strategy would become the norm for commercial software, contributing to the massive industry consolidation that would elevate companies like Microsoft to dominance.

The programming community didn’t sit idly by while Congress transformed its previously public domain of code into a privatized market of copyrighted goods, or while IBM and its followers started shipping private software, built on the foundations of that public domain, without its source code. Tectonic changes of this magnitude meant there was no more neutral ground; coders were either for privatization or against it, and very few people could comfortably remain in the widening chasm between the two positions.

Plenty of programmers supported these changes, of course. More privatization meant more investment, which meant more jobs and more opportunities to make money. After decades of hobbyist hacking at home with little or no financial remuneration, there were many coders who welcomed a paycheck and a cubicle in an air-conditioned office building. Others relished the chance to turn their passion into a profession by launching their own software companies, taking advantage of the same laws and practices used by IBM to achieve similar market effects on a smaller scale.

Yet there were also many programmers who resented these legal and technological obstacles to sharing code freely, and some who actively resisted the new regime of privatized code. Chief among these was a programmer named Richard M. Stallman, a veteran of the artificial intelligence lab at MIT, which was a hothouse for computer innovation in the 1970s and widely considered to be one of the birthplaces of “hacker culture.” Frustrated and morally appalled at the implications of copyrighted, closed-source software, Stallman set out to create legal and technological alternatives to the “proprietary and secret” products, services, and business practices that he believed were coming to dominate computer programming. In 1983, Stallman announced plans to start work on a new, open-source operating system called GNU (the acronym stands for “GNU’s Not Unix,” a recursive joke calculated to appeal to those with an appropriately nerdy sense of humor). As he explained two years later in a manifesto published in the hacker periodical Dr. Dobb’s Journal, “Copying all or parts of a program is as natural to a programmer as breathing, and as productive. It ought to be as free.”

Later, Stallman would coin a widely repeated slogan defining freedom in the context of the “free software” movement. In his formulation, software should be free as in speech, not as in beer. In other words, Stallman wasn’t advocating that tech laborers simply give away the fruits of their labors or toil without the benefit of a paycheck. To the contrary, he expressed hopes that software would thrive as an industry and a community not only despite having open source code but because of it. Yet he also believed that free-as-in-speech software would be one of the keys to achieving what he called a “postscarcity world,” in which well-engineered, universally accessible software programs would help to liberate humanity from the drudgery of labor and the privations of poverty, allowing everyone to “devote themselves to activities that are fun.”

Creating an open-source operating system and laying the ideological foundations for the free software movement were both impressive and influential interventions, earning Stallman an important place in hacker history. Yet these technical and philosophical efforts would become far more durable and powerful when he added a legal component to the mix. In 1989, Stallman wrote the GNU General Public License (GPL), a new species of legal instrument that would help to change the way that people thought about copyright and technology for decades to come.

As we have discussed throughout this book, the history of copyright is one of increasingly broadly defined powers to restrict access to an increasingly wide range of artifacts for an increasingly long duration. For better and for worse, copyright exists explicitly for the purpose of monopolizing cultural expression, at a cost to free and open discourse that is calculated to be worth the corresponding boost in incentive for creators to share their work. The GPL reversed this trend and inverted this power dynamic in a way that no other legal technique had ever done in the previous two and a half centuries.

Stallman accomplished all of this primarily through the use of two clever legal requirements in the GPL. First of all, any software published under this license would have to include the source code along with the machine code, allowing any user to edit and adapt the program to suit his individual needs. Second, anyone who adapted code released under the GPL into something new would also have to use the GPL if she released it, whether commercially or not. Thus, for the first time in legal history, the power of copyright was leveraged to enforce openness and access rather than exclusivity and exclusion. This concept came to be called copyleft—a play on words that served to call attention to copyright’s historically one-sided role in the marketplace of ideas while providing a quick and easy term to encapsulate the premise that the law could be used to liberate creative expression rather than to restrain it.
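In practice, these two requirements are typically carried by a short notice at the top of each source file. The sketch below is a hypothetical example (the file name and copyright holder are placeholders, and the wording follows the license’s own suggested notice for its current version): whoever receives the program gets this notice together with the source code, and anyone who redistributes an adapted version must pass the same terms along.

    /* example.c -- a hypothetical program distributed under the GPL.
     * Copyright (C) 2019  Jane Doe
     *
     * This program is free software: you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation, either version 3 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU General Public License for more details. */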

In the years following the launch of the GPL, both open licenses and the free software movement caught on like wildfire. Alternative licenses like Berkeley Software Distribution (BSD) and Apache, which defined freedom and openness in slightly different ways than the GPL, also gained widespread traction (although by many measures GPL remains the free software license with the widest use to this day). In the meantime, countless free and open-source software initiatives were launched, building on the success of GNU and the legal foundations of the GPL.

Perhaps the best-known and most widely used free software initiative is Linux, an open-source operating system first proposed and developed by a Finnish student named Linus Torvalds in 1991. Over the following decades, thousands of programmers would contribute their time and expertise to the Linux project and, with the aid of a GPL license, it would come to dominate many of the largest computing markets in the world, including (at the time of writing) over 95 percent of all internet servers and over 80 percent of all smartphones (in the form of Android, which is based on the Linux kernel).

True to Stallman’s vision, Linux is also responsible for tens of billions of dollars in annual revenues for a variety of companies large and small, from Red Hat, which makes over $2 billion per year providing open-source software and services to the business community, to IBM itself, which currently generates billions of dollars per year by selling and servicing hardware that runs on Linux. Ironically, it seems IBM’s decision to switch to closed-source software in 1983 paid off for the company over the long term—by inspiring Stallman, Torvalds, and thousands of others to invest in developing better, cheaper, open-source alternatives!

From The Essential Guide to Intellectual Property by Aram Sinnreich. Published by Yale University Press in 2019. Reproduced with permission.


Aram Sinnreich is associate professor and chair of communication studies at American University. His previous publications include The Piracy Crusade and Mashed Up.

