CHAPTER ONE
Solutionism and Its Discontents

"In the future, people will spend less time trying to get technology to work . . . because it will just be seamless. It will just be there. The Web will be everything and it will also be nothing. It will be like electricity. . . . If we get this right, I believe we can fix all the world's problems."
—Eric Schmidt

"'Solutionism' [interprets] issues as puzzles to which there is a solution, rather than problems to which there may be a response."
—Gilles Paquet

"The overriding question, 'What might we build tomorrow?' blinds us to questions of our ongoing responsibilities for what we built yesterday."
—Paul Dourish and Scott D. Mainwaring

Have you ever peeked inside a friend's trash can? I have. And even though I've never found anything worth reporting—not to the KGB anyway—I've always felt guilty about my insatiable curiosity. Trash, like one's sex life or temporary eating disorder, is a private affair par excellence; the less said about it, the better. While Mark Zuckerberg insists that all activities get better when performed socially, it seems that throwing away the garbage would forever remain an exception—one unassailable bastion of individuality to resist Zuckerberg's tyranny of the social.

Well, this exception is no more: BinCam, a new project from researchers in Britain and Germany, seeks to modernize how we deal with trash by making our bins smarter and—you guessed it—more social. Here is how it works: The bin's inside lid is equipped with a tiny smartphone that snaps a photo every time someone closes it—all of this, of course, in order to document what exactly you have just thrown away. A team of badly paid humans, recruited through Amazon's Mechanical Turk system, then evaluates each photo. What is the total number of items in the picture? How many of them are recyclable? How many are food items? After this data is attached to the photo, it's uploaded to the bin owner's Facebook account, where it can also be shared with other users. Once such smart bins are installed in multiple households, BinCam's creators hope, Facebook can be used to turn recycling into an exciting, game-like competition. A weekly score is calculated for each bin, and as the amounts of food waste and recyclable materials in the bins decrease, households earn gold bars and leaves. Whoever wins the most bars and tree leaves, wins. Mission accomplished; planet saved!

Nowhere in the academic paper that accompanies the BinCam presentation do the researchers raise any doubts about the ethics of their undoubtedly well-meaning project. Should we get one set of citizens to do the right thing by getting another set of citizens to spy on them? Should we introduce game incentives into a process that has previously worked through appeals to one's duties and obligations? Could the "goodness" of one's environmental behavior be accurately quantified with tree leaves and gold bars? Should it be quantified in isolation from other everyday activities? Is it okay not to recycle if one doesn't drive? Will greater public surveillance of one's trash bins lead to an increase in eco-vigilantism? Will participants stop doing the right thing if their Facebook friends are no longer watching?

Questions, questions. The trash bin might seem like the most mundane of artifacts, and yet it's infused with philosophical puzzles and dilemmas.
It's embedded in a world of complex human practices, where even tiny adjustments to seemingly inconsequential acts might lead to profound changes in our behavior. It very well may be that, by optimizing our behavior locally (i.e., getting people to recycle with the help of games and increased peer surveillance), we'll end up with suboptimal behavior globally; that is, once the game-like incentives we have come to expect in one simple environment are missing elsewhere, we might no longer want to perform our civic duties there. One local problem might be solved—but only by triggering several global problems that we can't recognize at the moment.

A project like BinCam would have been all but impossible fifteen years ago. First, trash bins had no sensors that could take photos and upload them to sites like Facebook; now, tiny smartphones can do all of this on the cheap. Amazon didn't have an army of bored freelancers who could do virtually any job as long as they received their few pennies per hour. (And even those human freelancers might become unnecessary once automated image-recognition software gets better.) Most importantly, there was no way for all our friends to see the contents of our trash bins; fifteen years ago, even our personal websites wouldn't get the same level of attention from our acquaintances—our entire "social graph," as the geeks would put it—that our trash bins might receive from our Facebook friends today. Now that we are all using the same platform—Facebook—it becomes possible to steer our behavior with the help of social games and competitions; we no longer have to save the environment at our own pace using our own unique tools. There is power in standardization!

These two innovations—that more and more of our life is now mediated through smart sensor-powered technologies and that our friends and acquaintances can now follow us anywhere, making it possible to create new types of incentives—will profoundly change the work of social engineers, policymakers, and many other do-gooders. All will be tempted to exploit the power of these new techniques, either individually or in combination, to solve a particular problem, be it obesity, climate change, or congestion. Today we already have smart mirrors that, thanks to complex sensors, can track and display our pulse rates based on slight variations in the brightness of our faces; soon, we'll have mirrors that, thanks to their ability to tap into our "social graph," will nudge us to lose weight because we look pudgier than most of our Facebook friends.

Or consider a prototype teapot built by British designer-cum-activist Chris Adams. The teapot comes with a small orb that can either glow green (making tea is okay) or red (perhaps you should wait). What determines the coloring? Well, the orb, with the help of some easily available open-source hardware and software, is connected to a site called Can I Turn It On? (http://www.caniturniton.com), which, every minute or so, queries Britain's national grid for aggregate power-usage statistics. If the frequency figure returned by the site is higher than the baseline of 50 hertz, the orb glows green; if lower, red. The goal here is to provide additional information for responsible teapot use. But it's easy to imagine how such logic can be extended much, much further, BinCam style. Why, for example, not reward people with virtual, Facebook-compatible points for not using the teapot in times of high electricity usage? Or why not punish those who disregard the teapot's warnings about high usage by publicizing their irresponsibility among their Facebook friends? Social engineers have never had so many options at their disposal.
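To appreciate just how little machinery such nudging requires, here is a minimal sketch of the orb's decision rule as described above. Only the roughly one-minute polling cadence and the 50-hertz baseline come from that description; the endpoint path, the response format, and the function names are illustrative assumptions, not Adams's actual code.

```python
# A sketch of the teapot orb's logic, under assumed I/O details (the real
# Can I Turn It On? service may expose grid data in a different form).
import time
import urllib.request

BASELINE_HZ = 50.0           # nominal frequency of Britain's national grid
POLL_INTERVAL_SECONDS = 60   # the orb checks "every minute or so"

def fetch_grid_frequency() -> float:
    """Fetch the current aggregate grid frequency (hypothetical endpoint)."""
    with urllib.request.urlopen("http://www.caniturniton.com/frequency") as response:
        return float(response.read().decode().strip())

def orb_colour(frequency_hz: float) -> str:
    """Above the baseline means spare capacity (green); below means wait (red)."""
    return "green" if frequency_hz > BASELINE_HZ else "red"

if __name__ == "__main__":
    while True:
        try:
            print(orb_colour(fetch_grid_frequency()))
        except OSError:
            print("red")  # if the grid data is unreachable, err on the side of waiting
        time.sleep(POLL_INTERVAL_SECONDS)
```

The entire intervention reduces to a single threshold comparison; the points, leaderboards, and public shaming imagined above would bolt onto the same loop with only a few more lines.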
Sensors alone, without any connection to social networks or data repositories, can do quite a lot these days. The elderly, for example, might appreciate smart carpets and smart bells that can detect when someone has fallen over and inform others. Even trash bins can be smart in a very different way. Thus, a start-up with the charming name of BigBelly Solar hopes to revolutionize trash collecting by making solar-powered bins that, thanks to built-in sensors, can inform waste managers of their current capacity and predict when they will need to be emptied. This, in turn, can optimize trash-collection routes and save fuel. The city of Philadelphia has been experimenting with such bins since 2009; as a result, it cut its city-center garbage-collecting sorties from 17 to 2.5 times a week and reduced the number of staff from thirty-three to just seventeen, bringing in $900,000 in savings in just one year.

Likewise, city officials in Boston have been testing Street Bump, an elaborate app that relies on accelerometers, the now ubiquitous motion detectors found in many smartphones, to map out potholes on Boston's roads. The driver only has to turn the app on and start driving; the smartphone will do the rest and communicate with the central server as necessary. Thanks to a series of algorithms, the app knows how to recognize and disregard manhole covers and speed bumps, while diligently recording the potholes. Once at least three drivers have reported bumps in the same spot, the bump is recognized as a pothole. Likewise, Google relies on GPS-enabled Android phones to generate live information about traffic conditions: once you start using its map and disclose your location, Google knows where you are and how fast you are moving. Thus, it can make a good guess as to how bad the road situation is, feeding this information back into Google Maps for everyone to see. These days, it seems, just carrying your phone around might be an act of good citizenship.
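The crowdsourced half of that pipeline is simple enough to sketch. The three-driver threshold comes from the description above; the coordinate grid, the data layout, and the names below are illustrative assumptions, and the real app's accelerometer-based filtering of manholes and speed bumps is not attempted here.

```python
# A toy version of Street Bump's "three drivers" rule; grouping nearby GPS
# fixes onto a coarse grid is an assumption, not the city's actual method.
from collections import defaultdict
from typing import Dict, List, Set, Tuple

GRID_DEGREES = 0.0001  # roughly ten metres; treats nearby reports as the same spot

def spot(lat: float, lon: float) -> Tuple[int, int]:
    """Snap a GPS fix onto a coarse grid cell."""
    return (round(lat / GRID_DEGREES), round(lon / GRID_DEGREES))

def find_potholes(reports: List[Tuple[str, float, float]]) -> List[Tuple[int, int]]:
    """A cell counts as a pothole once at least three distinct drivers report a bump there."""
    drivers_per_spot: Dict[Tuple[int, int], Set[str]] = defaultdict(set)
    for driver_id, lat, lon in reports:
        drivers_per_spot[spot(lat, lon)].add(driver_id)
    return [cell for cell, drivers in drivers_per_spot.items() if len(drivers) >= 3]

# Three different drivers hit the same rough location, so it is flagged;
# the single report elsewhere is ignored.
reports = [("car-1", 42.35871, -71.05672),
           ("car-2", 42.35872, -71.05671),
           ("car-3", 42.35870, -71.05673),
           ("car-1", 42.36001, -71.06002)]
print(find_potholes(reports))
```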
The Will to Improve (Just About Everything!)

That smart technology and all of our social connections (not to mention useful statistics like the real-time aggregate consumption of electricity) can now be "inserted" into our every mundane act, from throwing away our trash to making tea, might seem worth celebrating, not scrutinizing. Likewise, that smartphones and social-networking sites allow us to experiment with interventions impossible just a decade ago seems like a genuinely positive development. Not surprisingly, Silicon Valley is already awash with plans for improving just about everything under the sun: politics, citizens, publishing, cooking. Alas, all too often, this never-ending quest to ameliorate—or what the Canadian anthropologist Tania Murray Li, writing in a very different context, has called "the will to improve"—is shortsighted and only perfunctorily interested in the activity for which improvement is sought. Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!—this quest is likely to have unexpected consequences that could eventually cause more damage than the problems it seeks to address.

I call the ideology that legitimizes and sanctions such aspirations "solutionism." I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions—the kind of stuff that wows audiences at TED conferences—to problems that are extremely complex, fluid, and contentious. These are the kinds of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that "solutionists" have defined them; what's contentious, then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching "for the answer before the questions have been fully asked." How problems are composed matters every bit as much as how problems are resolved.

Solutionism, thus, is not just a fancy way of saying that for someone with a hammer, everything looks like a nail; it's not just another riff on the inapplicability of "technological fixes" to "wicked problems" (a subject I address at length in The Net Delusion). It's not only that many problems are not suited to the quick-and-easy solutionist tool kit. It's also that what many solutionists presume to be "problems" in need of solving are not problems at all; a deeper investigation into the very nature of these "problems" would reveal that the inefficiency, ambiguity, and opacity—whether in politics or everyday life—that the newly empowered geeks and solutionists are rallying against are not in any sense problematic. Quite the opposite: these vices are often virtues in disguise. That, thanks to innovative technologies, the modern-day solutionist has an easy way to eliminate them does not make them any less virtuous.

It may seem that a critique of solutionism would, by its very antireformist bias, be the prerogative of the conservative. In fact, many of the antisolutionist jibes throughout this book fit into the tripartite taxonomy of reactionary responses to social change so skillfully outlined by the social theorist Albert Hirschman. In his influential book The Rhetoric of Reaction, Hirschman argued that all progressive reforms usually attract conservative criticisms that build on one of the following three themes: perversity (whereby the proposed intervention only worsens the problem at hand), futility (whereby the intervention yields no results whatsoever), and jeopardy (whereby the intervention threatens to undermine some previous, hard-earned accomplishment). Although I resort to all three of these critiques in the pages that follow, my overall project does differ from the conservative resistance studied by Hirschman. I do not advocate inaction or deny that many (though not all) of the problems tackled by solutionists—from climate change to obesity to declining levels of trust in the political system—are important and demand immediate action (how exactly those problems are composed is, of course, a different matter; there is more than one way to describe each). But the urgency of the problems in question does not automatically confer legitimacy upon a panoply of new, clean, and efficient technological solutions so in vogue these days. My preferred solutions—or, rather, responses—are of a very different kind.
It's also not a coincidence that my critique of solutionism bears some resemblance to several critiques of the numerous earlier efforts to put humanity into too tight a straitjacket. Today's straitjacket might be of the digital variety, but it's hardly the first or the tightest. While the word "solutionism" may not have been used, many important thinkers have addressed its shortcomings, even if using different terms and contexts. I'm thinking, in particular, of Ivan Illich's protestations against the highly efficient but dehumanizing systems of professional schooling and medicine, Jane Jacobs's attacks on the arrogance of urban planners, Michael Oakeshott's rebellion against rationalists in all walks of human existence, Hans Jonas's impatience with the cold comfort of cybernetics, and, more recently, James Scott's concern with how states have forced what he calls "legibility" on their subjects. Some might add Friedrich Hayek's opposition to central planners, with their inherent knowledge deficiency, to this list.

These thinkers have been anything but homogeneous in their political beliefs; Ivan Illich, Friedrich Hayek, Jane Jacobs, and Michael Oakeshott would make a rather rowdy dinner party. But these highly original thinkers, regardless of political persuasion, have shown that their own least favorite brand of solutionist—be it Jacobs's urban planners or Illich's professional educators—has a very poor grasp not just of human nature but also of the complex practices that this nature begets and thrives on. It's as if the solutionists have never lived a life of their own but learned everything they know from books—and those books weren't novels but manuals for refrigerators, vacuum cleaners, and washing machines.

Thomas Molnar, a conservative philosopher who, for his smart and vehement critique of technological utopianism written in the early 1960s, also deserves a place in the antisolutionist pantheon, put it really well when he complained that "when the Utopian writers deal with work, health, leisure, life expectancy, war, crimes, culture, administration, finance, judges and so on, it is as if their words were uttered by an automaton with no conception of real life. The reader has the uncomfortable feeling of walking in a dreamland of abstractions, surrounded by lifeless objects; he manages to identify them in a vague way, but, on closer inspection, he sees that they do not really conform to anything familiar in shape, color, volume, or sound." Dreamlands of abstractions are a dime a dozen these days; what works in Palo Alto is assumed to work in Penang. It's not that the solutions proposed are unlikely to work but that, in solving the "problem," solutionists twist it in such an ugly and unfamiliar way that, by the time it is "solved," the problem becomes something else entirely. Everyone is quick to celebrate victory, only no one remembers what the original solution sought to achieve.

The ballyhoo over the potential of new technologies to disrupt education—especially now that several start-ups offer online courses to hundreds of thousands of students, who grade each other's work and get no face time with instructors—is a case in point. Digital technologies might be a perfect solution to some problems, but those problems don't include education—not if by education we mean the development of the skills to think critically about any given issue.
Online resources might help students learn plenty of new facts (or "facts," in case they don't cross-check what they learn on Wikipedia), but such fact cramming is a far cry from what universities aspire to teach their students. As Pamela Hieronymi, a professor of philosophy at the University of California, Los Angeles (UCLA), points out in an important essay on the myths of online learning, "Education is not the transmission of information or ideas. Education is the training needed to make use of information and ideas. As information breaks loose from bookstores and libraries and floods onto computers and mobile devices, that training becomes more important, not less." Of course, there are plenty of tools for increasing one's digital literacy, but those tools go only so far; they might help you to detect erroneous information, but they won't organize your thoughts into a coherent argument.

Adam Falk, president of Williams College, delivers an even more powerful blow against solutionism in higher education when he argues that it would be erroneous to pretend that the solutions it peddles are somehow compatible with the spirit and goals of the university. Falk notes that, based on the research done at Williams, the best predictor of students' intellectual success in college is not their major or GPA but the amount of personal, face-to-face contact they have with professors. According to Falk, averaging letter grades assigned by five random peers—as at least one much-lauded start-up in this space, Coursera, does—is not the "educational equivalent of a highly trained professor providing thoughtful evaluation and detailed response." To pretend that this is the case, insists Falk, "is to deny the most significant purposes of education, and to forfeit its true value."

Here we have a rather explicit mismatch between the idea of education embedded in the proposed set of technological solutions and the time-honored idea of education still cherished at least by some colleges. In an ideal world, of course, both visions can coexist and prosper simultaneously. However, in the world we inhabit, where administrators are as cost-conscious as ever, the approach that produces the most graduates per dollar spent is far more likely to prevail, the poverty of its intellectual vision notwithstanding. Herein lies one hidden danger of solutionism: the quick fixes it peddles do not exist in a political vacuum. In promising almost immediate and much cheaper results, they can easily undermine support for more ambitious, more intellectually stimulating, but also more demanding reform projects.

Kooks and Cooks

Once we leave the classroom and enter the kitchen, the limitations of solutionism are delineated in even sharper colors. Political philosopher Michael Oakeshott, conservative that he was, particularly liked emphasizing that cooking, like science or politics, is a very complex set of (mostly invisible) practices and traditions that guide us in preparing our meals. "It might be supposed that an ignorant man, some edible materials, and a cookery book compose together the necessities of a self-moved (or concrete) activity called cooking. But nothing is further from the truth," he wrote in his 1951 essay "Political Education." Rather, for Oakeshott the cookery book is "nothing more than an abstract of somebody's knowledge of how to cook; it is the stepchild, not the parent of the activity."
"A cook," he wrote in another essay, "is not a man who first has a vision of a pie and then tries to make it; he is a man skilled in cookery, and both his projects and his achievements spring from that skill." Oakeshott didn't much fear that our cooking habits would be destroyed by the proliferation of culinary literature; interpreting that literature was only possible within a rich tradition of cooking, so perusing such books might even strengthen one's appreciation of the culinary culture. Or, as he himself put it, "the book speaks only to those who know already the kind of thing to expect from it and consequently how to interpret it." He was not against using the book; rather, he took issue with people who thought that the book—rather than the tradition that produced it—was the main actor here. Whatever rules, recipes, and algorithms the book contained, all of them made sense only when interpreted and applied within the cooking tradition. For Oakeshott, the cookbook was the end (or an output), not the start (or an input), of that tradition. An argument against rationalists who refused to acknowledge the importance of practices and traditions, rather than a celebration of cookery books, it's a surprisingly upbeat moment in Oakeshott's thought. However, one can only wonder if Oakeshott would need to revise his judgment today, now that cooking books have been replaced with the kinds of sophisticated gadgetry that would have Buckminster Fuller, the archsolutionist who never stopped fantasizing about the perfect kitchen, brimming with envy. Paradoxically, as technologies get smarter, the maneuvering space for interpretation—what Oakeshott thought would bring cooks in touch with the world of practices and traditions—begins to shrink and potentially disappear entirely. New, smarter tech- 10 Solutionis™ and Its Discontents nologies make it possible to finally position, as it were, the cookery book's instructions outside the tradition; almost no knowledge is required to cook with their help. Today's technologies are no longer dumb, passive appliances. Some of them feature tiny, sophisticated sensors that "understand"—if that's the right word—what's going on in our kitchens and attempt to steer us, their masters, in the right direction. Here is modernity in a nutshell: We are left with possibly better food but without the joy of cooking. British magazine New Scientist recently covered a few such solutionist projects. Meet Jinna Lei, a computer scientist at the University of Washington who has built a system in which a cook is monitored by several video cameras installed in the kitchen. These cameras are clever: they can recognize the depth and shape of objects in their view and distinguish between, say, apples and bowls. Thanks to this benign surveillance, chefs can be informed whenever they have deviated from their chosen recipe. Each object has a number of activities associated with it—you don't normally boil spoons or fry arugula—and the system tracks how well the current activity matches the object in use. "For example, if the system detects sugar pouring into a bowl containing eggs, and the recipe does not call for sugar, it could log the aberration," Lei told New Scientist. To improve the accuracy of tracking, Lei is also considering adding a special thermal camera that would identify the user's hands by body heat. The quest here is to turn the modern kitchen into a temple of modern-day Taylorism, with every task tracked, analyzed, and optimized. 
Solutionists hate making errors and love sticking to algorithms. That cooking thrives on failure and experimentation, that deviating from recipes is what creates culinary innovations and pushes a cuisine forward, is discarded as whimsical and irrelevant. For many such well-meaning innovators, the context of the practice they seek to improve doesn't matter—not as long as efficiency can be increased. As a result, chefs are imagined not as autonomous virtuosi or gifted craftsmen but as enslaved robots who should never defy the commands of their operating systems.

Another project mentioned in New Scientist is even more degrading. A group of computer scientists at Kyoto Sangyo University in Japan is trying to marry the logic of the kitchen to the logic of "augmented reality"—the fancy term for infusing our everyday environment with smart technologies. (Think of Quick Response codes that can be scanned with a smartphone to unlock additional information or of the upcoming goggles from Google's Project Glass, which use data streams to enhance your visual field.) To this end, the Japanese researchers have mounted cameras and projectors on the kitchen's ceiling so that they can project instructions—in the form of arrows, geometric shapes, and speech bubbles guiding the cook through each step—right onto the ingredients. Thus, if you are about to cut a fish, the system will project a virtual knife and mark exactly where it ought to cut into the fish's body. And there's also a tiny physical robot that sits on the countertop. Thanks to the cameras, it can sense that you've stopped touching the ingredients and inquire whether you want to move on to the next step in the recipe.

Now, what exactly is "augmented" about such a reality? It may be augmented technologically, but it also seems diminished intellectually. At best, we are left with "augmented diminished reality." Some geeks stubbornly refuse to recognize that challenges and obstacles—which might include initial ignorance of the right way to cut the fish—enhance rather than undermine the human condition. To make cooking easier is not necessarily to augment it—quite the opposite. To subject it fully to the debilitating logic of efficiency is to deprive humans of the ability to achieve mastery in this activity, to make human flourishing impossible, and to impoverish our lives. A more appropriate solution here would not make cooking less demanding but make its rituals less rigid and perhaps even more challenging.

This is not a snobbish defense of the sanctified traditions of cooking. In a world where only a select few could master the tricks of the trade, such "augmented" kitchens would probably be welcome, if only for their promise to democratize access to this art. But this is not the world we inhabit: detailed recipes and instructional videos on how to cook the most exquisite dish have never been easier to find on Google. Do we really need a robot—not to mention surveillance cameras above our heads—to cook that stuffed turkey or roast that lamb? Besides, it's not so hard to predict where such progress would lead: once inside our kitchens, these data-gathering devices would never leave, developing new, supposedly unanticipated functions.
First, we'd install cameras in our kitchens to receive better instructions; then food and consumer electronics companies would tell us that they'd like us to keep the cameras to improve their products; and, finally, we'd discover that all our cooking data now resides on a server in California, with insurance companies analyzing just how much saturated fat we consume and adjusting our insurance premiums accordingly. Cooking abetted by smart technology could be a Trojan horse opening the way for far more sinister projects.

None of this is to say that technology cannot increase our pleasure in cooking—and not just in terms of making our food tastier and healthier. Technology, used with some imagination and without the traditional solutionist fetishism of efficiency and perfection, can actually make the cooking process more challenging, opening up new vistas for experimentation and giving us new ways to violate the rules. Compare the impoverished culinary vision on offer in New Scientist with some of the fancy gadgetry embraced by the molecular gastronomy movement. From thermal immersion circulators for cooking at low temperature to printers with edible paper, from syringes used to produce weird noodles and caviar to induction cookers that send magnetic waves through metal pans, all these gadgets make cooking more difficult, more challenging, and more exciting. They can infuse any aspiring chef with great passion for the culinary arts—much more so than surveillance cameras or instruction-spewing robots. Strict adherence to recipes can produce predictable, albeit tasty, dishes—and occasionally this is just what we want. But such standardization can also make our kitchens as exciting as McDonald's franchises. Celebrating innovation for its own sake is in bad taste. For technology truly to augment reality, its designers and engineers should get a better idea of the complex practices that our reality is composed of.

As the molecular gastronomy example illustrates, to reject solutionism is not to reject technology. Nor is it to abandon all hope that the world around us can be ameliorated; technology could and should be part of this project. To reject solutionism is to transcend the narrow-minded rationalistic mind-set that recasts every instance of an efficiency deficit—like the lack of perfect, comprehensive instructions in the kitchen—as an obstacle that needs to be overcome. There are other, more fruitful, more humanistic, and more responsible ways to think about technology's role in enabling human flourishing, but solutionists are unlikely to grasp them unless they complicate their dangerously reductionist account of the human condition.

Pasteur and Zynga

I'll be the first to acknowledge that the problems posed by solutionism are not in any sense new; as already noted, generations of earlier thinkers have already addressed many related pitfalls and pathologies. And yet I feel that we are living through a resurgence of a very particular modern kind of solutionism. Today the most passionate solutionists are not to be found in city halls and government ministries; rather, they are to be found in Silicon Valley, trying to take the lessons they have learned from "the Internet"—and there's never been a more deceptively didactic source of great lessons about "life, the universe and everything" (to use Douglas Adams's memorable phrase)—and put them into practice in various civic initiatives and plans to fix the bugs of humanity.
Why the scare quotes around "the Internet"? In the afterword to my first book, The Net Delusion, I made what I now believe to be one of its main, even if overlooked, points: the physical infrastructure we know as "the Internet" bears very little resemblance to the mythical "Internet"—the one that reportedly brought down the governments of Tunisia and Egypt and is supposedly destroying our brains—that lies at the center of our public debates. The infrastructure and design of this network of networks do play a certain role in sanctioning many of these myths—for example, the idea that "the Internet" is resistant to censorship comes from the unique qualities of its packet-switching communication mechanism—but "the Internet" that is the bane of public debates also contains many other stories and narratives—about innovation, surveillance, capitalism—that have little to do with the infrastructure per se.

French philosopher Bruno Latour, writing of Louis Pasteur's famed scientific accomplishments, distinguished between Pasteur, the actual historical figure, and "Pasteur," the mythical almighty character who has come to represent the work of other scientists and entire social movements, like the hygienists, who, for their own pragmatic reasons, embraced Pasteur with open arms. But anyone interested in writing the history of that period cannot just deploy the name "Pasteur" as an unproblematic, objective term; it needs to be disassembled so that its various parts can be studied in their own right. The story of how these disparate parts—including the actual Louis Pasteur—have become "Pasteur," the national hero of France whom we see in textbooks, is what the history of science, at least in its Latourian vision, should aspire to uncover.

Now, I do not set out to write history in this book. If I did, I would indeed try to show the contingency and fluidity of the very idea of "the Internet" and attempt to trace how "the Internet" has come to mean what it means today. In this book, I'm interested in a much narrower slice of this story; namely, I want to explore how "the Internet" has become the impetus for many of the contemporary solutionist initiatives while also being the blinkers that prevent us from seeing their shortcomings. In other words, I'm interested in why and how "the Internet" excites—and why and how it confuses. I want to understand why and how iTunes or Wikipedia—some of the core mythical components of "the Internet"—have become models to think about the future of politics. How have Zynga and Facebook become models to think about civic engagement? How have Yelp's and Amazon's reviews become models to think about criticism? How has Google become a model for thinking about business and social innovation—as if it had a coherent philosophy—so that books with titles like What Would Google Do? can become best sellers?

The arrival of "the Internet" both boosted and vindicated many of the solutionist attitudes that I describe in this book. "The Internet" has allowed solutionists to significantly expand the scope of their interventions, running experiments on a much grander scale. It has also given rise to a new set of beliefs—what I call "Internet-centrism"—the chief of which is the firm conviction that we are living through unique, revolutionary times, in which the previous truths no longer hold, everything is undergoing profound change, and the need to "fix things" runs as high as ever.
"The Internet," in short, has supplied solutionists with ample ammunition to ratchet up their war on inefficiency, ambiguity, and disorder, while also providing some new justification for doing so. But it has also supplied them with a set of assumptions about both how the world works and how it should work, about how it talks and how it should talk, recasting many issues and debates in a decidedly Internet-centric manner. Internet-centrism relates to "the Internet" very much like scientism relates to science: its epistemology tolerates no dissenting viewpoints, while all recent history is just about how the great spirit of "the Internet" presents itself to us. This book, then, is an effort to liberate our technology debates from the many unhealthy and erroneous assumptions about "the Internet." In this, it's much more normative than history aspires to be. Following the work of Latour and Thomas Kuhn, many historians of science have come to accept that, while the idea of "Science" with a capital S is even more chock-full of myths than the idea of "the Internet," they have made peace with this discovery, reasoning that, as long as there are scientists who think there is this "Science" with a capital S out there, they are still worth studying, regardless of whether historians of science themselves actually share this belief. It's an elegant and reassuring approach, but I find it very hard to pursue when thinking about "the Internet" and the corrosive influence that this idea is beginning to have on public discourse and the kinds of reform projects that are getting priority. In this sense, to point out the many limitations of solutionism without also pointing out the limitations of what I call "Internet-centrism" would not be very productive; without the latter, the former wouldn't be half as powerful. So before we can embark on discussing the shortcomings of solutionism in areas like politics or crime prevention, it's worth getting a better grasp of the pernicious intellectual influence of Internet-centrism—a task we turn to in the next chapter. Revealing Internet-centrism for what it is will make debunking solutionism much less difficult. 16 CHAPTER TWO The Nonsense of "the Internet"— and How to Stop It "The internet is not territory to be conquered, but life to be preserved and allowed to evolve freely." —Nicolas Mendoza, AlJazeera.com "What made Blockbuster close? The Internet. What made At the Movies get canceled? The Internet. Who went tromping across my lawn and ruined my petunias? The Internet." —Eric Snider, cinematical blog These days, "the Internet" can mean just about anything. "The Next Battle for Internet Freedom Could Be over 3D Printing," proclaimed the headline on TechCrunch, a popular technology blog, in August 2012. Given how fuzzy the very idea of "the Internet" is, derivative concepts like "Internet freedom" have become so all-encompassing and devoid of any actual meaning that they can easily cover the regulation of 3D printers, the thorny issues of net neutrality, and the rights of dissident bloggers in Azerbaijan. Instead of debating the merits of individual technologies and crafting appropriate policies and regulations, we have all but surrendered to catchall terms like "the Internet," which try to bypass any serious and empirical debate altogether. 17 To Save Everything, Click Here Today, "the Internet" is regularly invoked to thwart critical thinking and exclude nongeeks from the discussion. 
Here is how one prominent technology blogger argued that Congress should not regulate facial-recognition technology: "All too many U.S. lawmakers are barely beyond the stage of thinking that the Internet is a collection of tubes; do we really want these guys to tell Facebook or any other social media company how to run its business?" You see, it's all so complex—much more complex than health care or climate change—that only geeks should be allowed to tinker with the magic tubes. "The Internet" is holy—so holy that it lies beyond the means of democratic representation. That facial-recognition technology developed independently of "the Internet" and has its roots in 1960s research funded by various defense agencies means little in this context. Once part of "the Internet," any technology loses its history and intellectual autonomy. It simply becomes part of the grand narrative of "the Internet," which, despite what postmodernists say about the death of metanarratives, is one metanarrative that is doing all right. Today, virtually every story is bound to have an "Internet" angle—and it's the job of our Internet apostles to turn those little anecdotes into fairy tales about the march of Internet progress, just a tiny chapter in their cyber-Whig theory of history. "The Internet": an idea that effortlessly fills minds, pockets, coffers, and even the most glaring narrative gaps.

Whenever you hear someone tell you, "This is not how the Internet works"—as technology bloggers are wont to inform everyone who cares to read their scribblings—you should know that your interlocutor believes your views to be reactionary and antimodern. But where is the missing manual to "the Internet"—the one that explains how this giant series of tubes actually works—that the geeks claim to know by heart? Why are they so reluctant to acknowledge that perhaps there's nothing inevitable about how various parts of this giant "Internet" work and fit together? Is it really true that Google can't be made to work differently? Tacitly, of course, the geeks do acknowledge that there is nothing permanent about "the Internet"; that's why they lined up to oppose the Stop Online Piracy Act (SOPA), which—oh, the irony—threatened to completely alter "how the Internet works." So, no interventions will work "on the Internet"—except for those that will. SOPA was a bad piece of legislation, but there's something odd about how the geeks can simultaneously claim that the Internet is fixed and permanent and work extremely hard in the background to keep it that way. Their theory stands in stark contrast to their practice—a common modern dissonance that they prefer not to dwell on.

"The Internet" is also a way to shift the debate away from more concrete and specific issues, essentially burying it in obscure and unproductive McLuhanism that seeks to discover some nonexistent inner truths about each and every medium under the sun. Consider how Nicholas Carr, one of today's most vocal Internet skeptics, frames the discussion about the impact that digital technologies have on our ability to think deep thoughts and concentrate. In his best-selling book The Shallows, Carr worries that "the Internet" is making his brain demand "to be fed the way the Net fed it—and the more it was fed, the hungrier it became." He complains that "the Net . . . provides a high-speed system for delivering responses and rewards . . . which encourage the repetition of both physical and mental actions."
The book is full of similar complaints. For Carr, the brain is 100 percent plastic, but "the Internet" is 100 percent fixed. Does "the Net" that Carr writes about actually exist? Is there much point in lumping together sites like Instapaper—which lets users save Web pages in order to read them later, in an advertising-free and undisturbed environment—and, say, Twitter? Is it inevitable that Facebook should constantly prompt us to check new links? Should Twitter reward us for tweeting links that we never open? Or punish us? Or do nothing—as is the case now? Many of these are open questions—and the way in which technology companies resolve them depends, in part, on what we, their users, tell them (provided, of course, we can get our own act together). There may be some business hurdles to making the digital services we use less distracting, but this is where one has to explore the world of political economy, not that of neuroscience, even if the latter is the much more fashionable of the two.

Carr, however, refuses to abandon the notion of "the Net," with its predetermined goals and inherent features; instead of exploring the interplay between design, political economy, and information science, he keeps telling us that "the Net" is, well, shite. Alas, it won't get any better until we stop thinking that there is a "Net" out there. How can we account for the diversity of logics and practices promoted by digital tools without having to resort to explanations that revolve around terms like "the Net"? "The Net" is a term that should appear on the last—not first!—page of our books about digital technologies; it cannot explain itself.

Like Marshall McLuhan before him, Carr wants to score, rank, and compare different media and come up with some kind of quasi-scientific pecking order for them (McLuhan went as far as to calculate sense ratios for each medium that he "studied"). This very medium-centric approach overlooks the diversity of actual practices enabled by each medium. One may hate television for excessive advertising—but then, a publicly supported broadcasting system may have no need for advertising at all; TV programs don't always have to be interrupted by ads. Video games might make us more violent—but, once again, they can do so many other things in so many different ways that it seems unfair to connect them only to one function. There's very little that the New York Times has in common with the Sun or that NPR shares with Rush Limbaugh. Likewise, there's nothing inevitable about Google making information available permanently, or about Facebook trying to pitch unneeded products or not limiting the number of links it shows users to, say, ten a day. These are not "inherent" properties of "the Net"; these companies have chosen to do these things—perhaps for business reasons or out of sheer arrogance and self-confidence—but they could have easily chosen otherwise. In fact, all these companies seem to be adding or subtracting at least one feature per week; if anything, this is the best argument for not assuming that their platforms are somehow just a way in which "the Net" speaks to us. If "the Net" does have a voice when it speaks to us, it's that of a schizophrenic.
Given his McLuhanesque medium-centrism, it's not surprising that Carr has little to say about fighting all the digital distractions he identifies: his notion of the ever-permanent and rigid "Net" prevents him from identifying structural reforms that can result in less distraction ("My interest is description, not prescription," Carr told the New York Observer). In Carr's universe, we can only arm ourselves with software that can cut our Internet connections. Or we can all move to the silent sanctity of the mountain ranges of Colorado, as Carr himself did when writing his book. Tinkering with "the Net" itself is not just impossible, it's unthinkable: its logic cannot be reversed; it can only be (occasionally) circumvented.

Against the Internet Grain

As it happens, Internet skeptics and optimists share quite a lot of common ground; both depend on some stable notion of "the Internet" to advance their arguments. Remove that notion, along with its simplistic assumptions about the inherent benefits of openness or publicness, and the pundits are suddenly forced to confront complex empirical matters, to inquire into the politics of algorithms, to grapple with the history of facial-recognition technologies, to understand how techniques like "deep packet inspection" actually work. As long as Internet-centrism rules supreme, our technological debate will remain lazy, shallow, and unproductive: "the Internet," no matter how many TED talks and Kindle singles are dedicated to it, will not tell us whether we need regular public audits of search engine giants like Google. Of course, pundits might say that such audits are "a war on Internet openness"—but this is precisely the kind of discourse we ought to avoid, as it makes claims about what appears to be a mythical entity.

It's not surprising, then, that imagining life after "the Internet" is so often an exercise in despair, a one-way ticket to irrelevance, cynicism, or madness. "The Internet," it seems, has arrived for good, and its finality is hardly contested; "the network," as its foremost theorist Lawrence Lessig assures us in the pages of the New Republic, "is not going away." It's not just that we no longer remember the world before Google, Facebook, and Wikipedia; it's also that large chunks of that world either no longer exist or, as is the case with the print edition of Encyclopaedia Britannica, are in the process of liquidation. Some might feel nostalgic for the time when they actually flipped through those hefty and dusty tomes, but overall it seems that humanity has placed its bet on the younger, leaner, and more efficient offspring.

Still, there's something peculiar about this failure of our collective imagination to unthink "the Internet." It is no longer discussed as something contingent, as something that can go away; it appears fixed and permanent, perhaps even ontological—"the Internet" just is and it always will be. To paraphrase Fredric Jameson on capitalism, it's much easier to imagine how the world itself would end than to imagine the end of "the Internet." Of course, some claim that they can still imagine what it's like to go without "the Internet" and its toys for a week or two.
What they don't realize is that this experience of the "offline" is also profoundly affected by the experience of the "online"; that we think of technology through the lens of this bifurcation between the two is also a contingent fact of history, not a God-given fact of nature. It is possible to think about activities like search and social networking without positing any such split between two seemingly different worlds. But even if we bracket concerns over this bifurcation, such withdrawal from "the Internet" is not the same as imagining a completely different world—a world where withdrawal itself is no longer required, for the coveted object itself is no longer available. A world in which there is no "Internet" to withdraw from eludes our creative faculties.

Even more peculiar is the fact that our smartest technologists—the guys who basically see the future in their bathroom mirror every morning—are equally helpless in this endeavor. These techies, who worship the god of creative destruction, pray at the altar of innovation, and see industries come and go without shedding a tear, might be spending their weekends mining asteroids and jogging on other planets—but even they can't imagine how "the Internet" would die, let alone suggest what might succeed it. Their predictive models can anticipate and simulate the odds (and probably the consequences) of a global porcupine rebellion, but the basic outline of a world without cables, switches, and URLs still remains beyond their computing abilities.

Was it always like this? Could the Victorians imagine life after the telegraph or the steam engine? Could Marconi and his disciples imagine life after radio? Could the people of 1950s America imagine life after television? Could the French imagine life after Minitel? Science fiction and Utopian literature of those eras do contain many a fine testament to that effort. Of course, one might counter, such analogies are imperfect, unfair even. For one, radio and television are still with us, and only in June 2012 did the French finally pull the plug on Minitel. Besides, radio, television, even the telegraph—for what is e-mail if not a better telegraph?—have reinvented themselves online. But this only adds confusion to our inquiry, for now that most other technologies are mediated by "the Internet," it's even harder to imagine how the whole enterprise might be supplanted by something else. If "the Internet" goes, it seems, the entire armament of our technologies—all those artifacts on display in our museums of science and technology and in our history textbooks—would go with it.

But perhaps we can't imagine life after "the Internet" because we don't think that "the Internet" is going anywhere. If the public debate is any indication, the finality of "the Internet"—the belief that it's the ultimate technology and the ultimate network—has been widely accepted. It's Silicon Valley's own version of the end of history: just as capitalism-driven liberal democracy in Francis Fukuyama's controversial account remains the only game in town, so does the capitalism-driven "Internet." It, the logic goes, is a precious gift from the gods that humanity should never abandon or tinker with. Thus, while "the Internet" might disrupt everything, it itself should never be disrupted. It's here to stay—and we'd better work around it, discover its real nature, accept its features as given, learn its lessons, and refurbish our world accordingly.
If it sounds like a religion, it's because it is. This very notion of "the Internet" is on display when Google's Eric Schmidt, for example, says that "policymakers should work with the grain of the Internet rather than against it," or when Rebecca MacKinnon, a prominent commentator on digital politics, notes that "without a major upgrade, [our] political system will keep on producing legal code that is Internet-incompatible." It's the same notion of "the Internet" that popular technology blogger and author Jeff Jarvis invokes when, discussing Germans' complex feelings about privacy, he writes of a "nagging fear Germans harbor that their heritage is coming into fundamental conflict with internet culture—with the future."

All these thinkers take "the Internet" to be fixed and unified, meaningful and didactic, powerful and unconquerable. And, as Jarvis puts it, it's "the future." In a similar vein, popular technology investor Paul Graham writes, "Web 2.0 means using the Web the way it's meant to be used. The 'trends' we're seeing now are simply the inherent nature of the Web emerging from under the broken models that got imposed during the Bubble." "The Internet," thus, is believed to possess an inherent nature, a logic, a teleology, and that nature is rapidly unfolding in front of us. We can just stand back and watch; "the Internet" will take care of itself—and us. If your privacy disappears in the process, this is simply what the Internet gods wanted all along.

Perhaps one last example of this quasi-religious sentiment about "the Internet" would suffice. David Post, one of the early champions of the idea that "the Internet" represents a unique and unprecedented stage in human history, argues that "the Internet" might be propelled by laws and regulations as firm as those of nature. Rejecting Lessig's reasonable claims that "the Internet" has no inherent nature or purpose and that we should try to avoid an "is-ism" mentality whereby we believe that "the Internet" will always be as free as it is now, Post sees "the Internet" as something preternatural and autonomous. (Curiously, Lessig's point here is the exact opposite of what he wrote in the New Republic about the network "not going away," but this is hardly surprising: Lessig's academic self knows that there's nothing fixed about "the Internet," but his activist self also knows that claiming that it's here to stay will make his advocacy much easier.) This is what Post actually writes:

"There are laws of Nature. . . . There are laws of growth, and scale, and organization, reasons why website visits, Internet connectivity, the population of cities, and the frequency of words all follow the same pattern, reasons why the one global network is the one with end-to-end design and distributed routing, though we probably understand those laws . . . not very well. And they matter. . . . We can shake our fists at the law of gravity all we like, but if we don't pay close attention to it when building our bridges, they will all fall down. . . . It is not, as Lessig would have it, 'is-ism' to keep looking for them and trying to understand how they work."

Eric Schmidt's "grain of the Internet," in other words, is real—for all we know, it might be as real as gravity—and we should keep looking for it by peeking deep inside "the Internet's" soul. How did we reach a point where "the Internet" is presumed to develop according to laws as firm and natural as those of gravity?
That a serious legal academic can write this without anyone suspecting him of being slightly delusional is one indication of just how uncritical discussions about "the Internet" have become.

The Faux Didacticism of "the Internet"

Does "the Internet" have a message to impart to humanity? Does it contain important lessons that we all need to heed and perhaps incorporate into our institutions? Does it help us rediscover long-forgotten truths about human nature? More and more people—not just ivory tower intellectuals but also regular soldiers in the Internet war, people who join Anonymous and vote for representatives of Pirate Parties in elections—are answering these questions in the affirmative. It's this propensity to view "the Internet" as a source of wisdom and policy advice that transforms it from a fairly uninteresting set of cables and network routers into a seductive and exciting ideology—perhaps today's uber-ideology.

Science and technology writer Steven Johnson has offered perhaps the sharpest summary of this ideology in Future Perfect. For Johnson, "the Internet" is much more than just a cheap way of sending Skype messages or adding hilariously unfunny captions to photos of cats. Rather, it's an intellectual template for how society itself should be reorganized; it's not "the solution to the problem, but a way of thinking about the problem." Thus, writes Johnson, "one could use the Internet directly to improve people's lives, but also learn from the way the Internet had been organized, and apply those principles to help improve the way city governments worked, or school systems taught students." Not surprisingly, he believes that in their political significance, major developments in Internet history are comparable to, say, the French Revolution or the fall of the Berlin Wall. Hence, "the creation of ARPANET and TCP/IP . . . should also be seen as milestones in the history of political philosophy." To Hobbes and Rawls, now we must add ARPANET and TCP/IP.

Why? Well, Johnson believes that sites like Wikipedia and Kickstarter, a popular fund-raising platform for aspiring artists and geeks, work because they embed the decentralizing spirit of "the Internet"—the same spirit that runs through and regulates its physical networks. And it's, of course, the spirit of victory: everything that "the Internet" touches automatically gets better, smarter, prettier. "Slowly but steadily, much like the creation of the Internet itself, a growing number of us have started to think that the core principles that governed the design of the Net could be applied to solve different kinds of problems—the problems that confront neighborhoods, artists, drug companies, parents, schools," writes Johnson.

What does this mean in practice? Take just one example: Johnson thinks that a site like Kickstarter offers a much better model for funding the arts than, say, the National Endowment for the Arts (NEA); in fact, he thinks it's just a matter of time before Kickstarter overtakes the NEA: "The question with Kickstarter, given its growth rate, is not whether it could rival the NEA in its support of the creative arts. The new question is whether it will grow to be ten times the size of the NEA." Elsewhere in the book, Johnson writes that he doesn't want to scrap the NEA, only to make it work more like Kickstarter; what's most interesting about his argument, however, is that he doesn't spell out why the NEA should become like Kickstarter and what makes the latter's model superior.
Perhaps Johnson simply doesn't have to, as his audience can anticipate the argument that is implied: the Kickstarter approach is simply better because it comes from "the Internet." This odd and shortsighted claim focuses on the mechanics of the platform rather than on the substance of what institutions like the NEA actually do.

Kickstarter works as follows: creators—they can be start-ups that want to build a cool app or new gadget or artists who want to make a music video—post their fund-raising appeals on the site; if and when enough people chip in, the creators get the money to embark on their project. Many projects don't meet the fund-raising target and get no money; those that do sometimes fail to deliver what was expected (Kickstarter's most famous failed alumnus is Diaspora, an ill-fated start-up that wanted to take on Facebook and offer users better privacy; started in April 2010, the project had collapsed by August 2012, with one of its co-founders committing suicide). Some projects do deliver, but most are at the mercy of "virality"; if the online crowd finds their proposal appealing, money does pour in—often much more than was asked for originally.

Now, this is a very different model from the top-down hierarchical model of the NEA, in which a bunch of artsy bureaucrats make all the decisions as to what art to fund. But the fact that Kickstarter offers a more efficient platform for some projects to raise more money more effectively—bypassing the bureaucrats and increasing participation—does not mean it will yield better, more innovative art or support art that, in our age of cat videos, might seem old-fashioned and unnecessary. Sites like Kickstarter tend to favor populist projects, which may or may not be good for the arts overall. The same logic applies to other governmental and quasi-governmental institutions as well: if the National Endowment for Democracy worked like Kickstarter, it would have to spend all its money on funding projects like the highly viral Kony 2012 campaign, which, all things considered, may only be of secondary importance to both democracy promotion and US foreign policy as a whole.

Besides, it's not at all obvious whether this new system will promote fairness and justice. Contrary to what most Internet cheerleaders think, virality is hardly ever self-generated and self-sustaining. Memes are born free, but everywhere they are in chains—those of PR agencies and freelancing solo artists. Both have perfectly adapted to this new digital world and found ways to reverse engineer virality by manipulating the economics of social media. They know how to feed the right stuff to bloggers to generate buzz on important, even if niche, platforms; and since so many in the professional media now read sites like Gawker and The Huffington Post, that buzz extends their reach far beyond social media. Thus, while Kickstarter might give us the illusion of more efficient distribution of arts funding than the NEA, it would be naive and very shortsighted not to take note of the fact that we'll also get—and this is much more important than the efficiency of the platform—very different art.

How so? Danish academic Inge Ejbye Sørensen has studied how crowdfunding has affected documentary filmmaking in the United Kingdom.
Britain stands out among other countries in that most of its documentaries are produced and fully funded by one of its four main broadcasters (BBC, ITV, Channel 4, and Channel 5), which dictate the terms to the filmmaker. In this context, crowdfunding and Kickstarter seem liberating, even revolutionary. But, as Sørensen points out, this revolution has a few mitigating circumstances.

First, Kickstarter might produce many new documentaries, but the odds are that they will be of a very particular kind (this critique also applies to other sites in this field, like indiegogo.com, sponsume.com, crowdfunder.co.uk, and pledgie.com). They are likely to be campaign- and issue-driven films in the tradition of Super Size Me or An Inconvenient Truth. Their directors seek social change and tap into an activist public that shares the documentary's activist agenda. A documentary exploring the causes of World War I probably stands to receive less online funding—if any—than a documentary exploring the causes of climate change.

Second, some films have significant start-up costs (think drama documentaries or history movies) or involve considerable legal risks that may be hard to price and account for. Say you are making a film that includes an undercover investigation of the oil industry. When you have the BBC's lawyers backing you up, you'll probably take many more risks than if you are relying on crowdfunding. But if Kickstarter is your platform of choice, you'll probably forgo venturing into the thorny legal issues altogether.

Both of these arguments show the danger of viewing the nimble and crowd-powered Kickstarter as an alternative (rather than a supplement!) to the behemoth that is the BBC, or in the American context, the NEA. This might fit quite nicely with David Cameron's rhetoric of the "Big Society"—whereby individuals take on the roles formerly performed by public institutions—but it would be a mistake to treat the two approaches as producing the same content through different means. Some content is simply unlikely to get crowdfunded.

Johnson, however, does not want to make his case for reforming the NEA on aesthetic grounds; for him, Kickstarter is better because it's more Internet-like and more participatory. That these may be irrelevant considerations when it comes to funding art does not seem to bother him. This is Internet-centrism at work: the putative values of "the Internet"—be it openness or participation—become the prized yardstick for assessing every field of human endeavor, regardless of its own goals and standards.

But there's one more problem. Defining Internet values is notoriously tricky. Take someone like Internet pundit Jeff Jarvis, who in his first book, What Would Google Do?, argues that other institutions—both for-profits and nonprofits—should copy Google's business philosophy. His reasoning goes like this: "The Internet" seems open, public, and collaborative. Google seems so, too, and it's prospering. Hence its values are openness, publicness, and collaboration; these are also Internet values, and they bring profits and efficiency. So, reasons Jarvis, "the Internet" tells us something very important about Google, and Google tells us something very important about "the Internet." This logic is so circular, there's no way for pundits like him ever to be wrong. But as the last few years show, Google is not driven by an ideology of either openness or publicness; at this point it seems to care only about market competition.
When it felt so far ahead of Facebook and Apple, it built open platforms and launched unprofitable but useful services. But those days are long gone: it has shut down many of the platforms celebrated by Jarvis and become much more cautious, now charging for some services and eliminating others altogether. In 2010 it all but gave up on its nominal commitment to "openness" when it struck a deal with Verizon regarding traffic management on mobile networks. It's true that for a very long time Google stayed out of the content business—positioning itself as a platform for accessing the content of others—but today it owns the restaurant guide Zagat and the travel guide Frommer's and actively serves its own content in search results. Is Google less of an Internet company today than it was when Jarvis published his book? Or could it be that it never actually had any genuine lessons to impart about "the Internet" and that such lessons are always transitory and in flux?

Or take Wikipedia, which is easily the solutionists' favorite template for rebuilding the world; books with titles like Wikinomics and Wiki Government are a testament to the role this one website plays in solutionists' imaginations. The problem with using Wikipedia as a model is that nobody—not even its founder, Jimmy Wales—really knows how it works. To assume that we can distill life-changing lessons from it and then apply them in completely different fields seems arrogant to say the least. Worst of all, Wikipedia is itself subject to many myths, which might result in Wikipedia-inspired solutions that misrepresent its spirit.

"The bureaucracy of Wikipedia is relatively so small as to be invisible," proclaims technology pundit Kevin Kelly, confessing that "much of what I believed about human nature, and the nature of knowledge, has been upended by the Wikipedia." But what did Kelly believe before Wikipedia? Kelly writes that "everything I knew about the structure of information convinced me that knowledge would not spontaneously emerge from data, without a lot of energy and intelligence deliberately directed to transforming it." What a reasonable thing to have believed! Only there's no reason to stop believing this today. Wikipedia, as it turns out, has a huge—not small—bureaucracy; its rules cover the most arcane issues (just consider WP:MOSMAC, which regulates how Wikipedia articles should discuss "the Republic of Macedonia and the Province of Macedonia, Greece"). One estimate from 2006 posited that discussions about Wikipedia's governance and editorial policies—the stuff of which bureaucracy is made—constituted at least one-quarter of the whole site. Its bureaucracy is anything but small—and to start applying Wikipedia's lessons before actually grasping them is a recipe for disaster. That it's invisible to the likes of Kelly only means that they are looking at it the wrong way; the task of sound technological analysis—which is not beholden to Internet-centrism—is to make the seemingly invisible visible.

The best explanation of Wikipedia is what its own insiders like to say: Wikipedia works in practice but not in theory. It's a great line, and in addition to being funny, it also shows that we simply have no adequate theories to understand Wikipedia. Perhaps we shouldn't even strive for such theories, as they will inevitably gloss over the rich world of practices and mediators that make it tick.
There's nothing wrong with being humble and acknowledging the limitations of our understanding. Obviously, that something doesn't fit a grand theory of "how the Internet works" does not make it ineffectual, as the Wikipedia example shows so well. Given just how limited our knowledge of Wikipedia is, to expect that we can magically "pull a Wikipedia" whenever confronted with a burning issue is dangerously naive.

If Internet Theorists Were Bouncers

Internet-centrism has found its way into regulatory thinking as well. One of the most attractive contemporary theories of Internet regulation, advanced by Harvard's Jonathan Zittrain, revolves around the idea of generativity. It starts from the premise that openness of the platform is the main reason why "the Internet" has unleashed so much innovation. On "the Internet," no one has to ask for permission to start a new service. Google could build a search engine without negotiating with ISPs. Wikipedia could build an encyclopedia without negotiating with the likes of Microsoft or AOL. Skype could build its impressive software without negotiating with AT&T. As an explanation of what has happened in the last two decades, Zittrain's is a very elegant and pithy theory. However, generativity also prescribes how things should be done in the future: if we want this great wave of innovation to continue, the argument goes, we should maintain—even proactively defend—the openness of "the Internet." Any development that introduces a set of gatekeepers into "the Internet's" ecosystem—like the recent fascination with apps for smartphones and tablets—is to be scrutinized and, in most cases, resisted, for the new gatekeepers, greedy as they are, might not have "the Internet's" best interests in mind.

In my book, a hallmark of a good theory of technological change and innovation is whether it can predict—or at least anticipate—how incumbent technology itself would be disrupted. It doesn't seem extraordinary to expect that theorists of innovation would at least be prepared for the eventual possibility that whatever incumbent technology they are celebrating at the moment might itself get undone by the very same forces of disruption that made it incumbent in the first place. In other words, if we were to travel back in history and apply what we know about "the Internet" to write better rules for regulating the telephone industry, we would probably put more emphasis on the possibility that the telephone would not be around forever. The same goes for the telegraph, the radio, and even television. If they had a second chance, good theorists of innovation in each of those industries would spend far more time trying to anticipate the death of their object of inquiry—be it the telegraph, the radio, or television—rather than articulating criteria and conditions that could allow those objects to live forever. This, at any rate, is what follows if one assumes that innovation should be platform independent and that maximizing it across all platforms—including future, unanticipated ones—should be the ultimate goal of effective regulation.

But the theory of generativity doesn't preoccupy itself with the thorny subject of how "the Internet" itself will die—not least because Zittrain, under the sway of Internet-centrism, badly wants "the Internet" to be eternal. His theory is a recipe for how "the Internet" can live forever.
This, of course, is never expressed directly, for Zittrain assumes—quite correctly—that his geeky audience shares his desire to make its fetish object immortal. However, we shouldn't mistake our infatuation with "the Internet" for a genuine theory of innovation. Any robust theory, instead of treating "the Internet" like a permanent gift to civilization, would find a way to compare the innovation potentials of many different platforms and technologies, including those that might eventually threaten to supplant "the Internet." Of course, there may be other strong social, political, and even aesthetic concerns about the challenge that the rise of apps presents to digital "forms of life." However, to claim that Apple—one of Zittrain's culprits—is bad for innovation because it's bad for "the Internet" is like claiming that "the Internet" is bad for innovation because it's bad for the telephone. Well, it might have been bad for the telephone—but when did the preservation of the telephone become a lofty social goal? Such teleological Internet-centrism should have no place in our regulatory thinking. But, alas, the preservation of "the Internet" seems to have become an end in itself, to the great detriment of our ability even to imagine what might come to supplant it and how our Internet fetish might be blocking that something from emerging. To choose "the Internet" over the starkly uncertain future of the post-Internet world is to tacitly acknowledge that either "the Internet" has satisfied all our secret plans, longings, and desires—that is, it is indeed Silicon Valley's own "end of history"—or that we simply can't imagine what else innovation could unleash.

The irony is that Zittrain's theory of generativity, while very critical of gatekeepers like Apple, is itself a gatekeeper. While generativity green-lights good, reliable, and predictable innovation, the kind that promises to stay within the confines of "the Internet" and leave things as they are, it frowns upon—and possibly even blocks—the unruly and disruptive kind that might start within "the Internet" but eventually transcend, supplant, and perhaps even eliminate it. Zittrain attempts to universalize what he takes to be the operating principles of "the Internet" and present them as objective, eternal, and uncontroversial foundations on which innovation theory itself could run from now on. Thus, if openness has supposedly been one of the defining features of "the Internet," it gets magically transformed into an objective benchmark for the future of innovation. Aggressive expansion into other domains is one of the hallmarks of Internet-centrism; it colonizes entire theories and domains, imposing its own values—openness, transparency, disruption—on whatever it touches.

However, if we put the well-being of "the Internet" aside, absolutely nothing about Apple's hands-on approach to running its app store or controlling its gadgets suggests that it's bad for innovation. Its approach may not be "open"; it may not even be "Internet compatible." But these criteria only make sense in a world where the well-being of "the Internet" itself is the alpha and omega of everything, the summum bonum. This may even be a world in which Jonathan Zittrain and many other geeks would actually want to live; ideologies do have a tendency to present other worldviews as irrelevant or impossible.
In reality, though, control and centralization are not inherently antithetical to innovation; if we have come to believe the opposite, then "the Internet" is partly to blame.

Woody Allen once wrote a hilarious satire titled "If the Impressionists Had Been Dentists," written in the form of a letter from Vincent van Gogh to his brother ("Theo . . . Mrs. Sol Schwimmer is suing me because I made her bridge as I felt it and not to fit her ridiculous mouth! . . . She claims she can't chew! What do I care whether she can chew or not!"). The world of Internet theory still awaits its Woody Allen, but an analogous satire—something along the lines of "If Internet Theorists Had Been Nightclub Bouncers"—would be quite useful. If Jonathan Zittrain's theory is any indication, his Apple nightclub would be run as an oasis of openness—what does he care that some patrons show up drunk or carrying drugs and weapons?—and this openness would make everyone inside the club happy. It's a nice theory, but there is a reason why real clubs don't preach the ideology of radical openness: it spoils the clubbing experience. But, Internet theorists might counter, what do we care about the clubbing experience if there's such a great atmosphere of openness inside the club? Well, good luck to them.

Zittrain's thought is a manifestation of a broader paradox that has become ubiquitous in our Internet debates. Rare is a reader of technology blogs or an attendee of technology conferences who has not heard the admonition that some dark, evil force—Hollywood, the National Security Agency, China, Apple—is about to "break the Internet." Technologists and geeks—the group that spends the greatest amount of time philosophizing about "the Internet" and its future—constantly remind us that "the Internet" is unstable and might fall apart. Save for the occasional proclamation that the world will stay as it is minus all the fun and convenience, no one seems to know what awaits us once "the Internet" does break. But break it will—unless some drastic change is taken to maintain its current state. Hence the greatest irony of all: one day we are told that "the Internet" is here to stay, and we should reshape our institutions to match its demands; another day, we are told that it's so fragile that almost anyone or anything could deal it a lethal blow.

It would be tempting to write this paradox off as a mere contradiction in geek logic. Or, as in Lessig's case, it might be just a rhetorical trick, a clever ruse that bolsters some important activist cause—say, copyright reform, net neutrality, or opposition to surveillance and censorship—while also nudging our seemingly obsolete political and legal institutions to experiment with technology and innovation. Such an interpretation certainly seems plausible. But it's also plausible that we have become utterly confused about "the Internet" and its presumed nature, that we are dead wrong about its finality, that the very idea of "the Internet" has impoverished our thinking about the world, and that we are worshiping false gods and ideologies. So, which is it?

Of Epochs and Epochalisms

Before examining Internet-centrism's corrosive influence on solutionism, a few words about its origins are in order.
Even though its leading proponents may not be aware of it—being too young or inexperienced with books—Internet-centrism, for all its boasting about the truly revolutionary and exceptional nature of the modern era, overlaps with and feeds on several earlier fetishes and discourses about technology, information, innovation, and digitality. Plenty of books on the inevitable arrival of the information age and the postindustrial era, on the virtues and perils of automation, and on the transformational potential of cybernetics and artificial intelligence have all prepared the grounds—and our minds—for the current discussion. To present the discourse on Internet exceptionalism as exceptional would itself be a great error, for it's anything but.

Technological amnesia and complete indifference to history (especially the history of technological amnesia) remain the defining features of contemporary Internet debate. As British historian of technology David Edgerton points out, "When we think of information technology we forget about postal systems, the telegraph, the telephone, radio and television. When we celebrate on-line shopping, the mail-order catalogue goes missing. Genetic engineering, and its positive and negative impacts, is discussed as if there had never been any other means of changing animals or plants, let alone other means of increasing food supply." Only a hopelessly brave and optimistic soul would conclude that as "the Internet" comes to dominate and overtake many of these earlier debates, our respect for historical detail will somehow magically increase. If anything, the "Internet turn" in the technology debate will only aggravate this forgetfulness.

Of course, if one's knowledge of history is reduced to tweet-length CliffsNotes, it's natural to feel triumphant and unique, to believe one is living in truly exceptional times—an intellectual fallacy I call "epochalism." It's not a preserve of Internet optimists only; the pessimists love epochalism as well. After all, their criticisms matter only if the phenomena they are criticizing are seen as unprecedented. Thus, a self-proclaimed Internet pessimist like Andrew Keen can proclaim starkly that the growth of social media is "the most wrenching cultural transformation since the Industrial Revolution" without bothering to produce much evidence. Keen simply presumes that the unprecedented scale of today's transformations is self-evident—a hallmark assumption of epochalism.

By presuming that we are living through revolutionary times, epochalism sanctions radical social interventions that might otherwise attract a lot of suspicion and criticism. But in truly revolutionary times, anything goes; why not model politics on Wikipedia after all? All this talk about revolutions is just a clever way of legitimizing radical agendas that few would accept in normal times. The paralyzing influence of epochalism induces passivity and limits our responses to change, for the unfolding trends are perceived to be so monumental and inevitable that all resistance seems futile. It blinds us to the banal and highly fleeting nature of the "revolutionary" trends under consideration. After all, it's much easier to proclaim yet another digital revolution—and to coin a requisite buzzword—than to wait and see if the observed change, instead of being a complete overthrow of established practices and principles, is just a shift in order and magnitude.
But the trickery doesn't stop here, for the novel buzzword—coined only because we are apparently on the brink of a new era—is fed back into the system as a definite proof that the era is indeed new. Such circularity—whereby "the Internet" is seen as revolutionary because of Factor X, but Factor X is seen as revolutionary because of "the Internet"—is silly, but in an era of profound and revolutionary change, this passes for deep insight.

Take the fake novelty of a term like "crowdsourcing"—supposedly, one of the chief attributes of the Internet era, an idea that gave us that great source of didactic knowledge, Wikipedia. "Crowdsourcing" is certainly a very effective term; calling some of the practices it enables "digitally distributed sweatshop labor"—for this seems like a much better description of what's happening on crowdsource-for-money platforms like Amazon's Mechanical Turk—wouldn't accomplish half as much. But effective euphemisms come with trade-offs; they don't always capture the historical complexity of the processes they purport to describe. Didn't the British government turn to "crowdsourcing"—in 1714!—to solve the "longitude problem" and solicit proposals for how to better navigate at sea? Didn't the Smithsonian Institution—in 1849!—turn to a network of over six hundred volunteer observers (in Canada, Mexico, Latin America, and the Caribbean) to submit monthly weather reports (published in 1861 as the first of a two-volume compilation of climatic data)? Didn't Toyota hold a contest—in 1936!—to redesign its logo, only to receive 27,000 entries in return? Didn't Zagat turn to a form of "crowdsourcing" to generate its restaurant reviews long before Yelp made online reviews fashionable? Granted, today it's much easier and cheaper to do such things, but a revolution in knowledge gathering it isn't—not if we want the word "revolution" to retain any meaning at all.

This message, however, is lost on our Internet pundits, who think that "the Internet" has fundamentally altered how knowledge is produced—nay, it has even altered what counts as knowledge. This, at any rate, is what David Weinberger of Harvard's Berkman Center argues in his recent book Too Big to Know. Like Eric Schmidt, Weinberger has seen the grain of "the Internet" and never looked back since. "Knowledge is taking on the shape of the Net—that is, the Internet," he proclaims, with unabashed enthusiasm. "Knowledge now lives not just in libraries and museums and academic journals. It lives not just in the skulls of individuals. Our skulls and our institutions are simply not big enough to contain knowledge. Knowledge is now property of the network, and the network embraces businesses, governments, media, museums, curated collections, and minds in communication."

This is heady stuff, but it couldn't be more wrong. It seems that Internet-centrism turns our most insightful analysts into Martians, who have just landed on Earth and have a hard time imagining how things are run over here. So, in their doomed quest to understand these quirky humans, they venture into a modern university, where they encounter professors, who spend hours coauthoring papers with strangers on other continents, browsing academic journals housed on servers miles away, giving Skype presentations at international conferences. "Ah," say the Martians, "we get it: this Internet thingy is the network that generates all your knowledge. Let's drink to that!"
Poor Martians: they'd never understand that the real knowledge-generating networks lie elsewhere—they tie together scholars, universities, conferences, computer servers, books, norms and practices, the phenomena they study and the tools and laboratories that allow them to do so. "The Internet" may be strengthening and occasionally weakening some of these networks—and it is certainly creating conditions for new networks to emerge—but it doesn't fundamentally change anything about what counts as knowledge or how it's made. By Weinberger's logic, we can also say that knowledge used to be the property of the airport or the post office—those did facilitate its production in the past—but that would be an insight far more trivial than the role Weinberger fashions for "the Internet." Contrary to his claim that "knowledge is now property of the network," knowledge has always been property of the network, as even a cursory look at the first universities of the twelfth century would reveal. Once again, our digital enthusiasts mistake impressive and—yes!—interesting shifts in magnitude and order for the arrival of a new era in which the old rules no longer apply. Or, as one perceptive critic of Weinberger's oeuvre has noted, he confuses "a shift in network architecture with the onset of networked knowledge per se." "The Internet" is not a cause of networked knowledge; it is its consequence—an insight lost on most Internet theorists.

But even more disturbing about Weinberger's account is that it seeks to carve "the Internet" from the complex sociotechnical relations that it embodies and to analyze it on its own terms, as if it were a widget that fell from the sky and hence has no history or connections of any sort. This is a recurring feature of modern Internet discourse—yet another instance of vulgar McLuhanism—for it allows its practitioners to juxtapose "the Internet"—in optimistic accounts, it's the avatar of everything modern and progressive; pessimistic accounts usually hold the very opposite—against some social force or group, be it the mainstream media, Hollywood, dictators, or dissidents. Only by severing "the Internet's" ties with its context and presenting it as a McLuhanesque "medium" does any kind of simplistic score keeping—that never-ending game of trying to determine whether "the Internet" is good or bad for one thing or another—become possible.

It's time to put an end to this score-keeping game, for it generates nothing but confusion. It enables Weinberger to write, "At the very same time [that "the Internet" is blamed for all sorts of problems], sites such as Politifact.com are fact-checking the news media more closely and publicly than ever before." Score one for "the Internet"? After all, Politifact.com is a site, and a site is something that belongs on "the Internet's" side of the ledger. This is plain silly: Politifact.com might be a website, but it's also a project of the Tampa Bay Times, a venerable newspaper operation. Yes, we should talk about the new forms of fact checking made possible by new technologies, but to imagine that somehow Politifact.com tells us something of interest about the nature of "the Internet"—assuming, for a moment, that such a nature exists—is dead wrong.

With Models Like This . . .

Weinberger's commitment to Internet score keeping points to one of the great dangers of relying on "the Internet" as a causal explanation.
Once commentators know what they want to say about the universe—that the world is flat, that knowledge is no longer contained in books, that Apple is bad for innovation, that dictatorships are crumbling everywhere, that no one reads serious fiction anymore—"the Internet" can always be invoked to provide a quick and easy (and invariably wrong) explanation. However, the ready availability of such Internet-driven explanations in itself needs to be explained. For both tech boosters and tech critics alike, "the Internet" is like George Soros in Glenn Beck's diagrams: once you plug it in, the great conspiracy suddenly makes sense. Fiction-wise, it's a brand new genre all in itself: a Webdunit.

Worse still, what many take to be original Internet theory—that is, a brave attempt to explain the world by accounting for the role of "the Internet" in it—is often just a derivative mishmash that borrows from some of the stalest, most banal approaches of modern political science and economics. If these approaches were not served with the tasty sauce of Internet-centrism, the explanations they generate would be questioned, opposed, and dismissed far more often. Alas, the conceptual novelty of "the Internet" as a field of inquiry, combined with the irresistible pull of Internet-centrism, renders the highly problematic areas of the underlying theoretical frameworks almost invisible.

Take Clay Shirky's Here Comes Everybody, which enjoys a cult status in geek circles as a seemingly original argument about the falling costs of collaboration. For much of his theoretical apparatus, Shirky draws on two sources: Susanne Lohmann's explanation of the 1989 protests in East Germany by means of rational-choice theory (from which Shirky borrows the notion of information cascades) and Ronald Coase's theory of the firm (from which Shirky borrows the notion of transaction costs). Alas, neither of them is an unambiguously good or neutral guide to understanding digital technologies once we liberate ourselves from Internet-centrism.

Like most scholars in the rational-choice tradition, Lohmann—whom Shirky misidentifies as a historian (she's a political scientist)—doesn't explain collective action in East Germany by attending to historical and cultural factors or tracing the emergence of new attitudes or ideologies. Such analysis requires far more extensive on-the-ground knowledge than most political scientists can boast. They have been trained to use data to build models—very much like those that failed to predict the collapse of the Soviet Union or anything of even minor significance in the last few decades—so their case studies are stripped of much local color by design. Thus, in order to explain the 1989 protests, Lohmann comes up with a comprehensive and mostly context-independent theory of information signals and incentives that allow people to synchronize their behavior; since the people in Lohmann's models are one-dimensional and ahistorical characters, a theory of information cascades works as well in Calcutta as it does in Cairo (which is to say that, beyond offering some banal generalizations, it mostly doesn't work at all). Thus, the theory goes, if people see other people who are already protesting in the streets, they will be inclined to join in, but only after the protests reach a certain calculable high point.
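To see just how spare the underlying machinery is, consider a minimal threshold-cascade sketch in the spirit of such models. It is an illustration of the generic logic only, not Lohmann's actual model or Shirky's argument; the population size and the threshold distribution are arbitrary assumptions.

# Illustrative sketch only (hypothetical parameters): each citizen joins the
# protest once the crowd already in the street reaches her personal threshold.
import random

def simulate_cascade(thresholds):
    # Iterate until no new citizen joins; return the final crowd size.
    protesting = 0
    while True:
        joined = sum(1 for t in thresholds if t <= protesting)
        if joined == protesting:
            return protesting
        protesting = joined

random.seed(1989)
population = 1000
thresholds = [random.randint(0, population) for _ in range(population)]
print(simulate_cascade(thresholds))  # a handful of diehards, or nearly everyone

The point of the sketch is how little it contains: no history, no culture, no institutions, just a distribution of thresholds and a signal about crowd size, which is precisely the thinness the following paragraphs take issue with.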
That such a bland approach might help us explain the revolutions of 1989—or of 2011, for that matter—is highly disputed, not least because by focusing on the information strategies of social movements, such analysis inevitably pays short shrift to the dynamics of the state institutions they were opposing. As Stephen Kotkin and Jan Gross, two distinguished historians of Eastern Europe, note about Lohmann's work, "Generalizing about social movements from the Communist experience can be hazardous because of the nature of the Communist state." Shirky, however, doesn't just generalize; he uses Lohmann's theory of information cascades to explain the political effects of "the Internet" everywhere: "the Internet" makes for better information signaling; as such, citizens should be expected to rebel more often. One of Shirky's pithy sayings—that "behavior is motivation that has been filtered through opportunity"—is as good a slogan as the bland theorists of rational-choice theory are ever likely to get.

How well does rational-choice theory explain political change? After several decades, the jury is still out, but its promoters have little to boast of. As Donald Green and Ian Shapiro note in their devastating critique of rational-choice theory—which also takes aim at Lohmann's work—its leading proponents "share a propensity to engage in method-driven research, and . . . this propensity is characteristic of the drive for universalism." In other words, since model building is their hammer and their only tool, the proponents of rational-choice theory see everything as a nail—and so they attempt to explain any kind of behavior, no matter how complex or culturally specific, using the dry talk of incentives and opportunities. It's no wonder that Clay Shirky can explain the behavior of anorexic girls, open-source communities, revolutionaries in East Germany, and rebellious teenagers in Belarus through one clean theory of information cascades. It's a theory that can explain everything—but in its generality and disregard for details, it actually ends up explaining nothing.

As Green and Shapiro point out, there's more to political behavior than just incentives and opportunities. In fact, "it may be shaped by enthusiasm for the collective objectives, attitudes toward leaders and prominent symbolic figures in the movement, and feelings of personal adequacy and obligation to participate." The choice of the explanatory model depends on what needs to be explained; it can't just follow from one's preference for building models or studying incentives and opportunities—even if digital platforms and technologies offer plenty of data on what one's opportunities and incentives may have been. To criticize Lohmann for her explanation of the 1989 protests or Shirky for his explanation of the political protests of the last few years is not to deny the importance of technology, let alone to question the need for protests, but to point out that another, richer, more intellectually stimulating way of discussing the same set of events is possible. To quote Green and Shapiro again, rational-choice theory turns "a dispassionate search for the causes of political outcomes into brief-writing on behalf of one's preferred theory. If one is committed—in advance of empirical research—to a certain theory of politics, then apparent empirical anomalies will seem threatening to it and stand in need of explaining away."
This is perhaps the best summary of what's so wrong with much of contemporary Internet-biased theorizing about politics. The models that Shirky and his disciples rely on, while nominally about "the Internet," do smuggle in a "certain theory of politics"—a theory of citizens responding to incentives and clinging together if they get the right signals and have the right tools—which is far too simplistic to account for political developments in much of the world. Nowhere does Shirky allude to the heavy intellectual baggage that comes with his methods; in fact, he just recasts Lohmann as a historian, so a theory of information cascades becomes something of a legitimate historical narrative rather than a reductionist model of human behavior. Any anomalies that do turn up—the findings that dictators are extremely smart in using the same technologies, or that people don't always respond to incentives, or that forces like nationalism and religion are exerting a profound and unpredictable influence on how people behave and are themselves transformed by technology—are simply dismissed as technophobic pessimism. In a true Hegelian dialectic spirit, Internet-centrism sustains itself through the binary poles of Internet pessimism and Internet optimism, presenting (and eventually consuming) any critique of itself as yet another manifestation of these two extremes. To challenge this ideology and this way of talking and thinking is to be immediately dismissed as too pessimistic or optimistic, as if no other type of critique were even conceivable. It's one of the hallmarks of Internet-centrism—at least as it manifests itself in the popular debate—that it brooks no debates about methodology, for it presumes that there's only one way to talk about "the Internet" and its effects.

Shirky's veneration of Ronald Coase's theory of the firm—and its accompanying discourse on transaction costs—may seem harder to dismiss, not least because Coase is a Nobel Prize-winning economist. References to Coase pop up regularly in the work of our Internet theorists; in addition to Clay Shirky, Yochai Benkler also draws heavily on Coase to discuss the open-source movement. There is nothing wrong with Coase's theories per se; in the business context, they offer remarkably useful explanations and have even helped spawn a new branch of economics. But here is the problem: thinking of a Californian start-up in terms of transaction costs is much easier than pulling the same trick for, say, Iranian society. While it seems noncontroversial to conclude that cheaper digital technologies might indeed lower most so-called transaction costs in Iran, that insight doesn't really say much, for unless we know something about Iran's culture, history, and politics, we know nothing about the contexts in which all these costs have supposedly fallen. Who are the relevant actors? What are the relevant transactions? In the absence of such knowledge about Iran, the natural reflex is to opt for the simplest possible model: imagine a two-way split between the government and the dissidents and then think through how their own transaction costs may have fallen thanks to "the Internet." This seems like a rather perfunctory way of talking about a rather complex subject. Cue Don Tapscott, a popular Internet pundit, proclaiming that "the Internet not only drops transaction and collaboration costs in business—it also drops the cost of collaboration in dissent, rebellion and even in insurrection."
Okay—but is no one else in these countries collaborating or engaging in transactions? Is it just the dissidents? Are the dissidents united? Or do they all have different agendas? Internet-centric explanations, at least in their current form, greatly impoverish and infantilize our public debate. We ought to steer away from them as much as possible. If doing so requires imposing a moratorium on using the very term "Internet" and instead going for more precise terminology, like "peer-to-peer networks" or "social networks" or "search engines," so be it. It's the very possibility that the whole—that is, "the Internet"—is somehow spiritually and politically greater than the sum of these specific terms that exerts such a corrosive influence on how we think about the world.

Hype and Consequences

Ahistorical thinking in Internet debates is too ubiquitous and persistent to be written off as ignorance or laziness. It's not that history books are not consulted because our Internet theorists are lazy; rather, it's that history itself is deemed irrelevant, for "the Internet" is seen as representing a distinct rupture with everything that has come before—a previously unreachable high point of civilization. Such "rupture talk"—an essential ingredient without which epochalism would be impossible—itself has a history. For example, University of Michigan historian Gabrielle Hecht sees similar themes and undertones in debates surrounding the advent of nuclear weapons and nuclear electric power in the 1950s, adding that both epochal discoveries were seen as marking "a historical break, the dawn of a new era—here, 'the nuclear age'—in which everything, everywhere, was forever different."

Under closer scrutiny, "rupture talk" appears everywhere in our Internet discourse. One can't think of a better example than the remark from Jonathan Zittrain at a 2011 conference on Internet governance in Toronto. Noting the challenges facing states, Internet companies, and their users, Zittrain asserted that there was a special reason for the audience to debate the issues of "the Internet," for, back in the day, "we wouldn't really have a conference here about electricity and the ways in which it could be used for good and evil." It's hard to think of a sentiment that better captures the naivete of Internet triumphalism and the utter contempt in which it holds the history of technology.

The debates over electricity were, in fact, as dramatic and bizarre as the debates we are currently having about "the Internet," its democratic potential, and its effect on our brains. How else to explain the publication of a book like The Silent Revolution, or the Future Effects of Steam and Electricity upon the Condition of Mankind—in 1852!—which promised "social harmony of humanity" on the basis of a "perfect network of electric filaments." Or what to make of the fact that Patrick Geddes, Petr Kropotkin, and other nineteenth-century thinkers believed that electricity would usher in a brand-new age of neotechnics, where, to quote French historian Armand Mattelart, "town and country, work and leisure, brain and hand" would be reconciled? Or what to do with Nazi engineers like Franz Lawaczeck, a founding father of the National Socialist engineers' association, who believed that the Third Reich could promote small farms and businesses, thus encouraging a decentralization of society, by generating an abundance of cheap electricity?
This is not to mention the complex and controversial history, itself full of protracted battles and rancorous debates, over the physical infrastructure that made electricity widely available. Only by papering over and suppressing such history can we see "the Internet" as unique and exotic.

It's not that our Internet thought leaders are insincere or inclined only to say things that will secure them better consulting projects—even though, occasionally, this seems like a factor. Rather, they themselves believe their own epochalist rhetoric. This, as we'll see later in the book, explains both the religious zeal with which they embark on and justify their quest to ameliorate the human condition and their lack of empathy for industries and institutions that are currently in crisis. Ruptures, after all, often involve sacrifices—or, as Clay Shirky likes to say, "it's not a revolution if nobody loses."

In order to be valid, any declaration of yet another technological revolution must meet two criteria: first, it needs to be cognizant of what has happened and been said before, so that the trend it's claiming as unique is in fact unique; second, it ought to master the contemporary landscape in its entirety—it can't just cherry-pick facts to suit its thesis. Under these conditions, very few of the contemporary declarations about the profound revolutionary impact of "the Internet" would survive close scrutiny.

The examples are numerous, but perhaps one will suffice. Like many other commentators on how young people use technology, Don Tapscott and Anthony Williams, authors of Wikinomics and its sequel, Macrowikinomics, argue that "the Internet" has produced an entirely new generation—the so-called digital natives. Tapscott and Williams find so much to love about these chaps! They are "bringing a new ethic of openness, participation, and interactivity to workplaces, communities, and markets." Moreover, "rather than being passive recipients of mass consumer culture, the Net generation spend time searching, reading, scrutinizing, authenticating, collaborating, and organizing (everything from their MP3 files to protest demonstrations)." They are "a generation of scrutinizers" who are "more skeptical of authority as they sift through information at the speed of light by themselves or with their network of peers." Best of all, "today young people are authorities on the digital revolution that is changing every institution in society."

How much of this is true? It's hard to say. Several studies show that, of all age groups, young people tend to be least informed about many aspects of digital culture. For example, a 2010 study that investigated what users know about online privacy found that "among all age groups, higher proportions of the eighteen-twenty-four-year-olds had the poorest understanding of the meaning of the privacy policy label and the right of companies to sell or share their data with other firms." As one of the study's authors put it, "The online savvy many attribute to younger individuals (so-called digital natives) doesn't appear to translate to privacy knowledge." In other words, it's not that young people don't care about privacy—they do—they just don't have the digital savvy that Tapscott and Williams attribute to them. These conclusions are echoed in a recent study from the European Commission that also found young people lacking in many digital competencies.
A 2009 empirical study of students at five British universities found that "it is far too simplistic to describe young first-year students born after 1983 as a single generation. . . . [They are] not homogenous in [their] use and appreciation of new technologies and . . . there are significant variations amongst students that lie within the Net generation age band." But conducting such studies, of course, is not as sexy as musing on "the digital revolution that is changing every institution in society." The latter probably pays better, too.

Gutenberg in the Kingdom of Geekistan

If what we are witnessing today is not an "Internet revolution," does that mean the changes we are observing are trivial and unimportant? This, after all, is the charge that Internet cheerleaders levy against their opponents: the curmudgeons, we are told, must be blind, for they keep denying the importance of faster and cheaper communications, despite the self-evident benefits. Alas, these cheerleaders fail to notice that, while there is only one way to deny the importance of the latest technologies, there are multiple ways to acknowledge it. A quest to tell a different story, composed of different characters and accents, requires no curmudgeonly passion to proceed.

Do historians of science who challenge the popular accounts of the scientific revolution deny that the discoveries of Newton and Galileo had something important to contribute to humanity? They certainly do not; instead, they acknowledge them in a different, subtler manner, pointing out that continuities between, say, seventeenth-century natural philosophy and its medieval predecessors were much more numerous than discontinuities. As historian of science Steven Shapin argues, "The past is not transformed into the 'modern world' at any single moment: we should never be surprised to find that seventeenth-century scientific practitioners often had about them as much of the ancient as the modern." Our contemporary framing of those changes as an event or series of events—as a well-contained "revolution" with start and end dates—is a relatively recent phenomenon; the very phrase "scientific revolution" was probably coined by philosopher Alexandre Koyré in 1939.

Or consider historians of medicine who refuse to entertain the notion that the numerous changes that happened in the science of bacteriology in the second half of the nineteenth century constitute a "bacteriological revolution"—still a popular term in the discipline. It would be silly to deny that important changes did happen in that period, but rejecting a label like the "bacteriological revolution" means acknowledging them in a different manner. For example, historian Michael Worboys, writing of the supposed bacteriological revolution in 1880s Britain, identifies four interlinked changes often invoked to support its existence. Having closely studied evidence for all four of these changes, Worboys concludes that "historians have read into the 1880s changes that occurred over a much longer period, and that while there were significant shifts in ideas and practices over the decade, the balance of continuities and changes was quite uneven across medicine."
Note that Worboys doesn't deny the importance of contributions made by Robert Koch or Louis Pasteur (well, "Pasteur" is probably more like it)—he just points out that the actual way in which these discoveries transformed the medical practice was much more convoluted; it was anything but predetermined or inevitable. Such subtle accounts that seek to acknowledge important changes without falling into the epochalist mode are very hard to find in Internet studies. Perhaps it's time to turn the tables on Internet pundits; instead of having them explain "the Internet," we must try to understand why they explain digital technologies in this particular way, with constant invocations of "the Internet" and its inherent nature. Why do rupture talk and revolutionary rhetoric tend to displace all other forms of analysis? Why do we label old activities as new, imagine incompetent youngsters as possessing complete mastery over technology, and believe that nothing matches "the Internet" in terms of the complexity of the debate that it triggers? Isn't it time to inquire into what we are not talking about when the debate itself—that is, the issues it attends to and the questions it formulates—is constructed in revolutionary terms?

No one exemplifies the temptations and limitations of the rupture talk better than Clay Shirky, so perhaps it's worthwhile to return to his theories. Shirky sees the digital revolution everywhere, but it's especially pronounced in the media business. "When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. . . . They are demanding to be lied to," he declares. Revolutions, according to Shirky, are unpredictable—they can only be diagnosed in real time. Hence, "the more serious you are about believing something is a revolution, the more you are confessing that you can't predict the future. That if it's a revolution it can't be predictable. And if it's predictable it can't be a revolution." It's a curious admission that insulates our techno-futurists from criticism. If they get things wrong—which they do all the time—they can write off such mistakes as the cost of doing business in our hyperrevolutionary times.

But Shirky, who also works as a consultant, knows the mantra of his trade: every crisis is to be recast as an opportunity. Thus, we hear that "nothing will work, but everything might. Now is the time for experiments, lots and lots of experiments." This, however, is Shirky the good cop—the one who thinks resistance is not futile. Shirky the bad cop, however, is not so sure and often succumbs to a weird form of digital fatalism, which borders on digital defeatism: "There is never going to be a moment when we as a society ask ourselves, Do we want this? Do we want the changes that the new flood of production and access and spread of information is going to bring about?" For Shirky the bad cop, everything has already been determined by the information gods; all we can do is accept the inevitable and enjoy the revolutionary ride. These days there's so much anxiety in so many industries that Shirky, using his bad cop/good cop routine, provides just the right mix of flagellation and counseling. But something else makes his style of rupture talk so appealing.
Oddly enough, it's his clever use of history—in a debate that is traditionally ahistorical—to establish some kind of equivalence between the invention of the printing press and the advent of "the Internet." And it's not just fake history of East Germany, which is actually just rational-choice theory in disguise. References to the printing press are also ubiquitous in Shirky's writings. He dedicates several pages of his Cognitive Surplus to drawing an explicit analogy between Gutenberg's invention and the proliferation of social media. Elsewhere, he notes, "We're collectively living through 1500, when it's easier to see what's broken than what will replace it." He argues, "It is too early to tell whether the Internet's effect on media will be as radical as that of the printing press. It is not too early to tell that there is nothing that happened between 1450 and now that comes close."

But why not? Take the very opposite perspective—that "the Internet" changes nothing. As historian Marshall Poe puts it: "It's not much of an exaggeration to say that the Internet is a post office, newsstand, video store, shopping mall, game arcade, reference room, record outlet, adult book shop and casino rolled into one. Let's be honest: that's amazing. But it's amazing in the same way a dishwasher is amazing—it enables you to do something you have always done a little easier than before." This seems to downplay some of the structural changes that have happened in the last few decades, but it's not self-evident why the Shirky-style triumphalist explanation offers a more accurate interpretation than Poe's. The shifts triggered by the proliferation of digital technologies must be investigated through a careful empirical and historical analysis; we can't just claim that some glorious event in the past—whether it's the invention of printing or the revolutions of 1989 or 2011—is functionally equivalent to the contemporary situation.

Still, the idea that "the Internet" is the new printing press seems to have hijacked the public imagination. It's one of the core precepts of Internet-centrism. Thanks, in part, to Clay Shirky, Gutenberg's invention has now become one of the original myths of "the Internet"—never mind the more than five hundred years in between. Two recent books—neither written by a historian—explicitly present Gutenberg as the geek extraordinaire. John Naughton, a technology columnist for the Observer, has penned a book with the self-explanatory title From Gutenberg to Zuckerberg. Gutenberg, we learn, "must have been an archetypal geek," who had to deal "with the early stirrings of venture capitalism, an experience at least as traumatic as anything encountered by Silicon Valley hopefuls five centuries later because it left him without ownership of the thing that he had created." Not surprisingly, Naughton concludes that "by looking in more detail at the transformations that printing brought about, we can perhaps get an idea of where we should be looking for the longer-term impact of the Net."

Another book in this vein is a Kindle single by Jeff Jarvis with the even more self-explanatory title Gutenberg the Geek. According to Jarvis, Gutenberg, "possibly the world's first technology entrepreneur," should be seen as "the patron saint of Silicon Valley, for he used technology to create an industry—perhaps the genesis of industrialization itself—and to improve his world."
There's more: "Gutenberg—just like a modern-day startup—depended on exploiting new efficiencies, achieving scale, reusing assets, dividing specialized labor, and setting standards." In fact, "the parallels between his enterprise and those of Silicon Valley startups today is [sic] striking. He faced similar challenges and grappled with apparently timeless business dynamics. He, too, operated in a climate of disruption and, like his entrepreneurial descendants, caused profound change of his own."

Navigating the bogs of contemporary Internet hype, one has to be careful not to assume that such hype is itself unique to "the Internet." The printing press, for example, has long been useful to technology boosters—not least because we know how the print story ends: literacy, science, progress. Observe Daniel Boorstin, America's most overrated historian, writing in the late 1970s: "The democratizing impact of television has been strikingly similar to the historic impact of printing." Once Boorstin makes this dubious statement—have you watched television lately?—the rest follows quite naturally, with the kind of bombast that one could expect from Clay Shirky or Jeff Jarvis: "The era when television became a universal engrossing American experience, the first era when Americans everywhere could witness in living colors the sit-ins, the civil rights marches, was also the era of a civil rights revolution, of the popularization of protests on an unprecedented scale, of a new era for minority power, of a newly potent public intervention in foreign policy, of a new, more publicized meaning to the constitutional rights of petition, of the removal of an American President." The shorter Boorstin: move over, Martin Luther King Jr.—it was television, the natural successor of the printing press, that gave us the Civil Rights Act!

There may be other reasons why Gutenberg is in such high demand. Geeks and technologists have a very soft spot for the Protestant Reformation; the printing press plays an extremely important role in that narrative. Christopher Kelty, an anthropologist at the University of California, Los Angeles, who has studied geek cultures inside various open-source communities, has written brilliantly about this tendency in Two Bits: The Cultural Significance of Free Software. According to Kelty, many geeks, united by their common struggle against the Microsofts of this world, regularly exploit various usable pasts, myth-like stories that draw on historical events, not in order to remember the past but, rather, to make sense of the present and the future. The story of the Protestant Reformation—with its allegorical battles between Catholic and Protestant churches, laity, clergy, and high priests and the accompanying images of control and liberation—is one such usable past. Kelty notes that "the Protestant Reformation makes for good allegory because it separates power from control; it draws on stories of catechism and ritual, alphabets, pamphlets and liturgies, indulgences and self-help in order to give geeks a way to make sense of the distinction between power and control, and how it relates to the technical and political economy they occupy." This is why, in many a geek debate, the state is recast as the monarchy, large corporations as the Catholic Church, startups and programmers as Protestant reformers, and the laity as "lusers" and "sheeple."
Kelty believes that such stories are popular with geeks because they "explain a political, technical, legal situation that does not have ready-to-narrate stories."

From Bad Book History to Bad Blog History

But while Kelty's insights might explain why the Gutenberg stories spun by Shirky and Jarvis appeal to technologists and geeks, it's still not clear why such stories are misleading or inappropriate or why they circulate beyond the Kingdom of Geekistan. Neither Jarvis nor Shirky is a historian, so in discussing the impact of the printing press—which they think is comparable to the impact of "the Internet"—both turn to the same source: Elizabeth Eisenstein's landmark two-volume study The Printing Press as an Agent of Change, first published in 1979. Without understanding the limitations of Eisenstein's highly disputed account of the "revolution" that followed the invention of the printing press, it's impossible to make sense of contemporary claims for the significance of "the Internet," not least because the stability that her account lends to "the Internet" makes her a favorite source of Internet optimists and pessimists alike (Nicholas Carr draws on Eisenstein's work in The Shallows). Much as with rational-choice theory, what many fellow scholars believe to be rather problematic scholarship is presented as universally admired and entirely uncontroversial. To use Eisenstein as our guide to "the Internet" is to commit to a very particular way of thinking about digital matters.

Drawing heavily on the work of Marshall McLuhan, Eisenstein argues that the importance of printing in triggering all the subsequent social transformations had not yet been sufficiently credited (hence she dubbed it the "Unacknowledged Revolution"). But while trying to do justice to the role of the printing press in history, Eisenstein embraces a rather limiting view of print media, overemphasizing what she believes to be the inherent qualities of this technology: fixity (i.e., its ability to preserve texts that might otherwise get lost or badly damaged), ease of dissemination, and the tendency toward standardization. According to Eisenstein, the very technology of print endows texts with these new qualities—and the rupture is so significant that she elevates those qualities to the status of "print culture." The latter gives us the Reformation, the scientific revolution, the Big Mac, Steve Jobs, and LOLCats.

Many scholars have noted the limitations of Eisenstein's approach, which are extremely pertinent to the contemporary Internet debate. The first to ring alarm bells—in 1980, just a year after the book was published—was intellectual historian Anthony Grafton, who berated Eisenstein for pulling "from her sources those facts and statements that seemed to meet her immediate polemical needs." More problematic, in Grafton's view, was the fact that Eisenstein, in her quest to emphasize the radical nature of the break between the age of scribes and that of printers, minimized "the extent to which any text could circulate in stable form before mechanical means of reproduction became available." In other words, all these efforts to draw sharp distinctions between different cultures and ages smack of unabashed epochalism; many features of the "print culture" were in place—even if on a smaller scale—well before this culture sprang up out of nowhere.
More recently, literary scholar Michael Warner and historian Adrian Johns have offered much more devastating critiques of Eisenstein's account. Warner, in his The Letters of the Republic (1990), argues that the technology of printing should not be seen as lying outside of culture or history. It certainly didn't come equipped with its own "logic" or "nature"; the "inherent" characteristics identified by Eisenstein were hardly universal and were not there from the start. Wherever they did appear, these features were the product of complex negotiations and contingent historical processes, not the natural attributes of printing technology. "No hard fact of technology dictates what counts as printing," notes Warner. In a somewhat Oakeshottian vein, he adds, "We know what we mean when we talk about printing, but we know that because we are in a tradition; we have a historical vocabulary of purposes and concepts that gives identity to printing, and meaningfully distinguishes for us between books that have been impressed with types and those that have been impressed with pens."

Thus, Eisenstein's account holds only if one accepts a sharp separation between technology on the one hand and society and culture on the other—and then assumes that the former shapes the latter, never the other way around. The way Eisenstein inquires about the historical effects of print on society automatically brackets out the question of how society and culture made print what it is politically, materially, and symbolically. For Eisenstein, "print culture" just happens; it comes already prepackaged in its crustacean cage, its "inherent" characteristics intact and ready for immediate deployment. As Warner observes, by affording the print culture such a mysterious role—remember, she is a McLuhan disciple, after all—Eisenstein loses far more than she gains. "Politics and human agency disappear from this narrative . . . and culture receives an impact generated outside itself. Religion, science, capitalism, republicanism, and the like appear insofar as they are affected by printing, not for the way they have entered into the constitution and meaning of print in the first place."

Johns, writing in his The Nature of the Book (2000), is even more scathing. "[Eisenstein's] press is something 'sui generis' . . . lying beyond the reach of conventional historical analysis. Its 'culture' is correspondingly placeless and timeless. It is deemed to exist inasmuch as printed texts possess some key characteristic, fixity being the best candidate, and carry it with them as they are transported from place to place. The origins of this property are not analyzed." As a result, notes Johns, Eisenstein tends to invoke the print culture or one of its characteristics to fill whatever gaps open up in her analysis or the history itself. Thus, Eisenstein's approach "identifies as significant only the clearest instances of fixity. It regards instances when fixity was not manifested as exceptional failures, and even in the successful cases it neglects the labors through which success was achieved. It identifies the results of those labors instead as powers intrinsic to texts. Readers consequently suffer the fate of obliteration: their intelligence and skill is reattributed to the printed page. To put it brutally . . . Eisenstein's print culture does not exist."

This is much more than an arcane debate between historians of the book.
At stake here is how the history of the printing press—and of technology more broadly—should be done. Eisenstein's approach is to treat technology and its qualities as fixed, ahistorical, and unproblematic—and, by operating with such an impoverished notion of technology, to trace its effects on culture, society, and history. It's the same McLuhanesque approach that Nicholas Carr employs when he writes about "the Net" and compares it to "the book." Warner and Johns propose something quite different; instead of placing technology outside society, we can study how technology and society shaped each other, accounting for any local variations, tracing subtle shifts in the meanings that different communities attached to different technologies, exploring how those differences emerged, explaining how communities went about exploiting these technologies, and so on. This is not a matter of denying that the printing press matters—only of doing its history in a different, more informative and intellectually stimulating manner. It's an attempt to go beyond medium centrism—be it the book-centrism of Eisenstein or the Internet-centrism of Shirky—in order to achieve a richer, more accurate view of book history and Internet history.

As far as the contemporary debate goes, then, the discussion should focus on whether Shirky's Eisenstein-inspired account of "the Internet"—which is really an account of the presumed social effects of "Internet culture" rather than of the underlying physical infrastructure—is the best way to describe and acknowledge the role that these technologies are playing in the world at large. In other words, if we tried hard enough, we might find another way of talking about these technologies that would provide more nuance and not paper over important local differences.

Of course, Shirky and Jarvis show no sign that their accounts of "the Internet," for all their ostensible historicism, might ultimately be based on bad history. Both recast their critics as pessimists, conservatives, and curmudgeons, as people simply opposed to change—the most typical way in which Internet-centrism sidesteps criticisms of itself. Thus Shirky writes, "There is no intellectually coherent conservative position with regard to the printing press. Most of the defenders of current culture don't even try to explain why it was OK that the printing press destroyed scribal production, but not OK that the internet threatens newsprint, or why a proliferation of new creators and experimentation with new forms was good in 1508 but bad in 2008. It is simply assumed that revolutions in the past were good but those in the future are bad." Of course, there is no coherent conservative position with regard to the printing press—this would also be an antiliteracy position—but as the numerous critiques of Eisenstein show, there are many alternative ways of talking about the printing press and its effects (one of Johns's many rejoinders to Eisenstein is titled "How to Acknowledge a Revolution"). If one doesn't see the events described by Eisenstein as a "revolution," then perhaps one will also be less inclined to draw false equivalences between them and whatever is happening today.

Jarvis goes even further in recasting this important debate over how to talk about technology as a black-and-white, pessimism-versus-optimism battle, with the implication, of course, that anyone who suspects that Eisenstein-like accounts might be limiting our debate is out of touch with the modern world.
Here is how he summarizes Adrian Johns's challenge to Eisenstein: "[Johns] accuses . . . Eisenstein . . . of giving too much credit to the printing press. . . . I'm a befuddled [sic] over the roots of the curmudgeons' one-sided debate. Why do they so object to tools being given credit? Are they really objecting, instead, to technology as an agent of change, shifting power from incumbents to insurgents? Why should I care about their complaints? I am confident that these tools have been used by the revolutionaries and have a role. What's more interesting is to ask what that role is, what that impact is."

Jarvis's fundamental mischaracterization of Johns's critique of Eisenstein is only part of the problem. The Jarvis-Eisenstein view of the world presumes that tools are fixed; they lie outside culture and history—an approach that characterizes much of Jarvis's writings about "the Internet" itself. Jarvis's contemporary revolutionaries invariably turn to "the Internet," but "the Internet" they find is unproblematic and unchanging, its democratic nature fixed in stone. In presenting an important methodological critique as technophobia, Jarvis and Shirky are doing their best to hide the fact that a very different debate about "the Internet"—one that wouldn't assume a revolution and wouldn't cut corners with clever buzzwords—is both possible and badly needed. Their notion of "the Internet" is far too broad, fixed, and abstracted from local context.

This is the overlap between the Internet debate and the printing press debate. But there's also a crucial difference between the two: how the former debate is resolved will have far more influence on our future; its ramifications will extend far beyond the community of historians who have been battling with Eisenstein. Moreover, how we choose to resolve the unfolding Internet debate will determine how future historians will study it as well. Too much ink has been spilled in the last few decades to correct for Eisenstein's inaccuracies; we don't want future historians to take the same lengthy detour with "the Internet."

Recycle the Cycle

If Eisenstein's print culture is an example of how clumsily history can be appropriated to frame the present debate about "the Internet," the traffic occasionally goes in the other direction as well—as when our Internet commentators start with contemporary anxieties and travel back in history to show how many of the modern debates associated with "the Internet" are themselves just a subset of much greater, longer debates about networks, information, and technology. There is nothing wrong with their mission per se—some might even argue that this is what history is for—but most such accounts are peculiar in that, in their quest to tell a certain story about "the Internet," they misrepresent and badly mangle the past, leaving us with an impoverished reading of history and a confused game plan for the future.

This should make us pause to ponder whether Internet-centrism—whatever its own origins in bad history—might be nudging us to rewrite the history of other, pre-Internet periods with one simple purpose: to establish a coherent teleological account of how all other technologies paved the way for "the Internet" and how their own governance failed to embrace "Internet values" and may have delayed the arrival of this "network of all networks."
This is the ideology of Internet-centrism at its purest: it suggests what kinds of questions we could and should be asking of the past. As an ideology, it has no need to dictate the answers, for we already know what we need to find in order to complete the grand narrative of "the Internet" itself.

A troubling example of what Internet-centrism does to history—in terms of both mangling the content and giving a second life to arcane, long-forgotten methodologies—can be found in Tim Wu's much-acclaimed The Master Switch. Wu, a legal scholar who coined the term "net neutrality," is a leading contributor to unfolding debates about "the Internet"; The Master Switch is his attempt to explore the history of other technologies—the telegraph, telephone, radio, cinema, television—and illuminate what those technologies can tell us about our current predicaments. This sounds like a noble mission, but anyone undertaking it should be aware of the immense difficulty of engaging with the past on its own terms. At worst, an attempt to illuminate the present by studying the past can turn into a fishing expedition, where the past becomes just a giant toxic aquarium, storing enough factoids and exotic characters to buttress any interpretation of virtually any contemporary trend or phenomenon.

Wu's argument in The Master Switch goes like this: There's something peculiar about information industries, for they tend to be dominated (and intellectually ravaged) by "information emperors"—Steve Jobs-like personalities who strive for absolute control. The dictatorial rule of such emperors and several structural qualities of their information empires usually lead to what Wu calls "the Cycle," which is the inevitable closing of the once open and innovative industries. It happens either because the information emperors are clever but ruthless businessmen or because they co-opt the government into giving them protection from competition. This is how we got Hollywood's studio system, which exercised unprecedented control over what films to make and what issues to censor; a closed telephone network, where AT&T banned users from plugging in their own devices, thereby potentially delaying the advent of "the Internet"; and, more recently, Apple's world of apps, in which a politburo sitting somewhere in Cupertino reviews and approves the apps it likes and deletes those it doesn't.

Wu's proposed solution to this problem is to prevent companies in the information business from integrating vertically—that is, to prohibit companies that create information from owning or creating infrastructure for its dissemination and vice versa. But the government's involvement would end there: Wu's reading of history suggests that government involvement has been mostly detrimental to the growth of information industries. His ideal is to keep both big government and big business out of the information industries; this, according to Wu, is how all successful information industries have developed, including "the Internet," and this is how it should be in the future. Amen.

This might seem like an appealing and elegant argument, but in reality it's just an attempt to come up with one of those "theories of everything." In this instance, "everything" is to be explained by a fixed set of concerns—in Wu's case, concerns over openness and innovation—that have come to dominate our thinking about "the Internet."
First of all, Wu conveniently leaves aside those information industries—like book publishing—in which no dominant information emperor has emerged. The Cycle doesn't go there; it's too crowded. Curiously, one such emperor might emerge very soon—his name is Jeff Bezos, and he runs a small start-up called Amazon—but Wu himself seems to be enamored of Amazon and the price efficiencies it brings. Second, by limiting his history only to America—and why would "the Cycle," if it were real, unfold in America only?—he misses many foreign cases in which information emperors have done much good. Wasn't André Malraux, France's powerful minister of cultural affairs under Charles de Gaulle and the godfather of New Wave cinema, one such emperor, albeit perhaps of a public-service variety? Zooming in on Malraux's career would reveal that the success of the French film industry in the 1960s was the direct consequence of the government's eagerness to subsidize risky low-budget films and support maisons de la culture, where such films could be shown. It's not a story of market-led innovation; quite the opposite.

Information emperors don't have to be seen as evil (perhaps they don't have to be seen at all; Internet-centrism, in Wu's hands, has miraculously resuscitated the much-discredited "great-man-of-history" style of narrating the past). Likewise, governments, despite the many conspiratorial suspicions that geeks harbor about them, can be powerful and benevolent players in the information industry. One doesn't have to travel to France to see that; in fact, a more comprehensive look at the history of information empires in America reveals as much. As Paul Starr has shown in his devastating review of The Master Switch in the American Prospect, even a cursory look at the history of the post office—a communications network created by the government to foster free expression—is enough to disprove many of Wu's theories. The post office was conceived of as a monopoly, and it's been extremely successful in its mission. According to Starr, "The government didn't invite rival postal firms to compete; in fact, it created a monopoly. That monopoly, however, was conducive to free expression because of the policies Congress adopted, which subsidized the circulation of newspapers irrespective of their viewpoint and spread postal service throughout the country." But on "the Internet," no one likes monopolies—they smack of Microsoft and IBM—so this chapter of telecommunications history simply gets thrown overboard. Internet-centrism tolerates no competing hypotheses.

As Starr points out, had the US government followed Wu's dictum that "government's only proper role is as a check on private power, never as an aid to it," it "would not have created the Post Office or fostered the rapid development of newspapers, and American democracy would have suffered. More recently, the United States would not have developed the Internet or public broadcasting"—both of which required massive public financing. Such strong antigovernment sentiment—the belief that government is always a parasite on innovation—is a recurring feature of the geek mentality, which is partly responsible for the disgust many geeks feel toward politics.
As Starr notes, "Government policy, in Wu's distorted recounting, is mostly a record of regulatory capture and craven mistakes that Americans should be ashamed of—even though, strangely enough, the United States has for much of its history been a leader in communications, partly because of the constructive role government has played." Is it really that surprising, then, that a recent column on the technology site InfoWorld was titled "Why Politicians Should Never Make Laws about Technology"? If geeks learn their history from Tim Wu, this sentiment follows quite naturally.

Methodologically, Wu's treatment of information industries is very close to Eisenstein's treatment of print culture: he starts by simply projecting the qualities he associates with "the Internet" back into the past and assuming that the industries and technologies he studies have a nature, a fixed set of qualities and propensities, then proceeds to celebrate selectively those examples that support those qualities and discard those that don't. So Wu starts with the hunch that the openness of "the Internet" is under threat, travels back in history to find trends that suggest all information industries have experienced similar pressures, and returns to the present to announce that history reveals that openness is indeed under threat on "the Internet." That this is the very premise on which he starts his intellectual journey doesn't much matter in the end, because such history has a very clear activist bent; the goal is not to understand the history of technology but to find enough historical arguments in order to—just like in Jonathan Zittrain's case—make "the Internet" live forever.

Such Internet-centrism would be bad in itself, but it is also exerting a very unhealthy influence on technology and media history, where everything that transpired before "the Internet" is now reexamined according to its benchmarks. Historical accounts inspired by Internet-centrism are simply bad history, even if they occasionally make for effective policy advocacy on issues like net neutrality. That Internet-centrism makes us blind to this reality is a reason to worry, not celebrate.

***

So our survey of Internet-centrism paints a rather depressing picture. The very idea of "the Internet" has not merely become an obstacle to a more informed and thorough debate about digital technologies. It has also sanctioned many a social and political experiment that tries to put the lessons of "the Internet" to good use. It has become the chief enabler of solutionism, supplying the tools, ideologies, and metaphors for its efficiency crusades. Internet-centrism has rendered many of us oblivious to the fact that a number of these efforts are driven by old and rather sinister logics that have nothing to do with digital technologies.

Internet-centrism has also mangled how we think about the past, the present, and the future of technology regulation. It has erroneously convinced us that there are no other ways to talk about these issues without downplaying their importance. Internet-centrism has been tremendously helpful for activist purposes—it has rekindled (and occasionally created) geek religious movements that have been crucial to opposing government regulation of digital technologies. But what has been gained in activist efficacy has been lost in analytical clarity and precision.
Internet-centrism's totality of vision, its false universalism, and its reductionism prevent us from having a more robust debate about digital technologies. Internet-centrism has become something of a religion. To move on, we need, as French media scholar Philippe Breton put it, "a 'secularization' of communication." Such secularization can no longer be postponed. We need to find a way to temporarily forget everything we know about "the Internet"—we take too many things for granted these days—roll up our sleeves, and work to ensure that technologies do not just constrain human flourishing but also enable it.

The chapters that follow apply this secularized approach to contexts as different as politics and crime prevention, not just to illustrate what happens once solutionism meets Internet-centrism but also to think through a more productive civic use of technologies so beloved by solutionists.