Above and beyond the text of the primal book that serves as its staging point, the processed book has at least five aspects, which may overlap; and some of these aspects are more developed today than others:
A. The book as portal. This is the aspect of ebooks that most people are familiar with. A book becomes a specialized portal by encouraging readers to click through to other sources of information. The most primitive example of this is a book with a built-in dictionary. Every word in an ebook can be linked to its definition, pronunciation, etymology, etc., which augment the reading experience. Some ebooks link to proper names or Web sites where background information on the primary topic can be found. With hyperlinked footnotes an ebook can point a reader to its sources, including in some instances the full text of those sources. The ebook thus becomes a window on a bigger, interpretively supportive world of data.
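To make the mechanism concrete, here is a minimal sketch of the word-as-portal idea: a short routine that wraps every word of a passage in a hyperlink to a dictionary lookup. The lookup URL here is hypothetical; any online dictionary that accepts a word as a query parameter would serve.

```python
# Minimal sketch: turn every word of a plain-text passage into a hyperlink
# pointing at a (hypothetical) online dictionary, so a reader can click
# through for a definition, pronunciation, or etymology.
import re

DICTIONARY_URL = "https://dictionary.example.com/define?word={}"  # hypothetical endpoint

def linkify(passage: str) -> str:
    """Return HTML in which every word of the passage links to its dictionary entry."""
    return re.sub(
        r"[A-Za-z]+",
        lambda m: f'<a href="{DICTIONARY_URL.format(m.group(0).lower())}">{m.group(0)}</a>',
        passage,
    )

print(linkify("Call me Ishmael."))
```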
Can't you do this even with our lowly primal book? You can do some of it. Printed books can have footnotes and bibliographies; they may have other metadata as well, such as an author's preface, an afterword by a scholar, or even a collection of critical essays (see, for example, the excellent Norton Critical Editions). Publishers have done great things with print, and they have every reason to be proud. But the primal book breaks down in the face of the microprocessor much as a horse would in a race with an SUV. The printed book has a footnote, but the processed book can have the full text of the citation. A bibliography in a processed book can be tantamount to a library dedicated to a particular subject. You can read a printed book with a good dictionary at your side, but with the processed book you can look up every entry in Webster's Third New International Dictionary with a mere click or two; and the lucky members of some academic institutions can now read a word's entire history in the OED.
But the electronic portal goes far beyond even this, connecting readers to specialized databases of information and online services. A bibliography in an ebook can link to the online catalog of a nearby university library, where you can determine if a particular book is in the collection. Or readers of Books in Print, a huge reference work marketed mostly to publishing-industry professionals, can look up any title they desire and then check the inventory status of that book with two of the nation's leading book wholesalers. This particular reference book, in other words, has become a "front end" to a business process by which booksellers can restock their stores. The processed book has the potential to make the contents of a book actionable, not merely readable.
One aspect of the book as portal is that it undermines the reading experience even as it augments it. Reading is linear and requires concentration. A portal link takes the reader away from the author's linear design and focuses his or her attention on other text. While that text may enrich the meaning of the original book, it also distracts the reader, who then must reorient himself or herself upon returning to the primary material. As authors become increasingly aware of the potentialities of the processed book, we should expect that they will begin to write with these jumps in attention in mind. Perhaps they will encourage leaping, perhaps not; or perhaps they will learn to accommodate this aspect of the medium, just as audiobook publishers have learned to give their listeners special cues to help them with the transition from print to sound ("This is Moby-Dick by Herman Melville, cassette four, side two").
What will be especially interesting to see in the years ahead is whether authors will begin to regard their own work as portals and begin to write with an eye toward extending the book beyond its own contours. They could never do this in the printed form; it would be peculiar to ebooks. Perhaps some authors will be more open to including obscure references, knowing that the reader can obtain a gloss with a simple click. One wonders what T. S. Eliot would say if he were alive today and could view Ray Parker's online annotated version of The Waste Land (at http://world.std.com/~raparker/exploring/thewasteland/explore.html). On the other hand, it may be that some authors will resent the ease with which obscure references can be glossed electronically. For example, part of the meaning of the notoriously elliptical poetry of Ezra Pound lies in its very obscurity, in the sheer difficulty of catching all the allusions to other works. (For those unfamiliar with Pound, think of the allusive music of Elvis Costello or Smashmouth.) As a processed book, Pound's Cantos would lose some of its aesthetics of difficulty as every allusion is presented to the reader with helpful commentary. If Pound had been aware of the possibility of the processed book, he might have written a different kind of poetry altogether. I am inclined to think, though, that the processed book will make all writing, from serious literature to notes to the baby-sitter, more Pound-like for everyone—except Pound's disciples, who, perversely, will seek to distinguish themselves by the clarity and completeness of their expression. Ebooks are likely to become increasingly compressed as the need to spell everything out in the primary text is lessened by the one-click availability of explanatory texts. Writing, in other words, becomes not simple expression but also computer-assisted calculation. It is not too much to say that Pound, with his enormous influence on Modernist aesthetics, helped create the intellectual pressure that made the development of electronic publishing necessary.
B. The book as self-referencing text. Books consist of words that are organized in a particular way. Change the organization and you change the book's meaning, but some changes reveal aspects of the original meaning that had previously been obscure. You can try this with a printed nonfiction book with an index. Read the book through without looking once at the index. Now turn to the index and read all of the headwords. New patterns will appear. As a distillate of the book, an index becomes an interesting heuristic device.
It goes without saying that computers can do this better. The processed book can index a book in any number of ways, and each method will highlight a different aspect of the primary text. This capability is beautifully spoofed in Italo Calvino's If on a Winter's Night a Traveler, where a literary critic proclaims that she no longer reads literature, preferring instead to study a computer print-out of word-frequency counts. The processed book can show us word frequencies; it can map such frequencies against a statistically determined dictionary of "normal" usage, note the standard deviation, and output the result visually; it can associate certain words with specific characters; it can identify webs of metaphors that even the most attentive of readers may have missed. By identifying these patterns (or, it would be more accurate to say, by revealing these patterns for us to see and interpret), the processed book is doing some of the reader's work. A reader of The Scarlet Letter notes that there is a red "A" sewn on Hester Prynne's bosom, and later notes a passage where Hester walks in a garden of red roses: the connection noted, interpretation is possible. A computer can pick up this connection and many, many more, potentially providing us with some of the richness that we usually associate with rereading. The processed book is spatial: it takes the linear progression of a book and makes events from different times spring to mind simultaneously. It takes the primary book and makes it comment upon itself.
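A toy sketch of this kind of self-referencing analysis, assuming an invented baseline of "normal" word frequencies: count the words of a text and flag those that appear far more often than the baseline predicts. Real stylometric tools work from large reference corpora; the baseline figures below are made up for illustration.

```python
# Toy sketch of the self-referencing book: count word frequencies and flag
# words that are unusually common relative to a baseline of "normal" usage.
# The baseline rates here are invented for illustration.
from collections import Counter
import re

BASELINE = {"the": 0.060, "and": 0.030, "sea": 0.0002, "whale": 0.00001}  # assumed per-word rates

def frequencies(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def overused(text: str, factor: float = 5.0):
    """Return words whose observed rate exceeds the baseline rate by `factor`."""
    counts = frequencies(text)
    total = sum(counts.values())
    return [
        word
        for word, n in counts.items()
        if word in BASELINE and n / total > factor * BASELINE[word]
    ]

sample = "The whale, the whale! The sea gave up the whale."
print(frequencies(sample).most_common(3))
print(overused(sample))
```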
While this particular aspect of the processed book is generally unavailable in the current generation of commercial ebooks, it has been in use in research institutions for several decades. (Indeed, many of the features of the processed book mentioned in this essay will not become widespread for years.) I first saw word-frequency counts used in the study of literature over twenty years ago at an exhibit at the Modern Language Association annual convention. There was an alphabetized list with corresponding frequencies of every term in Joyce's Ulysses—one way to help readers understand a notoriously difficult book. Dictionary-makers now routinely search through large content databases in this way, seeking to isolate new words and new meanings for old terms. (The latter—new meanings—requires a bit of artificial intelligence to be effective.) Dictionaries can also be made to "read" themselves, a good way to check for spelling errors and to make sure that every word that appears in a definition also is given its own headword, making the text of a good dictionary into a closed hermeneutic circle. For the most part, when fiction writers, as in the example from Calvino above, study this self-referencing characteristic, they treat it humorously. For example, a character (a writer) in a novel by David Lodge gets writer's block when he sees one of his own books processed in this way. He simply cannot bear the self-consciousness that comes with knowledge. In this case, humor is the revenge of the primal upon the processed.
C. The book as platform. There is a simple and a complex form of the book as platform. The simple form is where commentary is heaped upon the poor unsuspecting text of the original work. This is not peculiar to electronic books, of course; hardcopy books often feature extensive critical commentary. To some extent this simple form of platform overlaps with the book as portal, in that the commentary is often found by clicking through the primary text. The commentary need not be restricted to formal criticism; it could include such things as a student's highlighted text or notes provided by an instructor. There are a number of companies exploring the simple platform now. We should expect this technical capability to become widespread, especially (at least initially) in higher education, where many students are required to have laptop computers and many instructors supplement classroom activity with online communication. The complex form of the book-as-platform, however, may be a bit obscure. Currently, it is much less developed than the processed book's portal and self-referencing qualities. It is the opposite of the book as portal. As a portal, a processed book points to other things; as a platform, a book invites other things to point to it.
Books want to be pointed to for the same reason that people want to be the center of attention. In a hardcopy book this desire may take the form of crafting curmudgeonly aphorisms, which lend themselves to quotation. In scientific work, the production of primary data can place a particular publication at the center of a huge web of citations. This is an extraordinary aspect of the processed book and bears some reflection. ISI publishes quantitative reports on how often particular scientific articles have been cited by other reports. A high score is presumed to indicate a good paper, which redounds to the author's credit. Well, do more citations mean that a paper is better? And what do we mean by "better" anyway? Or is it simply that we have thrown up our hands at the really hard question, the determination of value (an artifact, we should note, of the primal book), and have chosen instead to use a computer-assessable mechanism, a simple count, as a proxy for the hard question? This is not to say that this measure of the processed book is wrong; it simply isn't exactly right. We don't use it because it gives us the right answer, we use it because it gives us an easily derived answer—not unlike the old joke about a boy who loses a quarter on one end of a dark street, but chooses to look for it on the other end, because the light is better.
A platform is a specific and important thing in the computer industry; Bill Gates owes his wealth to having overseen the development of one of the most significant, the Windows operating system. A platform is what other things rest upon. Those other things (called applications) draw on or "call" the resources of the platform to perform certain tasks. So, for instance, software developers don't have to teach computers how to output color on the display; they simply invoke the platform's capability to display in color.
In the world of books, reference books most readily lend themselves to being reinvented as platforms. Dictionaries are now being created with software tools to allow them to be "called" from any word displayed on the screen: highlight the term and click and the definition appears. Encyclopedias are being used as platforms as well, though the implementation is generally limited to manually inserting hyperlinks in the primary text, links that then "call" the encyclopedia database. We shouldn't limit our thinking about platforms to natural-born backgrounders like reference books, however. For example, some books, in their primal form, come to be thought of as seminal. On the Internet something that is seminal can be instantiated as a parent text that links to all its offspring. When a book achieves seminal status, the publisher may then provide tools to make it easier for other works to link to it, converting it from a primal text to a platform. The ultimate book-as-platform is the Bible, which serves as a platform for a large swath of Western civilization. The Bible has yet to be published as a platform, though it has been published as a limited portal.
To publish the Bible as a platform means not only getting the content "right" (which in this context means having content that other people want to build upon) but also providing tools for other developers, whom we are likely to call authors or publishers, that make it easy to build on that platform. Metatags, information that helps to define the components of the documents that they are attached to, are such tools; they can identify such things as graphical categories ("this is a picture"; "this is a paragraph indentation"), rhetorical categories ("this is a paragraph"; "this is the beginning of a chapter"), and topical categories ("this passage is about cats"; "this passage is about dogs"), even when obvious keywords ("cats" and "dogs") are missing ("this is a passage about household pets"). Metatags can be weighted, which means that their importance can be ranked. This paragraph, for example, includes the keyword Bible, but the passage is not about the Bible, so a Bible metatag should be given a low weighting. A metatag for literary theory would be given a higher ranking, even though neither term appears here; and a metatag for a consultant's marketing tool would get the highest ranking of all. At the risk of pushing the metaphor too far, the publisher of a book-as-platform needs to "expose the API," the application program interface, allowing other authors and publishers to write to the platform. The content of the platform is then conceived of as information objects, defined and discernible modules that can be invoked by other works.
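As a rough illustration of what "exposing the API" of a book-as-platform might look like, here is a sketch in which passages are stored as information objects carrying weighted topical metatags, and other works "call" the platform by asking for the objects that best match a tag. The objects, tags, weights, and threshold are all invented for the purpose of the example.

```python
# Hypothetical sketch of a book-as-platform: passages are stored as
# "information objects" with weighted topical metatags, and other works
# "call" the platform by requesting objects that match a tag.
from dataclasses import dataclass, field

@dataclass
class InformationObject:
    text: str
    tags: dict = field(default_factory=dict)  # tag -> weight between 0 and 1

PLATFORM = [
    InformationObject("This paragraph mentions the Bible only in passing.",
                      {"bible": 0.1, "literary theory": 0.6, "marketing": 0.9}),
    InformationObject("A passage about household pets.",
                      {"cats": 0.8, "dogs": 0.8, "household pets": 0.9}),
]

def call(tag: str, threshold: float = 0.5):
    """The 'exposed API': return objects whose weight for the tag clears the threshold."""
    return [obj for obj in PLATFORM if obj.tags.get(tag, 0.0) >= threshold]

for obj in call("household pets"):
    print(obj.text)
```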
The book-as-platform strains the traditional sense of what a book is, making it hard to reinvent or resuscitate a traditional book for platform work. For this reason, some of the work currently being done to create content platforms is original to the Internet, though the business prospects for the entities in this area are still uncertain. One venture is producing a set of reference data keyed to news items. For example, a reader who comes upon a reference to Ariel Sharon can click to a brief article about this figure. Similarly, a text reference to petroleum will link to an article about oil and the oil industry. The content created by this venture differs from traditional reference works in that it was designed with the Web in mind from the outset. The articles are short and can be easily displayed in a window on a computer screen, and the article topics are generated by scanning items that actually appear in the news (unlike a traditional encyclopedia, many of whose entries may be obscure to people who only read newspapers). A related venture has chosen not to create new content for a platform but has developed a database of Web site entries. So, when a reader comes upon a reference to Sharon, he or she can link to a small group of Web sites that contain information about Sharon, rather than to a specific article. As more and more reference information is published on the Web, the split between content that is made for the Web and content that is made with another medium primarily in mind will close.
There is, I believe, a very large business opportunity in creating books-as-platforms, especially by concentrating on reference material in specialized markets. General reference works—a new version, say, of Encyclopaedia Britannica, but with much more extensive coverage and an atomistic, short-entry editorial strategy—are tempting, but the cost of creating and maintaining such a work is staggering and the economic prospects discouraging. Better to work in vertical markets, whether for consumers or professionals. A definitive online encyclopedia of garden flowers, organized as information objects, would be a good project, but even better would be a highly technical encyclopedia of the genomes of garden flowers, including the genetic maps of each plant and flower, with downloadable files of data for simulation of genetic engineering. From a business point of view, as a rule, the narrower, the better; the more technical, the better; and if the data can be made instrumental—how things work—as opposed to interpretive—what things mean—better yet.
Publishers will devise various means to monetize their investments in books-as-platforms, but finding the right economic model (that is, the one that provides the highest return on capital) will be a process of trial and error. Publishers with seminal content may charge other publishers for the right to "call" the seminal property, but, on the other hand, if the costs are perceived to be too high, the seminal work may not be able to generate a substantial network around it, thereby undermining its seminal status. There is a trade-off here between short-term and long-term economic gain—which is another reason that publishers will continue to get bigger and bigger, as only large organizations can finance a long-term vision. An analogous situation exists today in the library world, especially the public library segment, about which hardcopy trade or consumer publishers have always had ambivalent feelings. Most trade books are sold through retail outlets (bookstores, discount clubs, and online), but a book placed in a library's collection has the potential to cannibalize retail sales. On the other hand, there is strong evidence that library collections serve as a marketing mechanism to stimulate retail sales. Publishers therefore support public library sales provided that they don't become a substitute for retail distribution. This leads to a paradox: If we ever developed a fully funded public library system in the United States, where everyone looked to their local libraries as the centerpiece of civic life, publishers would stop selling books to them.
I suspect that a split will develop between marketing books-as-platforms and books-as-applications (that is, mere books). Books-as-platforms, seeking ubiquity and determined to keep their transactional costs down, are likely to be marketed to pre-existing communities such as the faculty and students of a university, the employees of a corporation, or a special-interest group (the local chess club licensing an encyclopedia of chess openings). Books that draw on these content platforms will be marketed both on a community basis and to individuals. This two-track marketing structure will encourage communities to take a larger role in their members' informational needs, which in turn will encourage closer community involvement. This is not a "one world" scenario, but rather one of tribal associations born of common economic interests.
It is worth noting a curious aspect of the book-as-platform, namely, that books that are created with this quality in mind are something of a self-fulfilling prophecy. A book-as-platform announces its availability to be invoked by other books in part through the suite of tools it makes available to third parties. A good book with no tools will not get invoked often. A bad book with good tools will not, one hopes, be invoked at all. But a good-enough book with good tools is likely to get invoked more often than the tool-less good book. Certain network effects—things external to a particular product or service that tend to support and even reinforce the original product, as the huge quantity of third-party software supports the Microsoft Windows platform—may then kick in, which will tend to strengthen the platform aspirations of the good-enough book. The Google search engine, for example, ranks Web sites in part by counting the number of links other sites have to the primary site; and since search engines, of which Google is the current leader, are a major source of traffic to Web sites, a large number of inbound links can result in even more inbound links. This means that the creation of a successful book will increasingly involve an awareness of what tools are necessary to inspire invocation. It is not enough to say something; it must be said in a way that others will choose to say it as well.
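The network effect can be illustrated with a toy link graph: rank each book or site (node) by how many other nodes point to it. Real search engines use far more elaborate measures, but the simple inbound-link count conveys the basic idea; the graph below is invented.

```python
# Toy illustration of the network effect: count how many other nodes point
# to each node. The link graph is invented; real rankings (e.g. PageRank)
# weigh links rather than merely counting them.
from collections import Counter

# node -> nodes it links to
LINKS = {
    "good-book-with-tools": ["reference-a", "reference-b"],
    "commentary-1": ["good-book-with-tools"],
    "commentary-2": ["good-book-with-tools", "reference-a"],
    "toolless-good-book": [],
}

inbound = Counter(target for targets in LINKS.values() for target in targets)
for node, score in inbound.most_common():
    print(node, score)
```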
But how about the outstanding book, the book whose felt force is so great that it demands that we pay attention, despite an absence of platform tools and inept publication? Provided that we understand that the club for outstanding books is a small one indeed, the exceptional book will foster its own followers, who will assemble the network of processing tools around it. Despite its efforts, the processed book cannot ultimately do away with the exceptional primal book, whose very intensity exposes the limitations of computing.
D. The book as machine component. We have been spoiled by books. We believe that they have been written for us to read, that their ultimate goal is to reach us, that as readers we occupy a central place in the drama of culture. If the processed book attempts to separate the author from the text of his or her own work, we should not be surprised that the reader will soon fall under attack. One aspect of the processed book is to create books that are intended to be read by machines and embedded within machine processes. It is only a matter of time before books will be created with a machine-audience in mind. Considering the slow growth of the publishing industry today, the future of publishing may be to serve this new constituency.
Research into the use of aspects of human culture—books, for instance—as parts of computer algorithms has been going on for decades; some examples of this work are now finding their way into consumer devices and services. We commonly encounter text-to-speech synthesis (TTS) technology, for example, when we dial an information operator and are greeted with a robotic voice. TTS works by developing a collection of sounds that are mapped onto the letters and words of the text in question. While there are only about forty such sounds (phonemes) in English, most TTS engines generate more sounds than that in order to reduce the choppiness of pronouncing one letter at a time; indeed the technical sophistication that "sits behind" what would seem to be a simple sound is dazzling and tends to overshadow the lexical content that it generates. Millions of personal computers now come with this technology built in. You can have your e-mail read to you in a robotic voice, if you want to, which may not make much sense for someone sitting at a computer, but is a great convenience for someone driving a car who can't or shouldn't take his or her eyes off the road; such mobile TTS is now available. One Silicon Valley company has developed a TTS tool for reading books to the blind, which is a wonderful addition to the world's media, as only a small portion of published books are ever recorded as audiobooks (and even then, for reasons of cost, mostly in abridged format). TTS will eventually find its way to all books, giving the reader of a processed book the choice of reading or listening. (This, by the way, will diminish or even destroy the US$2 billion audiobook business as we know it today, as the rights for a book's text intended to be read and the rights for audio will converge.)
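The lookup at the heart of TTS can be sketched very crudely: map each word of a sentence to a phoneme sequence via a small pronouncing dictionary, falling back to letter-by-letter spelling for unknown words. The phoneme symbols and dictionary entries below are invented; real engines model thousands of context-dependent sound units.

```python
# Highly simplified sketch of TTS lookup: words map to phoneme sequences via
# a tiny pronouncing dictionary (entries invented), with a crude spell-it-out
# fallback for words the dictionary does not contain.
LEXICON = {
    "call": ["K", "AO", "L"],
    "me": ["M", "IY"],
    "ishmael": ["IH", "SH", "M", "EY", "L"],
}

def to_phonemes(sentence: str):
    phonemes = []
    for word in sentence.lower().split():
        word = word.strip(".,!?;:")
        phonemes.extend(LEXICON.get(word, list(word.upper())))  # fallback: spell it out
    return phonemes

print(to_phonemes("Call me Ishmael."))
```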
The reverse of TTS is voice-recognition technology, though this technology is not as far along as TTS. A voice-recognition system incorporates a dictionary, which helps identify the words being spoken. Some of the current systems require that a particular speaker "train" the system for a period of time to make it work effectively, but even so, the principle of embedding a dictionary is unchanged. One way to improve the accuracy of voice recognition is to restrict the vocabulary of the system. This is what is going on when you are talking to an automated voice mail system, which may tell you something like: "At the tone please say your Social Security Number or you may key it in using your phone's dialpad." Creating such a restricted vocabulary is the equivalent of making an abridged paperback version of a dictionary, except that the requirements of voice-recognition technology are largely determined empirically, by studying the words users actually employ and expanding the vocabulary if users effectively demand it. This feedback mechanism, which is peculiar to the processed book, can take place quickly, even instantly. The processed book, in other words, "learns" and adapts itself to the actual circumstances of its use. Traditional books, on the other hand, like diamonds, are forever.
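This feedback mechanism can be sketched as follows, with an invented vocabulary and threshold: a recognizer limited to a small word list expands that list once callers have used an unknown word often enough, which is the sense in which the processed book "learns" from its use.

```python
# Sketch of a restricted-vocabulary recognizer with a feedback loop: words
# outside the vocabulary are tallied, and once a word is used often enough
# it is added. Vocabulary, threshold, and utterances are invented.
from collections import Counter

vocabulary = {"yes", "no", "operator", "zero", "one", "two", "three"}
unknown_counts = Counter()
EXPAND_THRESHOLD = 3  # add a word once enough callers have used it

def recognize(utterance: str):
    recognized = []
    for word in utterance.lower().split():
        if word in vocabulary:
            recognized.append(word)
        else:
            unknown_counts[word] += 1
            if unknown_counts[word] >= EXPAND_THRESHOLD:
                vocabulary.add(word)  # the system adapts to how it is actually used
    return recognized

for call in ["operator please", "agent", "agent", "agent", "agent please"]:
    print(recognize(call))
```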
It may appear that reference books and dictionaries in particular have an advantage over other books in becoming machine components, but in fact all books aspire to the perfection that is a machine. It may require a multistep process, however. Let's take romance fiction, for example. How could we possibly make a machine want anything to do with a romance novel? The first step is to convert the novel into a collection of indexes through text analysis, much as described above in the section on the self-referencing book. Such indexes are intermediate documents that could be of value to marketers, who might extract word-frequency lists (or some other underlying textual pattern) to assist in crafting copy for advertising. But the processed book can do more. Why one romance novel when you can have one thousand? And let's link the indexes of each novel to a number of fields of metadata such as author, date of publication, rate of sale, and the geographical distribution of sales. Let's also capture data at the point of sale, directly from the cash register, and update our metadocument in real-time. Now we have a dynamic database that can tell us how the moods and tastes of a particular market segment are changing minute by minute. (We could get even better information if these books were being read online, where each page view could be assessed.) We may as well disintermediate the copywriter and have the processed data tweak the Web site of the client company; or we could have the dynamic data feed a digital printing press, where last-minute changes to a marketing brochure can be made.
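A rough sketch of the pipeline just described, with invented titles, regions, and sales events: each novel carries a word-frequency index and sales metadata, point-of-sale events update the counts, and the aggregate reports which words are "selling" at the moment.

```python
# Sketch of the book as machine component: per-title word-frequency indexes
# joined with sales metadata, updated by simulated point-of-sale events.
# Titles, texts, regions, and sales are invented for illustration.
from collections import Counter
import re

def index(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

CATALOG = {
    "Desert Hearts": {"index": index("storm duke desert heart heart"), "region": "Southwest", "sales": 0},
    "Harbor Lights": {"index": index("sailor harbor storm heart lighthouse"), "region": "Northeast", "sales": 0},
}

def record_sale(title: str) -> None:
    CATALOG[title]["sales"] += 1  # in practice, fed straight from the cash register

def trending_words() -> Counter:
    totals = Counter()
    for record in CATALOG.values():
        for word, n in record["index"].items():
            totals[word] += n * record["sales"]
    return totals

for title in ["Desert Hearts", "Desert Hearts", "Harbor Lights"]:
    record_sale(title)
print(trending_words().most_common(3))
```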
Outside the world of books, something like this is already underway. The Benetton organization captures data on every garment it sells, data that is then sent to a database and evaluated for trends in fashion. These evaluations are then moved to the production line, where they can influence the dye lots. It is quite possible that the color of shirts in the Benetton inventory could change from one week to the next as a direct response to the information being fed to the factory floor from the cash register. The introduction of the processed book to such a system will represent a refinement, the addition of a culturally based weighting mechanism to optimize the effectiveness of the inventory management and merchandising system.
E. The book as network node. The primal book is a discrete item. The processed book is a node on a network. Now we know what has happened to the primal book: it resides as a node, linked to other nodes, many of which themselves are primal books. Compare this to DNA: Each individual has his or her own unique DNA, but this DNA has much in common with that of all other people, living and dead—and, for that matter, not yet born. All men are cousins. And this is true of books as well: By being placed within a network, where it is pointed to and pointed from, where it is analyzed and measured and processed and redistributed, a book reveals its connections to all other books. When Hemingway remarked that all American literature can be traced back to Huckleberry Finn, he was acknowledging kinship. When a text analysis program determines that writers from one region use more dependent clauses than writers from another region, it is defining kinship.
The relationships between the nodes of the network can be multiple. One node can be used as a machine component and aid in creating another node, which serves as a platform for a third, which supports the first. The network map—how one node connects to another—is a portrait of the processed book, showing its ancestry, its descendants, and the relationships between the entire family. This map is itself a document—may we call it a book?—or metadocument, which derives from the very field it comments on and in turn influences that field, much as consciousness influences the behavior of a human being.
This is all pretty abstract for someone whose ambition is to write a simple, self-contained text such as a memoir or a category novel (mystery, romance, science fiction, etc.). The problem is that the idea of a self-contained text is a product of the fixed medium of print on paper. The challenge that the processed book puts to writers is that of working with a double consciousness: primal authors looking over their own shoulders, watching the book being processed even as it is written. The primal book lives under surveillance. It is hard to imagine many authors whose work will not be influenced by the fact of being observed by a camera. And it is important to note that this is not a matter of choice. While some romantic writers will try to bat away the intrusions of processed media, those who embrace the network will be the most successful, success being determined by the survival and "pointability" of the text or node.
It is worth noting that the nodal aspect of the processed book has very important business implications, which are likely to reshape the publishing industry in the years to come. The threat to copyright, for example, may be pushed back. Publishers have been watching the tribulations of their brethren in the music industry and fear copyright piracy like nothing else. Piracy is not peculiar to digital books, of course, as any publisher who has traveled in Asia can tell you, but it is much more extensive when copies of books can be sent around the world on the Internet. While Napster, the first mass-market file-sharing service, has been clipped by the music industry, file-sharing still takes place on Napster's underground successors (LimeWire, Aimster, etc.) and within the universe of Usenet. Many books are now being pirated on these services, which has led not a few publishers to steer clear of distributing digital copies of their products, for fear that they will end up in the file-sharing underground.
A processed book, however, can be published as a node in a network, with connections to other books, commentary, online library card catalogues, teachers' recommendations, and so forth. If the network is usefully developed (and this is an important "if," for links and other connections for their own sake can be a distraction), the value of the book-as-node is greatly enhanced by being part of it. Pirated copies of the primary book, the node, would not have all the network connections, making the pirated copies less valuable. This would serve to bring readers back to the nodal book, not for its primal value (because the content is elsewhere available for free in pirated copies) but for its processed, networked value. This would embolden more publishers to make their books available electronically, provided that they had the means to plug the book into a network quickly.
Something like this is already going on in the world of academic journals. As noted earlier, Reed Elsevier, the leading publisher of journals, has built a powerful search engine for its collection of academic research. This is a shrewd move, and it may be that Reed's customers, primarily research libraries, don't yet see what is going on. There is a growing movement in the research community for changes in the way research is published, with not a few people arguing that all academic research should be made available online for free to anyone with an Internet connection. Reed has been the whipping boy for this movement, perhaps with some reason, as it has pushed through aggressive price increases on many of its journals. Many articles are now appearing in various pre-publication forms on the Internet, which could undermine the value of Reed's journals, and there is a move afoot (see, for example, the writings on this subject by Stevan Harnad) to have researchers self-archive their work prior to submitting it to a journal. This would give a publisher pause. If an article is self-archived on the Internet, then anyone with a Web browser can read that article for free. Why then would librarians continue to purchase journals, especially if the prices continue to rise?
Reed's answer is to create a processed book, though, of course, it is never phrased this way, nor is the underlying strategy ever expounded on. Reed's search engine adds value to the journals indexed and searched; the extensive links add more value. To the database of journals are added many public domain documents, all of which can be searched at the same time. The database gets bigger, and thus the need for a good search engine becomes greater. Now the value of any one document is significantly augmented by virtue of its being part of the network that Reed has created. If copies of these articles are placed in a self-archive, what value they have is theirs alone, assuming that anyone can find them; but placed within the network, the value of the nodes rises. The inclusion of public domain documents is particularly crafty. Reed has migrated the value from the public domain documents themselves to the search engine, the dynamic metadocument, which helps a reader find the underlying documents. In some respects Reed is co-opting the public domain. So who needs copyright? The economic challenge for content creators and publishers is to create content that demands incorporation into a network and to make sure that the network's domain-specific search capability is always a step ahead of general-interest search engines such as Google.
Can this work for books as well as journals? It can and it will. By the time we get to the twenty-third title in the Harry Potter series, ebooks may be ubiquitous. The new title will be published electronically and will have built into it such things as links to key passages and characters from the previous books, a Harry Potter dictionary, connections to Web sites for Harry Potter clubs, and much, much more. There will be a temptation to pirate the text, but the pirate won't get the built-in links to the trailer to the next movie—not cool, as any kid will tell you. Piracy will be kept in check by reinventing the highly primal Harry Potter titles as processed books. The economics of publishing will demand it.