Michelangelo Zaccarello (ed.), Teoria e forme del testo digitale
Roma, Carocci, 2019, pp. 232
€ 24,00 (paperback)
Progressives versus conservatives, innovators versus traditionalists. Communities seem always to have reacted to any kind of novitas in two opposite directions: some tend to welcome change, emphasizing its merits, while others take a skeptical attitude, highlighting the threats hidden behind a subversion of the status quo.
The paradigm shift produced by the advent of the digital medium in the humanities seems to have equally divided not only the scientific and academic community, but also novelists, poets and ordinary readers. In this context, Michelangelo Zaccarello, Professor of Philology of Italian Literature at the University of Pisa, positions himself cautiously among those who analytically weigh the pros and cons of this revolution, aiming at a constructive conclusion capable of overcoming extremist attitudes on either side. His book Teoria e forme del testo digitale, published by Carocci in 2019, gathers ten essays by leading experts in editorial theory and textual scholarship, translated by Greta Mazzaggio. They are preceded by an unpublished contribution by the editor himself and followed by an afterword by Wayne Storey. The careful order in which the essays are arranged within the volume allows a swift journey through what, according to Zaccarello, are some of the less frequently discussed themes of digital textuality. These essays concern not only philology, ecdotics and textual criticism, but also the worlds of law and economics.
Copyright, social editions and born-digital texts
The reader interested in the latter topics will find some interesting ideas in the sixth chapter, in which Maurizio Borghi and Stavroula Karapapa reflect on the mass digitization of copyrighted works without any authorization from the rightsholders. This operation is linked to Google Books, which “is and remains a profit-making initiative” (2019, 99). The essay then sketches a worrying picture for users’ freedom, tied to the risk of a monopoly on access to knowledge held by increasingly large and powerful companies. What could be the extreme consequences of entrusting our cultural heritage, for centuries kept more or less safe within established institutions (archives and libraries), to new actors whose purposes are essentially commercial?
At the other end of the spectrum lies what Peter Robinson calls “social ecdotics”, whose purpose is to create “social editions” produced by a community of members in the collaborative space of a web platform. Although the “democratic” ideology underlying such projects is praiseworthy, their greatest weakness is the total lack of an authoritative decision-making instance of the kind usually linked to the author’s authority.
The atmosphere darkens again in the contributions by Diana Kichuk and Paul Conway, which any scholar, researcher or student should tackle. Their theme is precisely the potentially poor quality and accuracy of the text files held in digital archives and libraries that result from the electronic conversion of page images through Optical Character Recognition (OCR) software. Acknowledging such a risk does not lead, however, to pure skepticism or to abstention from judgement: a well-structured pars destruens is followed by a confident pars construens. While Kichuk concludes her essay by hoping for a fruitful collaboration between technology and humans, so that human proofreaders can reduce the error rate of text files, Paul Conway discusses the challenge of a project led by Besiki Stvilia, namely “the creation of an efficient tool for the revision of specific volumes and the evaluation of the latter in terms of more or less significant errors” (2019, 194).
A further curious and interesting investigation, conducted by Matthew Kirschenbaum, concerns the impact of word processing on contemporary writers, at once spectators of and actors in a revolution they are possibly unaware of, one resembling the transition that authors experienced from manuscript to print, and later to the typewriter. Sooner or later, specialists will have to acknowledge the need for a “literary history of word processing”, and to “start […] from today’s complex scenario of writing (and rewriting), in which the text is distorted and transformed over the media passages that characterize almost every phase of the process of composition and publication” (2019, 94).
Between tradition and innovation: the role of philology
The wide range of themes resolves into a coherent and homogeneous structure, whose pivot can be identified in the problem of conserving, accessing and using our book heritage. The volume, therefore, will certainly not disappoint humanists (digital and otherwise), especially in its first chapters. In particular, Susan Hockey and Paul Eggert address the problems connected to text encoding in relation to the Text Encoding Initiative (TEI). Both scholars emphasize the shortcomings of such a markup system, which requires documents to be syntactically defined as an ordered hierarchy of content and therefore partly fails to represent complex texts such as literary ones, in which different elements overlap. Acknowledging this limit, however, does not lead to discouragement, but to concrete proposals for improvement.
The volume opens and closes, in a sort of ring composition, with the name of Jerome McGann, a pioneer of the digital humanities. His contributions address key themes not only of the digital environment, but of philology tout court. In the first chapter, the scholar questions the validity of the author’s last will as a criterion guiding the editor’s choices in editions of modern printed texts. This conclusion derives from the analysis of specific cases, including Lord Byron’s Windsor Poetics, which gathers a series of writings conceived for private, manuscript circulation. It is, however, in the tenth and final chapter that the ultimate goal of Zaccarello’s volume seems to be revealed: the wish to return to philology understood as a knowledge that preserves memory.
“The crucial point is that philological attention continues to be applied even when it is recognized that the value of what is preserved will never again be reconstructed. This never again is very important: for the philologist, primary materials are preserved because their very existence attests that they once had a value (…). For the philologist, the dead and the traces of their memory are precious and honorable in themselves (…). This is the knowledge to which philological science is consecrated: it is – I believe – the foundation on which all human science should be rooted.”

(Jerome McGann, “Ritorno alla filologia. La memoria del passato nel contesto digitale”, in Teoria e forme del testo digitale, edited by Michelangelo Zaccarello, Roma, Carocci, 2019, p. 207.)
To learn more
- Italia, Paola (2020). Editing Duemila. Per una filologia dei testi digitali. Roma: Salerno Editrice.
- Mancinelli, Tiziana; Pierazzo, Elena (2020). Che cos’è un’edizione scientifica digitale. Roma: Carocci.
- Shillingsburg, Peter (2017). Textuality and Knowledge. Essays. University Park (PA): Penn State University Press.
- Zaccarello, Michelangelo (2019). Teoria e forme del testo digitale. Roma: Carocci.
The English translation of this article has been revised by Francesca Masiero.