Module 4: The Dark Side of Digital Culture

Internet access is rapidly coming to be seen as a human right, like access to healthcare or the exercise of free speech. It has become one of the most effective vehicles through which we practice our “constitutional liberties” – our spiritual, public, and political freedoms, such as freedom of thought, word, opinion, religion, and conscience – and it is often the means through which we organize peaceful associations of individuals. But according to Tim Berners-Lee, “Humanity connected by technology on the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken.”* The original vision of cyberdemocracy he had in mind when he invented the web has also brought to life a dark, photo-negative version of itself. For all the ways the web has made possible things we could never have dreamed of before 1990, it has also created some very real problems, the most pressing of which are data manipulation and the dissemination of false information.

Because of the central role the Internet has come to play in our lives, it has become imperative that we find a better way to protect and regulate it, as we would any other central institution created by the people, for the people. This is why Berners-Lee has now created a Magna Carta for the internet, “a contract to make the web one which serves humanity, science, knowledge and democracy.”* The contract is fundamentally based on ensuring that the internet remains a public resource, one that is safely accessible to all and that protects our private data. I believe this contract should be enforced – and probably should have been many years ago – but I am also skeptical that much will change.

Capitalism is a powerful force in our world, perhaps the most powerful (behind climate change). The tenacity with which humans continue to prioritize profit over the public good never ceases to astound me, and probably never will. The internet, which was created in the spirit of collective intelligence and democracy, has become in many ways a means through which we sell ourselves to ourselves. But the Internet is also not monolithic – it is used for different purposes, in different ways, at different times. It is simultaneously an information source, a communication tool, an entertainment database, a creative platform, a shopping center, and many other things. Perhaps one of the big issues is that profit-driven tech giants control the internet in all its multiplicity, rather than being constrained to one particular aspect: they gather the same data on us whether we are researching an academic article or doing some Cyber Monday shopping on Amazon. If one of the goals of Berners-Lee’s contract is for companies to respect consumer privacy and personal data, honoring it would require reversing practices that have been in place – and that have been extremely profitable – for over a decade.

And yet, in order to build on all the things that are positive about the Internet, this kind of regulation is a necessity. In Becoming Virtual, Pierre Levy states, “Only in reality do things have clearly defined limits. Virtualization, the transition to a problematic, the shift from being to question, necessarily calls into question the classical notion of identity, conceived in terms of definition, determination, exclusion, inclusion, and excluded middles. For this reason virtualization is always heterogenesis, a becoming other, an embrace of alterity” (34). The virtualization afforded by the internet is turning further and further toward alienation, what Levy calls “the intimate and menacing opposite” of heterogenesis. In order to enable the proliferation of collective intelligence – itself a process of embracing alterity – we urgently need a way to stop the polarization and division currently flourishing on the Internet, and to reestablish sovereignty over our own data.

 

*https://www.theguardian.com/technology/2018/nov/05/tim-berners-lee-launches-campaign-to-save-the-web-from-abuse

Module 3: The Digital Panopticon

In his essay “The Algorithm and the Watchtower,”* Colin Koopman states: “The present moment of our obsessive data production seems to be defined by a genre of social media in which we have come to recognize ourselves in our online ‘profiles.’” Further, he writes, “we have also become subjects of our data, what I like to call ‘informational persons’ who conceive of ourselves in terms of the status updates, check-ins, and other informational accoutrements we constantly assemble.”

It is fascinating to consider how our self-perception has changed through this data subjectification. All the posts, pictures, and updates we publish on social media become part of a carefully crafted online persona (so much more than a ‘profile’ now), and while we are busy narcissistically falling in love with our online selves, companies are silently collecting the information that pours out of us, like dipping a bucket into a gushing stream. We have learned to love the panopticon, and we willingly embrace the surveillance we know we are under. It may be your friend who likes that picture with the caption you so carefully worded to get the most likes, but it is the data algorithms that read it most closely of all. Not only that, but we invite devices into our homes (‘smart’ TVs and speakers, Alexa, Google Home, etc.) that facilitate even more of this data-gathering; we are well aware of it, but we choose convenience over giving in to the fear of being monitored. What are the costs – political and financial, but also personal and psychological – of being “informational persons”?

Companies use algorithms to constantly gather information about us, both to sell that information to other companies and government organizations and to sell us products more efficiently; we are both the consumer and the product. At the same time that they are gathering information, we are developing a self-perception based on our online selves, which is shaped in part by the algorithms: Facebook, for example, ‘learns’ the types of things you’re interested in seeing in your news feed and then foregrounds those articles, videos, and ads. You copy and share that information, you buy into the strategy, and you make it part of your online identity. This creates a strange feedback loop in which the algorithms gather information based on the information they provided you, which was itself based on your information.
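To make that feedback loop concrete, here is a minimal toy simulation in Python. Everything in it is hypothetical – the topics, the numbers, the update rule – and it sketches only the general dynamic, not Facebook’s or any platform’s actual system: a feed shows the topics its model scores highest, updates those scores from the clicks it elicits, and in doing so narrows what the user sees.

```python
import random

# Toy model of an engagement-driven feed. An illustrative sketch only –
# not a description of any real platform's recommendation algorithm.

TOPICS = ["politics", "sports", "recipes", "memes", "science"]

# The user's genuine interest in each topic (hypothetical values).
interest = {t: 0.5 for t in TOPICS}

# The feed's model of the user starts out uniform.
feed_score = {t: 1.0 for t in TOPICS}

def pick_story():
    """The feed foregrounds whatever its model currently scores highest."""
    topics, weights = zip(*feed_score.items())
    return random.choices(topics, weights=weights, k=1)[0]

for _ in range(1000):
    topic = pick_story()
    clicked = random.random() < interest[topic]
    if clicked:
        # The feed learns from the click it elicited...
        feed_score[topic] += 1.0
        # ...and engaging nudges the user's own interest upward:
        # the loop feeds on its own output.
        interest[topic] = min(1.0, interest[topic] + 0.01)

print({t: round(s) for t, s in sorted(feed_score.items())})
# Typically one or two topics come to dominate: the model's early lucky
# guesses, amplified by the clicks they produced, crowd out the rest.
```

Run it a few times and the scores rarely stay balanced: whichever topics happen to get clicked early are shown more, clicked more, and reinforced – the loop of the algorithm learning from behavior it helped produce.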

On a psychological level, much has already been said about our society’s obsession with image (even before social media was in common use, my generation was said to be image-obsessed), but neuroscientists have yet to learn the long-term implications for our brains of investing so much in our second selves. We are already seeing some disturbing trends, however: https://www.cbsnews.com/news/snapchat-dysmorphia-selfies-driving-people-to-plastic-surgery-doctors-warn/

But what is even more interesting – and disturbing – to me, and is not yet being widely discussed, is not so much our society’s narcissism as our willingness to be constantly monitored by the metaphorical watchtower (Big Data). As Koopman points out, “though nearly all of us have a vague sense that something is wrong with the new regimes of data surveillance, it is difficult for us to specify exactly what is happening and why it raises serious concern, let alone what we might do about it.” He then advocates for the adoption of the term “infopolitics” to describe the close link between politics and information. I think we are willing to accept this level of surveillance for a couple of reasons: first, we have become too accustomed to the convenience provided by these data-monitoring websites and devices to change our habits, and second, the system feels too pervasive for us to do anything about; we feel powerless. We know that our selfhood depends upon, even consists of, the data we put out into the world. But knowledge is power: if we understand what is at stake in becoming “infopersons,” we can begin to grasp the larger picture, and perhaps gradually make positive changes in our lives as digital consumers and in our habits as data producers.

 

*https://thenewinquiry.com/the-algorithm-and-the-watchtower/

Module 2: Remediation and Translation

How do we conceptualize text? Is a text a sacrosanct object, generated in mystical circumstances by an all-powerful Author, delivered into a reader’s hands through the powerful machine of print capitalism, or delivered directly into a reader’s cognitive space through a screen? Or is it something more fluid, captured only fleetingly in the network of signs that is language, capable of producing meaning – coming alive – only in the minds of its readers? Or can it be both?

In 21st-century digital culture, text has reached another stage in its evolution. It is no longer bound to just one form or attached to just one author. It can be the image of a meme, shared by thousands around the internet, acquiring new meaning in each context; or it can be a vlog post in which someone recites Beowulf in the original Old English. We have seen how the form and material of the text contribute to its meaning, and therefore “every change in the text’s material body produces new meaning” (Lollini). This remediation is also a form of translation: text that originated in one form is translated into another, creating a new context in which the reader can forge a new relationship with it and give it new meaning.

Of course, as Lawrence Venuti (drawing on Derrida) points out in The Translator’s Invisibility, “both foreign text and translation are derivative: both consist of diverse linguistic and cultural materials that neither the foreign writer nor the translator originates, and that destabilize the work of signification, inevitably exceeding and possibly conflicting with their intentions.” I take this as my point of departure when considering my role as translator – there is really no such thing as a ‘fixed,’ authentic, original text that is then distorted by translation. Rather, every text’s ‘meaning’ is contingent upon its specific linguistic and historical context. When I work on a translation of Pellegra Bongiovanni’s Risposte a nome di Madonna Laura alle Rime di messer Francesco Petrarca in vita della medesima, I do so with an awareness that I am reconstituting the text in accordance with values, beliefs, and representations that preexist it in 21st-century English. I am therefore under no illusion that I am translating with perfect semantic equivalence, and I try to emphasize the difference of the Italian text without completely ‘domesticating’ it – the method Venuti refers to as ‘foreignization.’

At the same time that I am collaboratively translating from Italian to English, I am also translating what was originally a printed text into a digital document, and then turning that document into tweets. In this process I am essentially rewriting the text into a new linguistic, historical, and formal context: Bongiovanni’s 1763 Petrarchan sonnets are brought onto a 21st-century social media platform. Now, like any text brought into (or originated within) cyberspace, they will become part of our collective conversation, the dialogue we are constantly part of as wreaders (readers and writers). Digital literature, as Rebecca Walkowitz points out, approaches digitization as “medium and origin rather than as afterthought…it is a condition of [its] production” (Born Translated, 4). Cyberspace is not merely the vessel through which the sacrosanct object of text is transmitted to the reader, as the Eucharist passes from the hands of the priest to the tongue of the worshipper; it is an active medium in the text’s production.

 

Module 1: A New Hope?

I recently listened to an episode of the podcast Radiolab that completely changed the way I think about the future of our society’s media obsession and the way we navigate the digital world. In it, they discuss two recent developments in technology that could have a profound impact on the many ways we communicate information through audio and video. The first is Adobe VoCo, a program that allows users to edit voices. Beyond just rearranging words in a voice recording, Adobe VoCo can make a person “say” something they never said at all, and the editing is so seamless that the human ear cannot detect it. The second development comes from the Graphics and Imaging Laboratory at the University of Washington, Seattle. It is essentially a new type of video editing that enables anyone to download a video of a person (George Bush, in their example) and then film someone else making any kind of facial expressions. In real time, those facial expressions are superimposed on the person in the original video, so that it appears George Bush is actually raising his eyebrows or smiling when in reality he did no such thing. It is essentially a form of puppetry. Ira Kemelmacher-Shlizerman, a professor in the computer science department at the University of Washington who helps run the lab, claims that the most exciting potential use of this technology is to develop “telepresence,” a type of hologram modeled after science fiction stories – in other words, the ability to virtually bring someone back from the dead (think Star Wars: The Last Jedi). The technology also has practical applications for movies, television, advertising, and other media. Video manipulation joined with voice manipulation creates the ability to make videos of anyone you want saying anything you want. When asked about the potential nefarious uses of this technology, Kemelmacher-Shlizerman replied that scientists are just “doing their job” in inventing the technology, and that it is the user’s responsibility to utilize it responsibly. This made me think about what we have been discussing regarding the responsibility of the “wreader,” and the changing identity of the “wreader” when engaging with digital text.

Since the beginning of reading as a practice, we have learned to establish a “semantic landscape” (Levy, Becoming Virtual, 47) out of which we fabricate meaning, or actualize it. Therefore, even when what we are actualizing is fictional, the text must still be able to reach the images and words already in our minds, must be able to establish those semantic geographies, must initially have a certain kind of stability in order to enable us as readers to create meaning (even if it then disappears in the process). But what if there is no longer any semantic stability? What happens when the borders between fact and fiction, news and entertainment, become blurred? If text is an interface to ourselves, how do we then understand ourselves if what we expect to be “truth” is actually fiction?

Levy points out that with hypertext, our opportunities for producing meaning are exponentially multiplied, because hypertext connects us not with a specific, fixed text but with constantly updating data. Text thereby becomes deterritorialized, no longer fixed, and comes to approximate the fluctuations of human thought itself. This capacity of text in cyberspace has made it possible to create a symbiosis between personal learning and collective learning, because whenever someone uploads a text online, it is open to editing, sharing, incorporation into other texts, and so on. Everyone contributes to the organization of information by classifying it, creating metadata, which enriches the text/data by incorporating it into an “ecosystem of ideas” (Levy). This is the process Levy has termed “collective intelligence.” Considering the example I have given of the way our digital reality can lose its connection to any kind of objective truth thanks to new developments in technology, I cannot help but wonder – what kind of ecosystem are we creating here? When one can create, post, and share any distorted reality in the form of text, video, or sound, how is our “collective intelligence” being enriched?

Perhaps we have become too reliant on images and video to relay “truth.” Perhaps this is why, in recent years, the political “tell-all” book has become so important. This is a book written by someone close to a politically important figure (usually a behind-the-scenes person writing about a president) with the objective of revealing something shocking, scandalous, or counterintuitive about that figure, or simply portraying them from a different perspective. It is interesting to me that in this context especially, the printed book retains a certain kind of authority. It has an impact different from what the text would have had if the author had, say, published it on a blog; it has the power to affect that person’s legacy forever. “Fake news,” on the other hand (text in the form of articles, tweets, videos, etc.), can have lasting ramifications, but the text itself does not have to last. This suggests to me that there is something still valuable in the printed book, and that it might still have a lot to teach us.

For the field of literature and the humanities, this new video technology could mean that we will soon be able to have Shakespeare’s sonnets recited to us by a hologram of the bard himself. But considering the negative implications of this new digital technology, even from within the humanities it is difficult not to think about where one’s responsibilities lie. My hope is that by fully embracing the positive applications of this technology, we can be part of the solution – not only for purposes of teaching, research, and collaboration, but by leading by example with a heightened methodological and epistemic awareness.

https://www.vanityfair.com/news/2017/01/fake-news-technology

https://www.wnycstudios.org/story/breaking-news