What We Leave Behind.
Earlier this year, I read Nicholas Carr's book The Shallows for the first time. First published in 2010, it reflects on the growing impact of the internet, hyperlinks, and the accessibility of information. It has surprised me that, over the past few months, this is the book I keep coming back to, the one I keep referencing to make sense of the ever-shifting landscape of technology and what is to come. I have read more recent books on the evolution of machine learning and LLMs, like Mustafa Suleyman's 'The Coming Wave', watched keynotes from industry leaders, and worked to adapt these new tools into my own work in order to stay relevant as the sands shift from day to day beneath our industry. But as we stand in front of a wall of the world's collected knowledge on the internet, modern books seem to miss something that this book from 2010 encapsulated so well for me: what we are leaving behind.
I will say from the start that I spent an embarrassing amount of time debating whether or not to use ChatGPT to help me write an outline for this piece. I wanted to have a section for writing on my website not just to push out more content (I am well aware most of these posts have very low impact), but to have a space for me to reflect. Using ChatGPT to write these posts is completely counterproductive to that goal. But what about just an outline? Something to help me structure my thoughts. A framework to build within. But I kept coming back to a quote from The Shallows (the first of many I will share here): "...the tools we use to write, read, and otherwise manipulate information work on our minds even as our minds work with them." Maybe that is just part of the path humanity is on. After all, I am writing this on a laptop, and that quote is specifically about Nietzsche using a typewriter, so I am aware of the hypocrisy. However, I am writing this as a way to distill my own thoughts on this shifting technology, so using that same technology to frame my thinking on the piece seems not only counterintuitive but counterproductive. With that said, my apologies in advance for the wandering nature of the writing that follows, but I do believe there are some key thoughts buried in here that we would do well to reflect on as a society, depending on the world that we want to build for the generations that come next.
I guess as good a place as any to start would be with how the value of information is changing in relation to our access to it. Of course, this book was written about the growth of the internet, but I am guessing you will agree with me that the emergence of tools like ChatGPT, Claude, Gemini, and others accelerates this already troubling trend to exponential levels.
“Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.”
I like the imagery of that quote, but a quote on the next page perhaps illustrates the point a bit more clearly: "I should be reading a lot - only I don't. I skim. I scroll. I have very little patience for long, drawn-out, nuanced arguments, even though I accuse others of painting the world too simply." I know there are people who will land on both sides of the argument of whether or not this is a 'good thing', and I have had lots of discussions with people with whom I very much disagree. Some of them genuinely believe that we shouldn't need to think about subjects like history, literature, or art in detail because that information is readily available at the click of a button. The one that really stuck with me was when someone told me it was useless to spend the time reading Victor Hugo when he could have ChatGPT summarize it for him in seconds. I remember that moment vividly because of how completely it devastated me. I understand the appeal of the idea. After all, Hugo's work takes time to read, a lot of time. Time that seems ever more fleeting in the fast-paced world we live in. And after Hugo, there is Dumas. Then Montaigne. Dickens. Huxley. Tolstoy. Dostoevsky. Lermontov. Sartre. Camus. I think the point is clear. Even with all the time in the world, there is too much to take in. And that is only literature. There is architecture. Art. Film. How can we be expected to be that scuba diver, exploring the worlds captured in a Rothko painting, when we could just as easily have an LLM tell us the artist's purpose behind it? We are expected, almost forced, to be on the Jet Ski, zipping between everything, just repeating what we are told is the meaning behind it all. What we gain in our ability to find information quickly, we lose in our ability to understand it.
“The Net grants us instant access to a library of information unprecedented in its size and scope, and it makes it easy for us to sort through that library - to find, if not exactly what we were looking for, at least something sufficient for our immediate purposes. What the Net diminishes is [a] primary kind of knowledge: the ability to know, in depth, a subject for ourselves, to construct within our own minds the rich and idiosyncratic set of connections that give rise to a singular intelligence.”
Singular intelligence was the term that stuck with me when I read that section. It bounced around in my head for days. Right now, in this world before the wave crests, we are billions of people, each with our own singular intelligence. The biggest danger, or sadness, of externalizing our thinking to an LLM is that our different 'singular intelligences' become THE singular intelligence. One outside of our control. I think it is of the utmost importance that we not lose sight of the fact that all of these tools are made by companies with the aim of either generating profit or collecting data. LLMs are just machines that generate text from a cloud of training data, based on weights that are set by those companies. Those weights can be changed. If you are running an LLM locally, you can change them yourself. That can be very useful for an individual trying to get a personalized style out of a machine's responses, and even more useful for a company trying to subtly shift public perception and opinion. So what happens when we collectively stop reading Orwell or Huxley and depend on a company to tell us what their books are about? What happens when we stop learning history because we can just ask an LLM, and the owners of that model decide to obscure or change the information it provides?

I understand that, at some level, this control of information has existed for as long as mediums of distributing information have existed. But think for a second about the impact that radical thinkers have had throughout recent history. Let's take Marx and Engels as a case study. Two minds that have been demonized, praised, feared, and loved ever since their thoughts entered the public sphere. Wars have been fought, countries razed, others created, all because of the ideas contained on their pages. All of this took place because the information was impossible to control. Books were infectious. They were open to anyone who had the time to read the words contained on their pages. But over the last thirty years, that impact has been degrading - slowly at first, and now with accelerating speed. First came the internet and blogs. All of a sudden, anyone with an opinion could send their thoughts out into the world for everyone to read. Some with greater skill and reach than others, but still, the process of publication was democratized, and the high bar you once had to clear to get your voice heard dropped instantly. Then social media came and shifted everything to a much greater extent. Now you didn't need to write a blog, just a few words, and your ideas were contributing to the public discourse. More than that, as algorithms became more and more advanced, the information you were even able to see was shifting. Maybe you would not have seen Marx and Engels' work at all, or only served up alongside streams of information calling them demons, or saviors.

But the edge we stand on now is even more dangerous. With a few tweaks of the weights in their LLM, a company can erase any mention of certain books or authors. They can change how the model responds to questions about them. They can paint those ideas in whatever light suits them at that moment in time. The possibility for radical discourse, for challenging the status quo, for thinking about new possibilities... it all goes away when we decide to defer the burden of what is true to a program owned by a private entity.
I should probably point out that I don't think this shift is necessarily done with malice. The currents of innovation are strong, and sometimes we look at a problem to solve as just that - a problem - without considering its long-term impact (and, in the interest of complete fairness, I don't think we can ever know with certainty what those outcomes will be).
“The intellectual ethic of a technology is rarely recognized by its inventors. They are usually so intent on solving a particular problem or untangling some thorny scientific or engineering dilemma that they don't see the broader implications of their work.”
The thing that gives me pause, that makes me fearful of what is to come, lies not with the technology being created, but with the speed and willingness with which we are giving up our capacity for free thought. The more we use LLMs to tell us what art is supposed to be communicating, what the themes and meanings of great novels are, what the philosophers of the past were trying to tell us in their convoluted phrases, the more we lose our ability to form new perspectives of our own. And I have heard the response hundreds of times now: that we humans are only doing the same thing - taking in information and regurgitating it in a slightly different way - no different than ChatGPT. I think that is patently false, and quite a sad way to look at humanity. We are the sum of the different information we take in, yes. But we are more than that as well - we are the sum of the people we know, their habits, their inflections in speaking, the way they walk. We are influenced by emotion, irrational and rational. How we interpret a certain piece of information today is informed by how we slept, whether we are happy or sad, our age, our surroundings. LLMs are trained on vast amounts of data, but once they are trained, that is it. The model is made. This makes them of great use for objective tasks, but in thinking, reflecting, and creating, they will be forever stagnant.
I have read Descartes's Discourse around ten times in my life. The first time, I was 19 years old and living in Nashville. One of the most recent times, I was 28 and living in Copenhagen. My experience with that text was vastly different between those two readings. My level of maturity, my lived experience, my situation and environment, my state of mind - all of it influenced how I interpreted words written almost four centuries before. I know there will be those who disagree with me on this, but I believe that even if an author has a specific goal in mind for their text, it can still deliver a message that is varied and different for each individual. That wide reach, that wide interpretation, goes away when there is one central understanding of what has been written. But in my mind it goes beyond individual meaning; it extends to who we are as individuals and who we become as a group. When differences in interpretation and learning are swept away, we lose the ability to progress, to challenge, and to develop new areas of our shared culture.
“The offloading of memory to external data banks doesn't just threaten the depth and distinctiveness of the self. It threatens the depth and distinctiveness of the culture we all share.”
Culture is a tricky thing to talk about, as it exists both as an abstract concept and a clearly understood one. What is absolutely clear is that it is central to what binds us. Local cultures, subcultures based on shared interests, regional and even global cultures all exist. But culture is something that evolves, that is renewed with each new generation. The thing about AI that truly makes me sad is not that I fear it will erase our culture. Of course, that is a ridiculous premise. But it will absolutely change it. Unify it. Smooth out the rough edges. Defer to the median.
“Culture is more than the aggregate of what Google describes as "the world's information." It's more than what can be reduced to binary code and uploaded onto the Net. To remain vital, culture must be renewed in the minds of the members of every generation. Outsource memory, and culture withers.”
More than the concern over a withering culture, I approach this reflection from a very personal worry - the loss of struggle and adversity. I recently became a parent, and as a consequence of my personality, my life immediately filled with worry. Worry about the state of the world. Worry about my own job and financial security for my family. Worry about how best to teach my son to grow into a good man. But more than anything else recently, worry about the kind of world he will grow up in, and whether there will still be space in it for failure, for exploration. It seems to be common knowledge among the parents I have talked to that you have to let your children fail, to make mistakes and learn from them. That same failure is seemingly deemed 'unnecessary' in this new world we are building. Why try to paint a picture if you can have Midjourney generate a perfect one for you in a few clicks? Why try to write an essay or a letter if ChatGPT can generate one for you quickly and cleanly? If we are currently telling ourselves that 'AI art' is true art, that 'AI music' is really music, and that 'AI stories' are valuable because we can't expect everyone to take the time to learn to paint, or play an instrument, or write... what are we going to be teaching our children?
“How sad it would be, particularly when it comes to the nurturing of our children's minds, if we were to accept without question the idea that "human elements" are outmoded and dispensable.”
Those human elements - the ones that break from the norm, that break the rules and conventions to make something most regard as failure, only to later emerge as masters - don't exist in a fully AI world. An AI doesn't come up with Finnegans Wake. It doesn't make the jump from Renaissance art to Impressionism. It can replicate all of those things, sure, but only because a human element dared to explore them, dared to create them first. And sure, maybe it is overly defeatist and overblown to say that a world without artists will be here tomorrow; I know it won't. But it is important to recognize that the more we devalue the work of artists, writers, and musicians, the more we take away their ability to make a living from their art. And most importantly, if we decide to fill our world to the absolute brim with generated fluff and consume nothing but even, safe, sterile art, the world in which truly groundbreaking art can thrive will cease to exist.
“In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That's the essence of Kubrick's dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.”
LLMs are magnificent tools - of course they are. They can help with a truly staggering number of tasks with unprecedented efficiency. But it is important to remember that “Every tool imposes limitations even as it opens possibilities.”
Before we commit as a society to handing over our understanding of literature, our creation of art, and our ability to parse and comprehend information to a piece of software, I think we would do well to have a full understanding of what those limitations are. No technology is purely good or evil; it is agnostic. Indifferent. But as we can now look back and understand about social media, sometimes the consequences are so well masked by the excitement of the convenience that they have a chance to take root and do their damage before we ever recognize the danger. And as these technologies spread faster and become more widely available, their impacts will only be multiplied.
When we constrict our capacity for reasoning and recall, or transfer those skills to a machine or a corporation, we sacrifice the ability to turn information into knowledge. We get the data but lose the meaning.
We get the data but lose the meaning. That is what is at stake. That is what we leave behind.