Why AI Is A Philosophical Rupture
The symbiosis of humans and technology portends a new AIxial Age.

Tobias Rees is the founder of limn, an R&D studio located at the intersection of philosophy, art and technology. He is also a senior fellow of Schmidt Sciences' AI2050 initiative and a senior visiting fellow at Google.

Tobias Rees, founder of an AI studio located at the intersection of philosophy, art and technology, sat down with Noema Editor-in-Chief Nathan Gardels to discuss the philosophical significance of generative AI.

Nathan Gardels: What remains unclear to us humans is the nature of the machine intelligence we have created through AI and how it changes our own understanding of ourselves. What is your perspective as a philosopher who has contemplated this issue not from within the ivory tower, but "in the wild," in the engineering labs at Google and elsewhere?

Tobias Rees: AI profoundly challenges how we have understood ourselves.

Why do I think so?

We humans live by a large number of conceptual presuppositions. We may not always be aware of them — and yet they are there and shape how we think and understand ourselves and the world around us. Collectively, they are the logical grid or architecture that underlies our lives.

What makes AI such a profound philosophical event is that it defies many of the most fundamental, most taken-for-granted concepts — or philosophies — that have defined the modern period and that most humans still mostly live by. It literally renders them insufficient, thereby marking a deep caesura.

Let me give a concrete example. One of the most fundamental assumptions of the modern period has been that there is a clear-cut distinction between us humans and machines.

Here, humans: living organisms, open and evolving, beings that are equipped with intelligence and, thus, with interiority.

There, machines: lifeless, mechanical things; closed, determined and deterministic systems devoid of intelligence and interiority.

This distinction, which first surfaced in the 1630s, was constitutive of the modern notion of what it is to be human. For example, almost the entire vocabulary that was invented between the 17th and 19th centuries to capture what it truly is to be human was grounded in the human/intelligence versus machine/mechanism distinction.

Agency, art, creativity, consciousness, culture, existence, freedom, history, knowledge, language, morals, play, politics, society, subjectivity, truth, understanding. All of these concepts were introduced with the explicit purpose of providing us with an understanding of what is truly unique about human potential, a uniqueness that was grounded in the belief that intelligence is what lifts us above everything else — and that everything else ultimately can be sufficiently described as a closed, determined mechanical system.

The human-machine distinction provided modern humans with a scaffold for how to understand themselves and the world around them. The philosophical significance of AIs — of built, technical systems that are intelligent — is that they break this scaffold.

What that means is that an epoch that was stable for almost 400 years comes — or appears to come — to an end.

Poetically put, it is a bit as if AI releases us and the world from the understanding of ourselves and the world that we had. It leaves us in the open.

I am adamant that those who build AI understand the philosophical stakes of AI. That is why I became, as you put it, a philosopher in the wild.

Gardels: You say that AI is intelligent. But many people doubt that AI is "really" intelligent. They view it as just another tool like all previous human-invented technologies.
Rees: In my experience, this question is almost always grounded in a defensive impulse. A sometimes angry, sometimes anxious effort to hold on to or to re-inscribe the old distinctions. I think of it as a nostalgia for human exceptionalism, that is, a longing for a time when we humans thought there was only one form of intelligence: us.

AI teaches us that this is not so. And not just AI, of course. Over the last two decades or so, the concept of intelligence has multiplied. We now know that there are lots of other kinds of intelligence: from bacteria to octopuses, from Earth systems to the spiral arms of galaxies. We are an entry in a series. And so is AI.

To argue that these other things are not "really" intelligent because their intelligence differs from ours is a bit silly. That would be like one species of bird, say pelicans, insisting that only pelicans "really" know how to fly.

It is best if we get rid of the "really" and simply acknowledge that AI is intelligent, if in ways slightly different from us.

Gardels: What is intelligence?

Rees: Today, we appear to know that there are some baseline qualities to intelligence, such as learning from experience, logical understanding and the capability to abstract from what one has learned to solve novel situations.

AI systems have all these qualities. They learn, they logically understand and they form abstractions that allow them to navigate new situations.

However, what experience or learning or understanding or abstraction means for an AI system and for us humans is not quite the same. That is why I suggested that AI is intelligent in ways slightly different from us.

Gardels: AI may be another kind of intelligence, but can we say it is, or can be, smarter than us?

Rees: For me, the question is not necessarily whether or not AI is smarter than us, but whether or not our different intelligences can be complementary. Can we be smarter together?

Let me sketch some of the differences I am seeing.

AI can operate on scales — both micro and macro — that are beyond human logical comprehension and capability.

For example, AI has much more information available than we do, and it can access and work through this information faster than we can. It also can discover logical structures in data — patterns — where we see nothing.

Perhaps one must pause for a moment to recognize how extraordinary this is.

AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access. How amazing is this? There are already many examples of this. They range from discovering new moves in games like Go or chess to discovering how proteins fold to understanding whole Earth systems.

Given these more-than-human qualities, one could say that AI is smarter than us.

However, human smartness is not reducible to the kind of intelligence or smartness AI has. It has additional dimensions, ones that AI seems not to have.

The perhaps most important of these additional dimensions is our individual need to live a human life.

What does that mean? At the very least it means that we humans navigate the outside world in terms of our inside worlds. We must orient ourselves by way of thinking, in terms of a thinking self. These thinking selves must understand, make sense of, and be struck by, insights.

No matter how smart AI is, it cannot be smart for me. It can provide me with information, it can even engage me in a thought process, but I still need to orient myself in terms of my thinking. I still need to have my own experiences and my own insights, insights that enable me to live my life.
That said, AI, with the specific non-human smartness it has, can be incredibly helpful when it comes to leading a human life.

The most powerful example I can think of is that it can make the self visible to itself in ways we humans cannot.

Imagine an on-device AI system — an AI model that exists only on your devices and is not connected to the internet — that has access to all your data. Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.

I stress on-device because it matters that no third parties have access to your data.

Such an AI system can make me visible to myself in ways neither I nor any other human can. It literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns, and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.

Philosophically put, AI can help me transform myself into an "object of thought" to which I can relate and on which I can work.

The work of the self on the self has formed the core of what Greek philosophers called meletē and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher's dream. It could make us humans visible to ourselves in ways no human interlocutor can, from outside of us, free from conversational narcissism.

You see, there can be incredible beauty in the overlap and the difference between our intelligence and that of AI.

Ultimately, I do not think of AI as a self-enclosed, autonomous entity that is in competition with us. Rather, I think of it as a relation.

Gardels: What is specifically new that distinguishes deep learning-based AI systems from the old human/machine dichotomy?

Rees: The kind of AI that ruled from the 1950s to the early 2000s was an attempt to think about the human from within the vocabulary provided by machines. It was an explicit, self-conscious attempt by engineers to explain all things human from within the conceptual space of the possibility of machines.

It was called "symbolic AI" because the basic idea behind these systems was that we could store knowledge in mathematical symbols and then equip computers with rules for how to derive relevant answers from those symbolic representations.

Some philosophers, most famously Herbert Dreyfus and John Searle, were very much provoked by this. They set out to defend the idea that humans are more than machines, more than rule-based algorithms.

But the kind of AI that has risen to prominence since the early 2010s, so-called deep learning systems or deep neural networks, is of an altogether different kind.

Symbolic AI systems, like all prior machines, were closed, determined systems. That means, first, that they were limited in what they could do by the rules we gave them. When they encountered a situation that was not covered by the rules, they failed. Let's say they had no adaptive, no learning behavior. And it means as well that what they could do was entirely reducible to the engineers who built them. They could, ultimately, only do things we had explicitly instructed them to do. That is, they had no agency, no agentive capabilities of their own. In short, they were tools.
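To make the brittleness of symbolic AI concrete, here is a deliberately toy Python sketch (an editorial illustration, not an example from the conversation): a rule-based system can only answer what its hand-written rules cover, and it fails outright on anything outside them.

```python
# A toy "symbolic AI": all knowledge lives in explicit, hand-written rules.
RULES = {
    ("bird", "can_fly"): True,
    ("penguin", "can_fly"): False,
}

def symbolic_answer(entity: str, question: str) -> bool:
    """Answer by rule lookup alone: if no rule covers the case, there is
    no answer at all -- the system cannot adapt or generalize."""
    try:
        return RULES[(entity, question)]
    except KeyError:
        raise ValueError(f"no rule covers ({entity}, {question})") from None

for entity in ["bird", "penguin", "ostrich"]:
    try:
        print(entity, "->", symbolic_answer(entity, "can_fly"))
    except ValueError as err:
        print(entity, "->", err)  # hard failure on the uncovered case
```

A learning system, by contrast, is not limited to cases its builders enumerated: it generalizes from examples, which is exactly the difference Rees turns to next.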
With deep learning systems, this is different. We do not give them their knowledge. We do not program them. Rather, they learn on their own, for themselves, and, based on what they have learned, they can navigate situations or answer questions they have never seen before. That is, they are no longer closed, deterministic systems.

Instead they have a sort of openness and a sort of agentive behavior, a deliberation or decision-making space, that no technical system before them ever had.

Some people say AI has "only" pattern recognition. But I think pattern recognition is actually a form of discovering the logical structure of things. Roughly, when you have a student who identifies the logical principles that underlie data and who can answer questions based on these logical principles, wouldn't you call that understanding?

In fact, one can push that a step further and say that AI systems appear to be capable of distinguishing truths from falsehoods. That's because truth is positively correlated with a consistent logical structure. Errors, so to speak, are all unique or different, while the truth is not. And what we see in AI models is that they can distinguish between statements that conform to the patterns that they discover and statements that don't.

So in that sense, AI systems have a nascent sense of truth.
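This idea of conformity to discovered patterns can be made concrete with a toy sketch (illustrative only; real LLMs learn probabilities over high-dimensional representations, not raw counts): a trivial bigram "model" records which word pairs occur in its training text and then scores new statements by how well they fit those recorded patterns.

```python
from collections import Counter

# Toy training text with a consistent "logical structure".
corpus = "cats meow . dogs bark . cats meow . dogs bark".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # which word pairs were observed

def conformity(sentence: str) -> float:
    """Fraction of the sentence's word pairs seen in training: a crude
    stand-in for 'conforms to the patterns the model discovered'."""
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(1 for p in pairs if p in bigrams) / len(pairs)

print(conformity("cats meow"))  # 1.0 -- fits the learned pattern
print(conformity("cats bark"))  # 0.0 -- deviates from it
```

The pattern-conforming statement scores high and the deviant one stands out, which is the statistical kernel of the "nascent sense of truth" Rees describes.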
Simply put, deep learning systems have qualities that, up until recently, were considered possible only for living organisms in general and for humans in particular.

Today's AI systems have qualities of both organisms and machines — and, thereby, are reducible to neither. They exist in between the old distinctions and show that the either-or logic that organized our understanding of reality — either human or machine, either alive or not, either natural or artificial, either being or thing — is profoundly insufficient.

Insofar as AI escapes these binary distinctions, it leads us into a terrain for which we have no words.

We could say it opens up the world for us. It makes reality visible to us in ways we have never seen before. It shows us that we can understand and experience reality and ourselves in ways that lie outside of the logical distinctions that organized the modern period.

In some sense, we can see as if for the first time.

Gardels: So, deep-learning systems are not just tools, but agents with a degree of autonomy?

Rees: This question is a good example to showcase that AI is indeed philosophically new.

We used to think that agency has two prerequisites: being alive and having interiority, that is, a sense of self or consciousness. Now, what we can learn from AI systems is that this is apparently not the case. There are things that have agency but that are not alive and that do not have consciousness or a mind, at least not in the way we have previously understood these terms.

This insight, this decoupling of agency from life and from interiority, is a powerful invitation to see the world — and ourselves — differently.

For example, is what is true for agency — that it doesn't need life and interiority — also true for things like intelligence, creativity or language? And how would we classify or categorize things in the world differently if this were the case?

In her essay in Noema, the astrophysicist Sara Walker said that "we need to get past our binary categorization of all things as either life or not."

What interests me most is rethinking the concepts we have inherited from the modern period from the perspective of the in-betweenness made visible to us by AI.

What is creativity from the perspective of the in-betweenness of AI? What language? What mind?

Gardels: Karl Jaspers was best known for his study of the so-called Axial Age, when all the great religions and philosophies were born in relative simultaneity over two millennia ago — Confucianism in China, the Upanishads and Buddhism in India, Homer's Greece and the Hebrew prophets. Jaspers saw these civilizations arising in the long wake of what he called "the first Promethean Age" of man's appropriation of fire and earliest inventions.

For Charles Taylor, the first Axial Age resulted from the "great dis-embedding" of the person from isolated communities and their natural environment, where circumscribed awareness had been limited to the sustenance and survival of the tribe guided by oral narrative myth. The lifting out from a closed-off world, according to Taylor, was enabled by the arrival of written language. This attainment of symbolic competency capacitated an "interiority of reflection" based on abiding texts that created a platform for shared meanings beyond one's immediate circumstances and local narratives.

Long story very short, this "transcendence" in turn led to the possibility of general philosophies, monotheistic religions and broad-based ethical systems. The critical self-distancing element of dis-embedded reflection further evolved into what the sociologist Robert Bellah called "theoretic culture," to scientific discovery and the Enlightenment that spawned modernity. For Bellah, "Plato completed the transition to the Axial Age," with the idea of theoria that "enables the mind to 'view' the great and the small in themselves abstracted from their concrete manifestations."

The big question is whether the new level of symbolic competence reached by AI will play a similar role in fostering a "New AIxial Age" as written language did the first time around, when it gave rise to new philosophies, ethical systems and religions.

Rees: I am not sure today's AI systems have what the modern period came to call symbolic competence.

That is related to what we've already discussed.

There was, ever since John Locke, the idea that we humans have a mind in which we store experiences in the form of symbols or symbolic representations, and that we then derive answers from these symbols.

Let's say this conceptualization was understood throughout the modern period to be the basic infrastructure of intelligence.

In the early 20th century, philosophers like Ernst Cassirer gave this a twist. He suggested that the key to understanding what it is to be human is to see that we humans invent symbols or meaning, and that symbol-making or meaning-making is what sets us apart as a species from everything else.

Deep learning in general, and generative AI in particular, have broken with this human-centric concept of intelligence and replaced it with something else: the idea that intelligence is pretty much two things, learning and reasoning.

Essentially, learning means the capacity to discover abstract logical principles that organize the things we want to learn. Whether this is an actual data set or the learning experiences that we humans make, there is no difference. Call it logical understanding.

The second defining feature of intelligence is the capacity to continuously and steadily refine and update these abstract logical principles, these understandings, and to apply them — by way of reasoning — to situations we live in and that we must navigate or solve.

Deep learning systems are excellent at the first part — but not so much at the second. Basically, once they are trained, they cannot revise the things they have learned. They can only infer.
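A minimal sketch of this asymmetry (illustrative only, with hand-rolled numpy rather than any real training stack): during training, every error revises the model's weights; at inference time, the same frozen weights are merely applied to new inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # the model's "knowledge": a weight vector

def train_step(x, y, lr=0.1):
    """Learning: the error signal revises what the model knows."""
    global w
    err = w @ x - y
    w -= lr * err * x  # the weights change on every training step

def infer(x):
    """Inference: frozen knowledge is applied; nothing is revised."""
    return w @ x

x, y = np.array([1.0, 2.0, 3.0]), 4.0
for _ in range(50):
    train_step(x, y)      # the model adapts during training
print(round(infer(x), 3))                          # ~4.0: it fit the example
print(round(infer(np.array([3.0, 2.0, 1.0])), 3))  # answers a new input too
```

Once the loop stops, `w` never changes again, however many new inputs `infer` sees: the trained-versus-inferring distinction in miniature.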
Be that as it may, there is nothing much symbolic here. At least not in the classical sense of the term.

I am emphasizing this absence of the symbolic because it is a beautiful way to show that deep learning has led to a pretty powerful philosophical rupture: Implicit in the new concept of intelligence is a radically different ontological understanding of what it is to be human, indeed, of what reality is or of how it is structured and organized.

Understanding this rupture with the older concept of intelligence and ontology of the human and the world is key, I think, to understanding your actual question: Are we entering what you call a new AIxial Age, where AI will amount to something similar to what writing amounted to roughly 3,000 to 2,000 years ago?

If we are lucky, the answer is yes. The potential is absolutely there.

But let me try to articulate what I think the challenge is so we truly can make this possible.

Let's take the correlation between the emergence of writing, the birth of a vocabulary of interiority, and the rise of abstract or theoretical thought as our starting point.

I will do what I tried to do in my prior responses: reflect on the historicity of the concepts we live by, point out how recent they are, that there is nothing timeless or universal about them, and then ask if AI challenges and changes them.

There is a beautiful book by Bruno Snell called "Die Entdeckung des Geistes" or, in an excellent English translation, "The Discovery of the Mind."

The work's central thesis is that what we today call "mind," "consciousness" and "inner life" is not a given. It is not something that has always existed or was always experienced. Instead, it is a concept that only gradually emerged.

In beautiful, captivating prose, Snell traces the earliest instances of the birth of what I think of as "a vocabulary of interiority."

For example, he shows that in Homer's works, there is no general, abstract concept of "mind" or "soul." Instead, there is a whole flurry of terms that are very difficult to translate: for example, thymos, which is perhaps best articulated as a passion that overcomes and consumes one; noos, which originally meant something like sensory awareness; and psyche, by which Homer and his contemporaries most often meant "breath," or that which animates, but not what we would call psyche today.

Simply put, there is absolutely no vocabulary of interiority in Homer. Or in Hesiod.

This changes at the turn from Archaic to Classical Greece. We begin to see the birth of a vocabulary of interiority and increasingly sophisticated ways of describing inner experience. The most important reference here is probably Sappho. Her poetry is among the very first explorations of what we today would call subjective experience and individual emotion.

I do not want to derail us by retelling the whole of Snell's book. Rather, what interests me is to convey a sense of the possibility that we discussed earlier: We humans have not always experienced ourselves the way we do today. Every form of experience and thinking or understanding is conceptually mediated.
This is also true, perhaps particularly so, for the idea of interiority and inner life.

Snell's book is so wonderful because he shows the discontinuous, gradual emergence of new concepts that amount to the idea that there is something like an interiority and that this interiority — a kind of inner landscape — is where a single, self-identical "I" is located.

Now, what is crucial is that the introduction of writing, which probably began right at the time of Homer, was key for the emergence of a conceptual vocabulary of interiority.

Snell touches on this only in passing, but later works, especially by Jack Goody, Eric Havelock and Walter Ong, have attended to this explicitly, and all have more or less come to the same conclusion: The practice of writing created new possibilities for analytical thinking that led to increasingly abstract, classificatory nouns and to a form of systematic search for and production of knowledge that was not seen anywhere in human history before.

These authors also made clear that the only unfortunate thing about Snell's work is his use of the term "discovery" in his title. The mind was not discovered. It was constituted, invented, if you will. That is, it could have been constituted differently. And that is what Goody, Ong and others have amply shown. What mind is, what interiority is, is different in other places.

Let me summarize this simply by saying that the technology of writing had absolutely dramatic consequences for what it is to be human, for how we experience and understand ourselves as humans. The two perhaps most important of these consequences were the systematic emergence of self-reflection and of abstract thought.

Can AI play as transformative a role in what it means to be human as writing did?

Can AI mark the beginning of a whole new, perhaps radically discontinuous chapter for what it is to have a mind, to have interiority, to think? Can it help us think thoughts that are so new and so different that the ways we have understood ourselves up until now become obsolete?

Oh yes, it can! AI absolutely has the potential to be such a major philosophical event.

The perhaps most beautiful, most fascinating and eye-opening way to show this potential of AI is what engineers call "latent space representations."

When a large language model learns, it gradually distills ever more abstract logical principles from the data it is provided with.

It is best to think of this process as roughly similar to a structuralist analysis: The AI identifies the logical structure that organizes — that literally underlies — the totality of the data it is trained on and stores or memorizes it in the form of concepts. The way it does this is that it discovers the logic of the relations between different elements of the data. So, in text, roughly, that would be the words: What is the closeness between the different words in the training data?

If you will, an LLM discovers the many different degrees of relations between words.
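A toy sketch may make this tangible (the three-dimensional vectors here are hand-made and hypothetical; real learned embeddings have thousands of dimensions): each word is a point in the space, and the angle or distance between points encodes how closely the words are related in the training data.

```python
import numpy as np

# Hypothetical hand-made "embeddings"; in a real LLM these vectors
# are learned from data and have thousands of dimensions, not three.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Closeness in the latent space: near 1.0 means closely related."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))  # high: related words sit close
print(cosine(emb["king"], emb["apple"]))  # low: unrelated words sit far apart
```

The "map" Rees describes next is exactly this kind of geometry, only vastly higher-dimensional and discovered by the model itself rather than written down by hand.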
Fascinatingly, what emerges from this learning process is a high-dimensional, relational space that engineers call latent — in the sense of hidden — space.

First, this means that something grows on the inside of an LLM during training: a hidden map of the logic of relations between words that the AI successively discovers. I say on the inside because we humans cannot observe this map from the outside.

The second thing it means is that this map is not just a list but a spatial arrangement.

Imagine a three-dimensional point cloud where each point stands for a word and where the distance between points reflects how close or far words are from one another in the training data.

It is just — and this is the third thing — that this spatial map doesn't have only the three dimensions — length, width, depth — our conscious human mind is comfortable operating in. Instead, it has many, many more dimensions. Tens of thousands, and with the latest models, perhaps millions.

That is, the understanding an LLM has formed is a spatial architecture. It has a geometry that literally determines what, for an LLM, is thinkable.

It is literally the logical condition of possibility — the a priori — of the LLM.

For all we know, human brains also create latent space representations. The neurons in our brain work in a very similar fashion to how neurons work in an artificial neural network.

Yet, despite this similarity, it appears that the latent space representations that a human brain produces and the latent space representations that an AI can produce are different from one another.

The two latent space representations likely overlap, but they also differ significantly in kind and quality because of AI's far greater dimensional scope.

Now imagine we could build AI so that the logic of possibility that defines the human brain gains extra latent spaces.

Imagine we built AI to add to our human mind logical spaces of possibility that we humans could travel but not produce on our own. The consequence would be that we humans could discover truths and think things that no human could have ever thought before AI. In this case, no one knows where the human mind might end and AI might begin.

We could take any theme and approach it from whole new perspectives. Imagine what this kind of co-cogitation between humans and AI would do to our current concept of interiority! Can you imagine what it would do to how we understand terms like mind, thought, having an idea or being creative?

As I outline this vision, I can hear the critical voices. They tell me that I make AI sound like a philosophical project while the companies building AI have very different motives.

I am entirely aware that I am giving AI philosophical and poetic dignity. And I do so consciously, because I think AI has the potential to be an extraordinary philosophical event. It is our task as philosophers, artists, poets, writers and humanists to render this potential visible and relevant.

All this certainly has the makings of a new pivotal age.

Gardels: To grasp how deep learning through what AI scientists call backpropagation — the feeding of error signals back through an artificial neural network so that its connections adjust — could lead to interiority and intention, it might be useful to look at an analogy from the materialist view of biology about how consciousness arises.
The core issue here is whether disembodied intelligence can mimic embodied intelligence through deep learning.

Where does AI depart from, and where is it similar to, the neural Darwinism described here by Gerald Edelman, the Nobel Prize-winning neuroscientist? What Edelman refers to as "reentrant interaction" appears quite similar to "backpropagation."

According to Edelman, "Competition for advantage in the environment enhances the spread and strength of certain synapses, or neural connections, according to the 'value' previously decided by evolutionary survival. The amount of variance in this neural circuitry is very large. Certain circuits get selected over others because they fit better with whatever is being presented by the environment. In response to an enormously complex constellation of signals, the system is self-organizing according to Darwin's population principle. It is the activity of this vast web of networks that entails consciousness by means of what we call 'reentrant interactions' that help to organize 'reality' into patterns.

"The thalamocortical networks were selected during evolution because they provided humans with the ability to make higher-order discriminations and adapt in a superior way to their environment. Such higher-order discriminations confer the ability to imagine the future, to explicitly recall the past and to be conscious of being conscious.

"Because each loop reaches closure by completing its circuit through the varying paths from the thalamus to the cortex and back, the brain can 'fill in' and provide knowledge beyond that which you immediately hear, see or smell. The resulting discriminations are known in philosophy as qualia. These discriminations account for the intangible awareness of mood, and they define the greenness of green and the warmness of warmth. Together, qualia make up what we call consciousness."

Rees: There are neural processes happening in AI systems that are similar to — but not the same as — those in humans.

It seems likely that there is some form of backpropagation in the brain. And we just talked about the fact that both biological neural networks and artificial neural networks build latent space representations. And there is more.

But I do not think that makes them have interiority or intentionality in the way we have come to understand these terms.

In fact, I think the philosophical significance of AI is that it invites us to reconsider the way we previously understood these terms.

And the close connection between backpropagation and reentry that you observe is a great example of that.

The person who did perhaps more than anyone to make the concept of backpropagation accessible and widely known was David Rumelhart, a very influential psychologist and cognitive scientist who, like Edelman, lived and worked in San Diego. Both Rumelhart and Edelman were key figures in the connectionist school.
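For readers who want to see what backpropagation concretely is, here is a minimal numpy sketch (an editorial illustration, not Rumelhart's formulation or any production system): a tiny two-layer network runs forward, measures its error, and propagates that error backward through each layer to adjust the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2))  # input -> hidden weights
W2 = rng.normal(size=(1, 4))  # hidden -> output weights

def backprop_step(x, y, lr=0.1):
    """One backpropagation step: forward pass, error, backward pass."""
    global W1, W2
    h = np.tanh(W1 @ x)        # forward: hidden activations
    y_hat = W2 @ h             # forward: prediction
    err = y_hat - y            # how wrong the prediction was
    grad_W2 = np.outer(err, h)                  # error assigned to W2
    grad_h = W2.T @ err                         # error sent back to the hidden layer
    grad_W1 = np.outer(grad_h * (1 - h**2), x)  # ...through tanh's derivative
    W2 -= lr * grad_W2         # adjust weights against their gradients
    W1 -= lr * grad_W1
    return float((err ** 2).sum())

x, y = np.array([0.5, -1.0]), np.array([1.0])
for step in range(200):
    loss = backprop_step(x, y)
print(f"final squared error: {loss:.6f}")  # shrinks toward 0 as weights adapt
```

Nothing here involves reentrant loops in Edelman's sense; the structural kinship the two vocabularies point to is simply that, in both cases, a signal fed back through the network reshapes its connections.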
I mention the two of them together because I think the theoretical impulse behind reentry and backpropagation is almost identical: the effort to develop a conceptual vocabulary that allows us to undifferentiate biological and artificial neural networks, in order to understand the brain better and in order to build better neural networks.

Some have suggested that the work of the connectionists was an attempt to think about the brain in terms of computers — but one could just as well say it was an attempt to think about computers or AI in terms of biology.

At base, what matters was the invention of a vocabulary that didn't need to make distinctions.

There is a space in the middle, an overlap.

It is very difficult to overemphasize how powerful this kind of conceptual work has been over the last 40 years.

Arguably, the work of people like Rumelhart and Edelman has led to a concept of intelligence that can be described in a substrate-independent manner. And these concepts are not just theoretical concepts but concrete engineering possibilities.

Does this mean that human brains and AI are the same thing?

Of course not. Are birds, planes and drones all the same thing? No, but they all make use of the general laws of aerodynamics. And the same may be true for brains and AI. The material infrastructure of intelligence is very different — but some of the principles that organize these infrastructures may be very similar.

In some instances, we likely will want to build AI systems similar to human brains. But in many cases, I presume, we will not. What makes AI attractive, in my thinking, is that we can build intelligent systems that do not yet exist — but that are perfectly possible.

I often think of AI as a kind of very early-stage experimental embryology. Indeed, I often think that AI is doing for intelligence what synthetic biology did for nature. Meaning, synthetic biology transformed nature into a vast field of possibility. The number of things that exist in nature is minuscule compared to the things that could exist in nature. In fact, many more things have existed in the course of evolution than there are now, and there is no reason why we can't combine strands of DNA and make new things. Synthetic biology is the field of practice that can bring these possible things into existence.

The same is true for AI and intelligence. Today, intelligence is no longer defined by a single or a few instances of existing intelligences but by the very many intelligent things that could exist.

Gardels: Back in the 1930s, much of philosophy from Heidegger to Carl Schmitt was against an emergent technological system that alienated humans from "being." As Schmitt put it back then, "technical thinking is foreign to all social traditions; the machine has no tradition. One of Karl Marx's seminal sociological discoveries is that technology is the true revolutionary principle, besides which all revolutions based on natural law are antiquated forms of recreation. A society built exclusively on progressive technology would thus be nothing but revolutionary; it would soon destroy itself and its technology." As Marx put it, "all that is solid melts into air."

Does the nature of AI make Schmitt's perspective obsolete, or is it simply a fulfillment of his perspective?

Rees: I think the answer — and I take that to be very good news — is yes, it makes Schmitt's perspective obsolete.

Let me first say something about Schmitt.
He was essentially apocalyptic in his thinking.

Like all apocalyptic thinkers, he had a more or less definite ontological, and in his case also religious, worldview. Everything in his world had a definite, metaphysical meaning. And he thought the modern, liberal world, the world of the Enlightenment, was out to destroy the timeless, ultimately divine order of things. What is more, he thought that when this happened, all hell would break loose, and the end of the world would begin to unfold.

The lines that you quote illustrate this. On the one hand, the modern Enlightenment period: the factory, technology, substancelessness, the relativizing quality of money, etc. And, on the other hand, the social: that is, racially defined national traditions, images and symbols.

Schmitt was worried that the liberal order would de-substantize the world. Everything would become relative. And at least if we go by his writings, he thought that Jews were one of the key driving forces of this de-substantification of the world. Famously, Schmitt was a rabid antisemite.

He was so worried about the end of the world that he aligned himself with Hitler and the Nazis and their agendas.

From today's perspective, of course, it is obvious that the ones who embraced modern technology to de-substantize humans, to deprive them of their humanity and to murder them on an industrial scale, were the Nazis.

It is difficult to suppress a comment on Heidegger here, who sought to "defend being against technology." That said, I think there are important differences between the two.

But let me go to the second part of my reply, why I think AI renders Schmitt's world obsolete.

AI has proven that the either-or logic at the core of Schmitt's thinking doesn't hold. One example of this is provided by Schmitt's curious appropriation of Marx.

Famously, Marx described the rise of industry enabled by the steam engine as a dehumanizing event. Before capitalists discovered how they could use the steam engine to fabricate goods, most goods were made in artisanal workshops. Maybe these workshops were harsh places. But, or so Marx suggests, they were also places of human dignity and virtuosity.

Why? Well, because at the center of these workshops were humans who used tools. As Marx saw it, tools are nothing in themselves. What one can do with a tool depends entirely on the imagination and the virtuosity of the human who uses it.

With the steam engine, everything changed. It gave rise to factories in which goods were made by machines rather than by artisans. However, the machines were not entirely autonomous. They needed humans to assist them. That is, what the machines needed were not artisans. What they needed was not human imagination and virtuosity. On the contrary, what was needed were humans who could function as extensions of the machine. That made these humans mindless and reduced them to mere machines.

That is why Marx described the machine as the "other" of the human and the factory as the place where humans are deprived of their own humanity.

Schmitt appropriated this for his own argument, to juxtapose his kind of substance thinking with the modern, technical world.
The net outcome is that you now have a juxtaposition of timeless, substantive, metaphysical truth on the one hand — and, on the other, the modern world of machines, of technology, of functionality, of the relativity of values, of substance-less humans.

Hence, technology, for Schmitt, comes into view as an unnatural violence against the metaphysically timeless and true.

Schmitt's distinction was most certainly not timeless but intrinsic to the modern period and deeply indebted to its paradigm of the new machine versus the old human.

The deep-learning-based AI systems we have today defy and escape the either-or distinction of Schmitt — or of Marx and of Heidegger and all those who come after them.

AI clearly and beautifully shows us that there is a whole world in between these distinctions. A world of things, of which AI is just one, that have some qualities of intelligence and some qualities of machine — and that are reducible to neither. Things that are at once natural and built.

AI invites us to rethink ourselves and the world from within this in-between.

Let me say that I understand the wish to render human life meaningful. To render thought and intellectual insight critical and, so too, art, creativity, discovery, science and community. I totally get it and share it.

But I think the suggestion that all these things are on the one side, and AI and those who build it are on the other, is somewhat surprising and unfortunate.

A critical ethos grounded in this distinction reproduces the world it says it is against.

The alternative to being against AI is to enter AI and try to show what it could be. We need more in-between people. If my suggestion that AI is an epochal rupture is only modestly accurate, then I don't really see what the alternative is.

Gardels: I'm wondering if there is a correspondence between your "in-betweenness" point and Blaise Agüera y Arcas' idea that evolution advances not only by natural selection but through "symbiogenesis" — the mutual transformation that conjoins separate entities into one interdependent organism through the transfer of new information, for example, DNA fragments carried by bacteria that are "copy and pasted" into the cells they penetrate. What results is not either/or, but something new created by symbiosis.

Rees: I believe Blaise, like me, was influenced by an essay the American computer scientist Joseph Licklider published in 1960 called "Man-Computer Symbiosis."

This is how the essay begins:

"The fig tree is pollinated only by the insect Blastophaga grossorun. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative 'living together in intimate association, or even close union, of two dissimilar organisms' is called 'symbiosis.'"

Licklider goes on: "At present (…) there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed.
"The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."

What does symbiosis mean? It means that one organism cannot survive without the other, which belongs to a different species. More specifically, it means that one organism is dependent on functions performed by the other organism. More philosophically put, symbiosis means that there is an indistinguishability in the middle. An impossibility to say where one organism ends and the other (or the others) begin.

Is it conceivable that this kind of interdependence will in the future occur between humans and AI?

The traditional answer is: absolutely not. The old belief is that humans belong to nature and, more specifically, to biology, to living things that can self-reproduce. Computers, on the other hand, belong to a totally different ontological category, the category of the artificial, the merely technical. They don't grow; they are constructed and built. They have neither life nor being.

Symbiosis, in that old way of thinking, is only possible within the realm of nature, between living things. In this way of thinking, there cannot possibly be a human-computer symbiosis.

I think there was also a sense that what Licklider meant was an enrolling of humans into the machine concept. Perhaps like a cyborg. And as humans are supposedly more than or different from machines, that would mean a loss of that which makes us human, of that which sets us apart from machines.

But as we have discussed, AI renders this old, classically modern distinction between living humans or beings and inanimate machines or things insufficient.

AI leads us into a territory that lies outside of these old distinctions. If one enters this territory, one can see that things — things like AI — can have agency, creativity, knowledge, language and understanding without either being alive or being human.

That is, AI affords us an opportunity to experience the world anew and to rethink how we have thus far organized things in the world, the categories to which we assigned them.

But here is the question: Is human-AI symbiosis possible from within this new, still emergent territory — this in-between territory — in the sense of the indistinguishability just described?

I think so. And I am excited about it. A bit like Licklider, I am looking forward to a "partnership" that will allow us to "think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."

When we can think thoughts we cannot think without AI, and when AI can process data in ways it cannot on its own, then no one can say where humans end and AI begins. Then we have indistinguishability, a symbiosis.

Let me add that what I describe here — with Licklider — is not a gradual human dependency on AI, where we outsource all thinking and decision-making to AI until we are barely able to think or decide on our own.

Quite the opposite. I am describing a situation of maximal human intellectual curiosity. A state where being human is being more than human. Where the cognitive boundary between humans and AI becomes meaningfully indistinct.

Is this different, in an ontologically meaningful way, from fungi-tree relationships? Their relationship is essentially a communication, in which they cogitate together.
Neither party can produce or process the information exchanged in this communication alone. The actual processing of the information — cognition — happens at the interface between them. Call it symbiosis.

What, if any, is the ontological difference between that kind of symbiosis and a human-AI symbiosis? I fail to see one.

Gardels: Perhaps such a symbiosis of inorganic and organic intelligence will spawn what Benjamin Bratton calls "planetary sapience," where AI helps us better understand natural systems and align with them?

Rees: What if we linked AI to this fungi-tree symbiosis? AI could read and translate chemical and electrical signals from fungi-tree-soil networks. These signals contain information about ecosystem health, nutrient flows, stress responses. That is, AI could make the communication between fungi and trees intelligible to humans in real time.

We humans could then understand something — and possibly pose questions and thereby communicate — that we simply couldn't otherwise, independent of AI. And simultaneously, we can help AI ask the right questions and process information in ways it cannot on its own.

Now let's expand the scope: What if AI could connect us to large-scale planetary systems that are impossible to know without AI? In fact, what if AI were to become something like a self-monitoring planetary system into which we are directly looped? As Bratton has put it, "Only when intelligence becomes artificial and can be scaled into massive, distributed systems beyond the narrow confines of biological organisms, can we have a knowledge of the planetary systems in which we live."

Perhaps in a way where — as DNA is the best storage for information we know — part of the information storage and the compute the AI relies on is actually done by mycorrhizal networks?

For my part, I can't wait to have such a whole-Earth symbiotic state — and to be a part of this form of reciprocal communication.

Gardels: What is the first next step toward a symbiosis between humans and intelligent machines that opens up the possibilities of AI augmenting the human experience as never before?

Rees: Ours is a time when philosophical research really matters. I mean, really, really matters.

As we have elaborated in this conversation, we live in philosophically discontinuous times. The world has been outgrowing the concepts we have lived by for some time now.

To some, that is very exciting. To many, however, it is not. The insecurity and confusion are widespread and real.

If history is any guide, we can assume that political unrest will occur, with possibly far-reaching consequences, including autocratic strongmen who try to force a return to the past.

One way to prevent such unfortunate outcomes is to do the philosophical work that can lead to new concepts that allow us all to navigate uncharted pathways.

However, the kind of philosophical work that is needed cannot be done in the solitude of ivory towers. We need philosophers in the wild, in AI labs and companies. We need philosophers who can work alongside engineers to jointly discover new ways of thinking and experiencing that might be afforded to us by AI.

What I dream of are philosophical R&D labs that can experiment at the intersection of philosophical conceptual research, AI engineering and product making.

Gardels: Can you give a concrete example?

Rees: I think we live in unprecedented times, so giving an example is difficult.
However, there is an important historical reference: the Bauhaus School.

When Walter Gropius founded the Bauhaus in 1919, many German intellectuals were deeply skeptical of the industrial age. Not so Gropius. He experienced the possibilities that new materials like glass, steel and concrete offered as a conceptual rupture with the 19th century.

And so he argued — very much against the dominant opinion — that it was the duty of architects and artists to explore these new materials, and to invent forms and products that would lift people out of the 19th and into the 20th century.

Today, we need something akin to the Bauhaus — but focused on AI.

We need philosophical R&D labs that would allow us to explore and practice AI as the experimental philosophy it is.

Billions are being poured into many different aspects of AI, but very little into the kind of philosophical work that can help us discover and invent new concepts — new vocabularies for being human — in the world today. The Antikythera project of the Berggruen Institute, under the leadership of Bratton, is one small exception.

Philosophical R&D labs will not happen automatically. There will be no new guiding philosophies or philosophical ideas if we do not make strategic investments.

In the absence of new concepts, people — the public as much as engineers — will continue to understand the new in terms of the old. And since that doesn't work, there will be decades of turmoil.
Source: https://www.noemamag.com/why-ai-is-a-philosophical-rupture