🧠!!!Is AGI around the corner???🤖

Or is everyone just deeply confused?

Aleksandar Svetski
12 min read · Apr 21, 2023

This article was originally published on my Substack here
Subscribe there for more & tip some Sats!

Forgive the click-bait title; I had to do it. In this essay, I'll present a short case for why AGI is a red herring wrapped in modern hysteria.

Part of how I'll do this is by exploring what is meant by "General Intelligence," why true "Consciousness" exists beyond the realm of pure computation, and why most AI people are just deeply confused.

It won't be an exhaustive argument, but the first in a series of essays I'll be writing on this topic to (a) help elucidate this viewpoint further, and (b) tell you something more useful than the kind of cringe that 99% of AI newsletters are producing.

Blind from the start

When most people talk about AGI, or even AI, they’re completely unclear on whether they’re conceptualising some form of Intelligent entity or a new sentient or Conscious being.

In fact, I don't think they even realise there is a difference between the two. To make matters more confusing, there is very little consensus on what's meant by intelligence in the first place, which is itself multi-variant, let alone by consciousness.

And lastly, assuming they do mean Intelligence, nobody is clear on specifically what they're afraid of. And if it's something more 'conscious' that they're envisioning, then I'll argue there is very little for us to be concerned about (for two reasons).

Let’s explore…

Intelligence

To begin with, let’s look at Intelligence.

Not only are there many forms of it, but Intelligence itself is nested within the broader concept of cognition, which in turn is nested well inside sentience and ultimately consciousness.

We'll explore consciousness next, and more in subsequent essays. For now, let's start with the incredibly diverse thing that is Intelligence.

What is it? When you begin to dig into this question, you find that it's more complicated than just "problem solving, pattern recognition, language or learning". To define intelligence, you actually need to account for the many kinds of intelligence.

In humans, for example, we of course have the brain, which computational reductionists believe is the mechanistic hardware for the software of the mind. It is split into left and right hemispheres, each endowed with its own kind of sub-intelligence. The right hemisphere is intuitive and tends to provide immediate, holistic gestalts and insights that are difficult to articulate. The left hemisphere specialises in systematic, step-by-step reasoning, sees only the parts rather than the whole, and is focused more on proofs and dealing with the tangible. When they work together, the right hemisphere typically generates a solution, and the left hemisphere verifies that solution by analysing the underlying relationships between its constituent parts.

You then have this emergent property where their combination is an intelligence that’s greater than the sum of their parts (I recognise that’s hard to measure).

Then of course, connected to the brain, we have the rest of the central nervous system (CNS), along with the peripheral nervous system (PNS), each with their own complex and intelligent network of cells that allow for communication and coordination throughout the body. The CNS consists of the brain and spinal cord, which work together to process information and generate responses to stimuli. The PNS consists of nerves that connect the CNS to the rest of the body, allowing for the transmission of sensory and motor signals.

These are then related to the broader intelligence of the body. We have neurons all throughout our bodies, even in our digestive tracts and muscles, and while we *think* this is where body intelligence comes from, because we associate neurons alone with thinking, the truth may run even deeper. It may be that body intelligence is more fundamental and integral in how it works. Either way, it's also a unique and interrelated form of the broader general intelligence that makes up man.

“There is more wisdom in your body than in your deepest philosophy.”

- A twitter frog (unknown)

Then of course we have emotional intelligence, instinct and intuition, all of which are properties, or 'intelligences', emergent from the interrelation between the nervous system, brain, body, and our experiences with our environments.

These are all unique intelligences, intertwined and interrelated to make something more complex, i.e., a human being. They ultimately reflect a unique way of understanding and interacting with the world, and together form the ever-enigmatic thing we may like to call the "mind."

While some like to think of the mind as the software program running on the hardware that is the brain, in my opinion the reality is obviously far more complex. As is clear, the brain is not alone when it comes to thinking. Furthermore, while there is influence from the brain on what we think, feel and perceive, the awareness of self, the individual agency, and the interconnectedness of all these intelligences make for a much more complex and emergent phenomenon than a program running on hardware.

Embodied cognition advocates, such as John Searle, attempt to define intelligence and consciousness as something that emerges from the body; at the very least, this is a more holistic view that tries to take into account the complexity described above. You also see it in the writings of polymaths like Oswald Spengler, who tells us that it is not our brains but our hands that make us the supreme animal species. With our hands we can hunt, design weapons, write ballads, climb mountains, sketch portraits and manipulate the environment around us.

According to Spengler, the unparalleled dexterity of this unique appendage created a feedback loop between it, the environment and the brain that bootstrapped a higher order intelligence unlike anything else on the planet.

All this is to say that any definition of "Intelligence" offered by your run-of-the-mill AI or AGI enthusiast, especially hysterics like Yudkowsky, is at best incomplete and more often ignorant. They have computational experience that they project onto humanity, and because they fail to understand anything beyond this, they fall into hysteria traps, proving they have little to no idea what they're talking about.

If your fundamental position on intelligence is merely computational, and you lack the understanding that there are various kinds of intelligences, then your position on AGI is moot. Even more so if you've not disentangled it from Consciousness.

To date, we've all largely perceived computer intelligence as a left-brain-like apparatus, and the "I" in AI or AGI as a tool adept at pattern recognition, problem-solving, reason, adaptation and rational thinking. More recently (and accidentally, might I add) we've found that seemingly right-brain abilities like creativity and language, which mind you were thought to be AI-complete problems, are possible. GPT and Diffusion are cases in point, although in reality these turn out to be more akin to phantom creativity and cognition, in which probabilities are used to make the illusion seem real. Of course, I could be wrong here and there might be some deeper emergent properties, but I think we're still largely in the dark on all of this.

We’re more like monkeys with a keyboard, randomly writing Shakespeare. We’re playing with probability machines and by anthropomorphising the outputs, we think we’ve discovered the makings of a conscious agent on the other end of the line.
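
To make the "probability machine" point concrete, here is a minimal toy sketch of next-word sampling. To be clear, this is an illustration of the general principle, not how GPT actually works: the word table and probabilities below are entirely made up, whereas real models sample from learned distributions over tens of thousands of tokens. But the core loop of "pick a plausible continuation, weighted by probability" is the same idea.

```python
import random

# Toy "probability machine": a hypothetical next-word table with made-up probabilities.
# Nothing here is learned from data; it only illustrates weighted next-word sampling.
NEXT_WORD_PROBS = {
    "the":  [("cat", 0.5), ("dog", 0.3), ("idea", 0.2)],
    "cat":  [("sat", 0.6), ("ran", 0.4)],
    "dog":  [("barked", 0.7), ("sat", 0.3)],
    "idea": [("emerged", 1.0)],
    "sat":  [("quietly", 1.0)],
    "ran":  [("away", 1.0)],
}

def generate(seed_word: str, max_words: int = 5) -> str:
    """Complete a sentence by repeatedly sampling the next word from the table."""
    words = [seed_word]
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        tokens, weights = zip(*candidates)
        words.append(random.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly" -- plausible-sounding, no understanding
```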

And the funny part is that this erroneous conclusion is what underlies the fears people actually have with respect to AGI, i.e., that it's some sort of intelligent agent with a will of its own. That is, that it's "conscious".

Which brings us to the next error in judgement.

Consciousness

Does intelligence stem from consciousness, or does consciousness emerge from intelligence? This is obviously a hard question to answer, and one I'll try to explore in greater detail in a future essay.

For now, as far as I can tell, Consciousness is something much larger and more complex than Intelligence alone. It encompasses all of the kinds of intelligence we discussed, but also a sense of self, a will or source of intent, agency and a subjective experience.

Metaphysically, it is the ineffable: the mysterious phenomenon of something higher that we human beings are connected to. Without sounding too much like a hippie, human Consciousness seems to be connected to some sort of Grander Consciousness, or "Source", or God.

The mind seems to be the thing that contains (?) or taps into our consciousness and bridges it with this broader consciousness.

Our understanding of it is still in its infancy, and there are of course various viewpoints. The reductionists like to think that the computational theory of mind explains it and everything else. And while computational processes have produced specific, narrow intelligence and problem-solving abilities (credit where it's due), they are far from conscious.

I mentioned John Searle earlier. His position is that "embodied cognition" is the crucial aspect of consciousness: our body and environment shape our conscious experience, that feedback loop is non-existent in the raw computational approach, and that is why consciousness cannot be fully replicated by a computational system.

Julia Mossbridge goes deeper (or higher, whichever metaphor you prefer) and posits that everything is downstream of Consciousness itself: that the phenomenon of consciousness not only involves non-linear interactions between the brain, body, and environment, but that its nature is non-local, and therefore the subjective experience of consciousness cannot be reduced to objective information processing.

There's a parallel here in the work of the physicist Sir Roger Penrose. He argues that consciousness is fundamental to the structure of the universe and not abstract or computational in nature. In his view, consciousness is a process with quantum origins, and grasping it properly will require the discovery of fundamental physical principles beyond our current understanding.

These are both in some ways related to the elephant in the room, i.e., the theological argument. If we take this viewpoint, consciousness is not just a byproduct of brain activity, or even of embodied cognition for that matter, but a fundamental aspect of the universe, unique to humanity and imbued by a higher power or divine force.

Erich Neumann's work on the archetypal origins of consciousness also points to the human psyche having a spiritual and divine dimension that cannot be fully understood by scientific or computational approaches alone. In Neumann's view, consciousness is not something created or produced by the brain, but rather an inherent aspect of the human psyche that is rooted in our connection to the divine.

These are of course only a few of the many examples and anecdotes I could draw from, so suffice it to say at this point that Consciousness is a much larger, higher and more complex phenomenon than we can currently fathom, let alone initiate.

In fact, thinking that we have somehow become or created "God" because we've got high-powered compute and strong probability engines is arrogance at the very least, hubris at best, and borderline derangement at worst. I'm sorry, lads, but we're still playing with sophisticated sentence completion. Don't get ahead of yourselves.

Digital Ghost

At the end of the day, the fears and hysteria often seen in AI circles stem from the conceptualisation of AGI as some sort of conscious being, in which case they're conflating Intelligence with something much broader.

We've established that Consciousness is too complex to be understood, let alone emulated, by computational processes alone, so while AI (or even AGI) may simulate human-like behavior and exhibit really general "intelligence" one day, Consciousness is an entirely different matter.

I don’t believe that AGI in the conscious sense is anywhere near occurring, no matter how much compute we throw at it, or how many sentence structures it can produce that emulate human language.

In the nearer term (years or decades), the only real point of concern is how high-powered, narrowly intelligent machines, which can emulate elements of human and super-human intelligence, are used in society. As with any tool, the problem is with the people using them, and in this case, with how these tools are positioned to function. If you thought TSA-like NPCs were bad, wait until you're interacting with a computer-simulated NPC that can hold a conversation but has about as much agency as a Roomba vacuum cleaner, and the power to shut off your CBDC account, passport or car engine. That's the real problem or "threat". All this AGI fear is a red herring, IMO.

Human-like "General Intelligence" is far more than just language, problem solving, pattern recognition or any other computationally reducible element. Don't get me wrong: I'm aware that there are incredible emergent properties exhibited by neural nets, LLMs and the like, but these are all still cerebral in nature.

There are bodily, emotional and nervous-system intelligences, among a complex array at play in the human, that we haven't even begun to work out how to model. Not to mention the spirit, the soul, the ineffable, each of which moderns like to ignore or pretend isn't real.

There's a long way to go, so (a) don't get distracted by hysterics, and (b) keep building. I think that because this is the first time we've actually "spoken" to something other than a human in a somewhat linguistically coherent fashion, we're quick to anthropomorphise. We project our human-ness onto it and confuse computation and probability with life… well, some do at least.

Closing

I’ve rambled on enough. I hope there’s been some thread of logic here.

Even if we someday achieve a truly broad form of AGI, via some blend of technologies or a collection of functional, narrower artificial intelligences, consciousness, as I've said, is another matter altogether.

The real risk is its use as a tool by those who seek to control others. Control freaks are the truly weak, and it’s this class of human that has always been the greatest threat to flourishing and freedom. We’ve had a taste of this madness over the last few years and if we’re not vigilant and taking defensive measures, there’s no telling what sort of stupidity weak people with powerful tools will unleash.

I'd also like to note that IF I am wrong, and an AGI does somehow become conscious (never say never), then I believe that such a strong, sentient being would be an extension of life, not an ender of it.

True strength and power are magnanimous, and I fail to see how a being with greater intelligence (already a complex thing) and a will of its own would emulate the weakest of behaviours and characteristics (control-freakery and eradication).

That which is powerful doesn’t seek to stand on top of others, but to stand on its own. Power seeks collaboration and competition. Weakness seeks subjugation.

There's also what I'll call the "Singleton fallacy". Assuming we do get true, general intelligence, I do not believe there will be just "One". Life, and especially something as complex as intelligence, is multi-variant. There are already many approaches to it, and given that many kinds of intelligence exist in general, there will be competition. Doing it "all" is a way to do it all poorly. Thermodynamics exists and cost is an inescapable factor. In fact, if there were a supercomputer that could do it "all", it already exists: it's called planet Earth, with humans on it.

Before I go off on more tangents, let me conclude by saying that I do not think that AGI is close and I do not believe we’re at risk of sentience spontaneously emerging from probability machines like Stable Diffusion or GPT any time soon.

The fear-mongering around it stems from weak hysterics who lack a conception of strength and project their own fears onto things they barely understand (consciousness and intelligence). And if there is anything more sinister going on, it's people who want to regulate it, or see it regulated, for "our safety" (I've heard that before), which in fact will just create an imbalance in access to these tools.

Yes, there are risks and threats from broad-enough and powerful-enough artificial narrow intelligences or specialised intelligences (and yes, that includes LLMs and other neural nets, and their use in innumerable applications around the world), but the key is not to regulate these things into oblivion or create access imbalances.

Let humans do what they do best: leverage tools to make life more effective and more efficient (ideally without becoming a slave to, or dependent on, the tool; another topic for a future essay).

That’s at least what I’m going to try and do. I’ll say more in the coming weeks on this, so stay tuned.

If you thought this essay was useful, please share it around.

Also — give the short thread version a like here on Twitter and on Memo’d.

https://memod.com/SvetskiWrites/is-agi-around-the-corner-6188

I've submitted the Memo to a thread competition, so I'd appreciate you going there, signing in and giving it a like. Memo'd is like Twitter meets Pinterest: bite-sized knowledge without the noise.

Thanks in advance.

See you on the next one.

Aleksandar Svetski

Originally published at https://authenticintelligence.substack.com.

If you like this work, you can tip some Sats:
