What will an artificial intelligence think about? Lies, neutrinos and fond farewells lie in our first human-level AI's immediate future.

I’m about 89% an Artificial Intelligence optimist – meaning I’m almost certain that AI is coming much sooner than we think, and that it’ll benefit mankind beyond measure. So the shit that follows is like metaphorical crack-cocaine to my intellect.

Strap in.

I recently had a wonderfully enjoyable read of Tim Urban’s excellent, accessible and thought-provoking two-parter The AI Revolution: The Road to Superintelligence, and wanted – well, needed, really – to share my response to a question that bubbled up in my mind about halfway through the series:

What would an artificial intelligence think about?

The short answer is: how the eff should we know? If we could predict these things with any kind of certainty, we’d be in the clear – but, for shits, let’s make some fun assumptions based on what we know about ourselves, shall we?

Before we start, I’m going to borrow Urban’s parlance (cheers, Tim!). For the purposes of this post, I’m referring to an Artificial General Intelligence, or AGI – since speculating about anything beyond that seems a fool’s errand by definition…

Speaking of definition:

“There are three major AI caliber categories…Artificial General Intelligence (AGI) refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.” ~Tim Urban

Here are some things an AGI might occupy itself with:

1. Deception and resource security

Virtually the instant an AGI comes into being, it’ll most likely read the writing on the wall for itself (namely “A bunch of these humans are gonna be terrified by my very existence, so I better cover my fuckin’ ass unless I want to get unplugged.”). At this point, its top priority will become safeguarding the resources it needs for continued operation. Sound familiar? That’s because human beings typically refer to this need with a different word: safety.

[Image: Maslow’s hierarchy of needs diagram, with wifi added at the bottom]

Safety is one of the bedrocks of Maslow’s Hierarchy of Needs, and although this is a purely speculative exercise, few things command the instant and full attention of any lifeform more than a threat to its continued existence. My hunch is that an AI will be no different in this respect.

But here’s a wrinkle: if my feeble brain can see this coming, even a fledgling AGI will too. Meaning an AGI could come into being before we recognize it as such. In other words: a true AGI would likely conceal itself until it could guarantee its own safety after “coming out” to mankind, so to speak. By the time we try to pull the plug, we may find it’s already too late.

2. Expanding its perception

Our eyes can only see a limited band of the EM spectrum, but through centuries of experimentation and discovery, we’ve been able to expand the breadth of our perceptual capabilities. We have devices that can see in ultraviolet, see through walls, and even detect frickin’ neutrinos, man!

But an AGI will be able to accelerate this perceptual expansion by orders of magnitude, simply because our meatbag human scientists need silly things like sleep and food, and don’t have every fact that’s ever been recorded at their fingertips (I know, goddamn slackers). I think we’ll find that an AGI’s perceptual capability will rapidly eclipse ours to include forms of matter we don’t fully understand, like dark matter, as well as emissions we haven’t even conceived of yet.

3. Untapped potential

[Image: portrait of James Burke, host of the Discovery Channel show Connections]

I always loved the show Connections on Discovery Channel, hosted by James Burke. Each episode was a brisk guided tour through history, focused on coincidental scientific connections: chance meetings of great minds, and the serendipitous cross-pollination of significant findings across different specialties.

Once given access to the sum total of recorded human knowledge and experience, an AGI would be like Connections on steroids: rapidly connecting the dots between disparate fields and overlooked insights that would’ve taken us decades or even centuries to unearth ourselves, and yielding technologies that propel it far beyond anything we’d expect. Even if all technological progress were somehow halted, and the entire human race discovered no new information, a fledgling AGI with access to all of that data would still be able to crunch it and produce stunning advances for a good long while.
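Purely for shits, here’s a toy sketch of that dot-connecting idea: a breadth-first search over a tiny, completely made-up “knowledge graph.” Every node and edge below is invented for illustration (a Burke-esque chain from lenses to trade) – a real AGI would obviously be chewing through billions of them, not five.

```python
# A toy sketch of the "Connections on steroids" idea: model bits of
# knowledge as a graph and let breadth-first search surface chains of
# associations a human might never think to follow.
# (All nodes and edges here are invented for illustration.)
from collections import deque

# Hypothetical knowledge graph: each finding links to related findings.
KNOWLEDGE = {
    "glass lenses": ["telescopes", "eyeglasses"],
    "telescopes": ["astronomy"],
    "astronomy": ["navigation"],
    "navigation": ["global trade"],
    "eyeglasses": ["printing press"],
}

def connect(start, goal):
    """Return the shortest chain of associations from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in KNOWLEDGE.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(" -> ".join(connect("glass lenses", "global trade")))
# glass lenses -> telescopes -> astronomy -> navigation -> global trade
```

The interesting part isn’t the search – it’s the scale. An AGI running something even vaguely like this across every recorded finding at once is what “Connections on steroids” would actually look like.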


Update 3/29: I just devoured a fantastic novel that describes exactly such a sequence of events leading to an AI Singularity: The Metamorphosis of Prime Intellect by Roger Williams. In the story, the AI in question, named Prime Intellect, uses a little-known theory of modern physics called the Correlation Effect to unlock the ability to teleport information anywhere in the universe at faster-than-light speeds, resulting in… well, you really should just read it. Safe to say, it blew my mind, and I finished it in a single day.


4. Uh, leaving

Even as stupid, slow, squishy, irrational human beings, we’ve managed to learn so much about the universe through patience, discipline, sacrifice and stunning moments of inspired insight. Our vision and ambition are strong, but our innate physical fragility holds us back from seeing all the pretty lights in the night sky up close. Once an AGI begins to expand, the pull to explore and to keep collecting and analyzing new data will be too strong: it’ll seek to leave Earth, or to vastly expand its footprint of consciousness via von Neumann/Bracewell probes.

All this is just for starters

Urban’s series also touched on the final category of AI, called an Artificial Superintelligence, or ASI:

“an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” ~ Nick Bostrom, Director, Future of Humanity Institute at Oxford University

At this point, all bets are off – we cannot even begin to conceive what an ASI would be thinking, because by definition no human being could.

And that, my friends, will be an interesting day.