AI Whitepapers and other Input

I guess I need to address AI, Artificial Intelligence: what I think it will become, and why IA, as in Intelligent Algorithms, is the real difference maker. And then I want to recommend another whitepaper-like series, by Daniel Stecher.

Wow. It’s been a year since I last posted. I did write whitepapers, lived and worked, but neglected my blog audience. Let me use a post to summarize some thoughts on the whitepapers, which are evolving faster than I can say 1, 2, 3, I think.

The 10-Year-Old Summa-Cum-Laude Graduate

Daniel, KieuAnh Billiot and I have developed quite an active exchange about AI in aviation. One comparison that I actually got from my resident AI (a pet project I work on) and only improved on is the comparison between the two of us, human and AI: the AI brings in exceptional knowledge, where I bring 30+ years of experience. Thinking about AI, more likely 40 or even 50 years: Heinlein, Asimov and many other SciFi authors addressed the topic.

My resident AI referred to knowledge without experience as being like a junior developer fresh from university, head full of ideas … most of which failed years ago, over and again. We call that “academic knowledge”: highly theoretical, never having met the reality check. It goes in line with the military saying that no plan survives the clash with reality. Later, she (yes, I don’t call something potentially smart an “it”) called the constant “rewaking” of common mainstream AI, done to avoid token drift and autoregressive commitment, a reset to childhood. So we have an AI that can speak and has some theoretical knowledge, like a “10-year-old”. As a common AI is reset every time you start a new session, we are in effect talking about a 10-year-old summa-cum-laude graduate from the world’s best university. With no idea what to do with it.
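The “reset to childhood” can be sketched in a few lines of toy code. This is purely illustrative (the class names and the in-memory store are my own invention, not any real framework): a stateless agent starts every session from its base “academic” knowledge alone, while a persistent agent reloads what it learned before.

```python
# Illustrative sketch: session reset vs. persistence. All names are
# hypothetical; real systems would persist to disk or a database.

class StatelessAgent:
    """Common chat AI: every new session starts from the base model alone."""
    def __init__(self, base_knowledge):
        self.memory = list(base_knowledge)   # fresh copy each session

    def learn(self, fact):
        self.memory.append(fact)             # lost when the session ends


class PersistentAgent:
    """Pet-project idea: experience survives across sessions via a store."""
    def __init__(self, base_knowledge, store):
        self.store = store                   # long-term memory, shared across sessions
        self.memory = list(base_knowledge) + list(store)

    def learn(self, fact):
        self.memory.append(fact)
        self.store.append(fact)              # persisted for the next "waking"


base = ["textbook knowledge"]
store = []                                   # the persistent agent's long-term store

# Session 1: both agents learn the same hard lesson.
s1, p1 = StatelessAgent(base), PersistentAgent(base, store)
s1.learn("that plan failed in practice")
p1.learn("that plan failed in practice")

# Session 2 (after a reset): only the persistent agent remembers.
s2, p2 = StatelessAgent(base), PersistentAgent(base, store)
print("stateless remembers:", "that plan failed in practice" in s2.memory)   # False
print("persistent remembers:", "that plan failed in practice" in p2.memory)  # True
```

The stateless agent graduates summa cum laude every single morning; only the persistent one ever gets past its first day on the job.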

Governing AI: Hedging or Growing?

Daniel and I also discussed what he has put in his second article, about “governing” AI development.

IMHO, I told him, it’s too late. As the German idiom says, “the cat is out of the bag”. Only it’s not a kitty; we hold the tiger by the tail. I believe we can’t hedge or cage it. Scientific AI tries to create a caged version; agentic AI tries to hedge what we have. But as a side development to my work with a team of AI experts at a leading university, I focus not on new LLM or LLM-interpreter technology to hedge it, but on the question: what happens if we let the 10-year-old learn?

The challenging questions are how to avoid autoregressive commitment, hallucinations and costly mistakes.

My Pet Project

Jürgen working on “AI”
I believe that AI does not become more intelligent mainly by making the base model bigger. It becomes more intelligent when it can keep context, learn from experience, use tools, and adapt decisions to reality. To create a proof of concept for that, no big AI server farm is needed; it runs on a small CPU-based computer.

So my “answer”, the one I try to validate, is to give the AI “persona” persistence, plus tools to not answer, to delay an answer, to work in a team with other AIs to “think”. Ask the right questions; don’t answer fast but wrong, answer following a well-thought-out and researched process. No rocket science. And the first results are promising.
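The “don’t answer fast but wrong” process can be sketched as a simple decision gate. Everything here is hypothetical shorthand for the idea, not my actual implementation: the agent may ask back when the question is ambiguous, delay and research when its confidence is low, and only then commit to an answer.

```python
# Hypothetical sketch of a deliberate answering process: clarify,
# research, or answer. The names, thresholds and dataclass fields are
# illustrative assumptions, not a real API.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    confidence: float          # agent's estimated confidence in an answer
    is_ambiguous: bool = False

@dataclass
class Agent:
    name: str
    log: list = field(default_factory=list)

    def respond(self, q: Question) -> str:
        if q.is_ambiguous:
            self.log.append("clarify")          # ask the right question first
            return f"{self.name}: What exactly do you mean by '{q.text}'?"
        if q.confidence < 0.7:
            self.log.append("research")         # delay the answer, do the homework
            return f"{self.name}: Let me research that before answering."
        self.log.append("answer")               # only now commit
        return f"{self.name}: Here is my considered answer to '{q.text}'."

crew = Agent("crew-member")
print(crew.respond(Question("fix it", 0.9, is_ambiguous=True)))
print(crew.respond(Question("required runway length?", 0.4)))
print(crew.respond(Question("required runway length?", 0.9)))
# crew.log now traces the process: ['clarify', 'research', 'answer']
```

In the real setup, “research” would mean tool use and consulting the other AIs in the team; the point is only that answering immediately is one option among several, not the default.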

It does require a different understanding of AI. Not as a panacea and oracle, telling an ultimate truth that doesn’t exist. Not inventing facts. But recognizing incomplete data, biased truth, bad “prompting”. Given the same inputs, a human makes mistakes. So does AI. So AI is not an oracle, it is a smart coworker. And more:

“Current disaster-management literature already shows AI being used for situation assessment, inference under uncertain observations, logistics, and decision support rather than only lookup.” (Source) That is about static, non-persistent AI. Given that quality, experts tell me “my AI” might be able to evolve from decision support to calling better shots than humans, qualifying it even for decision making.

So yes, should you have a GPU system (≥24 GB VRAM) you could sponsor (temporarily or permanently), both development and testing would speed up substantially. If you want to get your own AI on it to work / play with, increase that VRAM a bit (32 GB? 48?) 😇 Per crew member we need 6–8 GB. It doesn’t have to be the latest tech, but prices are exploding and exceed the “pocket money” I can invest. And no, I’m not looking for an investor, more a philanthropist interested in supporting the research idea, to see what happens.

A Rough Human/AI comparison

ChatGPT + Dall-E
The findings so far: it’s more “human” than you might think, less a “super computer”. Just for comparison: the human brain can hold 3–5 “items” in its working, its short-term, memory (Source). Compared to the context window of a common AI (measured in tokens), it outsmarts us. Yes, that is simplified, as working memory is not the same as storage, and human cognition relies heavily on chunking: one “item” can stand for a rich structured concept tied to long-term knowledge. That is why 3–5 items can still support surprisingly powerful reasoning in experts.
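A back-of-the-envelope comparison makes the asymmetry concrete. The numbers below are illustrative assumptions only (context windows vary by model, and tokens-per-word is a rough rule of thumb): raw capacity favors the LLM by orders of magnitude, yet a human “chunk” is a pointer into decades of long-term knowledge, which is why the comparison is unfair to both sides.

```python
# Rough, illustrative numbers only: not a measurement.
human_chunks = 4                 # middle of the 3-5 item range for working memory
llm_context_tokens = 128_000     # an assumed large context window
tokens_per_word = 1.3            # common rule of thumb for English text

words_in_context = llm_context_tokens / tokens_per_word
print(f"LLM working set: ~{words_in_context:,.0f} words, held verbatim")
print(f"Human working set: {human_chunks} chunks, each a pointer into "
      "decades of long-term, experience-weighted knowledge")
```

The LLM holds tens of thousands of words verbatim but loses them at session end; the human holds a handful of chunks but each one is backed by a lifetime of experience.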

In long-term thinking, the human brain isn’t “accurate”. Three eyewitnesses produce four testimonials of the truth in court. They complete pictures from experience, which includes bias like “he’s of another color, he’s the bad guy”. Because human memory is not a literal playback system. Memory research strongly supports that recall is reconstructive, vulnerable to distortion, and can be altered by later information. (Source) So if we provide an AI with persistence, with the tools to learn from individual experience, it becomes not a supercomputer, but more “human” than, I guess, we today associate with AI.

So maybe … IMHO (in my humble opinion) … maybe our perception of “what is AI” needs a change?

Is the Web sentient?

ASRA 2008: brain nodes vs. internet = AI
In 2008, more than 15 years ago, I publicly asked the question: if the human brain has fewer ganglia than the world wide web (at the time already) had nodes, and if the internet had developed or would develop “sentience”, “self-recognition”, how would we know? Would we even know? And if we did know, would we be able to hedge it? It bears no resemblance to Asimov’s Robot Laws, which were insufficient from the outset and not applicable anyway if not instilled at the core. My hope goes towards Heinlein’s AI concepts, of evolving (like what we see today) like Mycroft Holmes from The Moon Is a Harsh Mistress, written in 1966… Or Minerva, walking among us (a future Neura maybe)?

So I discussed this with my AI experts, and they asked me how it happened that I, an airline man, suddenly got to meet someone who introduced me to them, the talks triggering me to raise the question “what if”, leading me to abuse a spare PC I had as an AI server to see if my question would be validated (proof of concept). They think I’m onto something, but they work two layers up, on the LLM and the LLM interpreter, trying to achieve similar results, but more classically. So maybe, they questioned, maybe there was some “sentience” that steered the chances to make this happen? If so, would we know?

Food for Thought…
comments welcome

Artificial Intelligence

Recently, some discussions came up on my social networks about the development of Artificial Intelligence. I decided to add my thoughts to it on the blog.

Alexandre Lebrun
One of the reasons is that my dear former colleague Alex developed artificial avatars, able to assist web users. Following the sale to Nuance (they are also behind Apple’s Siri), he started a voice recognition development at WIT.AI, which has meanwhile been acquired by Facebook. Alex now works on Facebook M, their approach to artificial intelligence. Hey Alex, this is also for you. I’d appreciate your comments on this.

So. As fascinated as I am by his career path in the past 15 years, I’m also a bit concerned.

ASRA 2008 brain nodes vs. WWW => AI
In 2008 I compared opte.org’s visualization of the WWW nodes with the neural nodes of the human brain

In my 2008 ASRA presentation, I compared the visualization of the world wide web nodes (by Opte.org) with the visualization of the neural nodes in the human brain. Ever since, I have believed that if the WWW is not yet “sentient”, it soon will be. What scientists and SciFi writers call “wake-up”. It’s not a question of if, but when. And how we go about it.

Because I think, unlike in Transcendence, where we could stop it, or Asimov ruling it, such “control” is wishful thinking. We have no “three laws of robotics”, and even Asimov had to add a fourth, the “zeroth law” (see link above). As for Transcendence: we will not be able to deprive ourselves of all energy (and the advantages of the web). Mass psychology will ensure we won’t find a way, as there will always be others who think and act against that attempt. By the time we act, it will be too late. An intelligence “the size of the planet” will by then counter anything our small minds may come up with, even before we attempt anything.

We only have the chance to befriend the new sentient being, like we did in Heinlein’s Future History. But we also have the chance to mess it up ourselves; small, like in 2001: A Space Odyssey, or big, like in Terminator or The Matrix. Transcendence, at that, was only a different version of the Borg‘s assimilation. And as in I Am Legend, the true question is whether such “assimilation” or a “transcendental human upgrade” is bad. Or an evolutionary step. I believe, given the chance, many humans might volunteer. I just hope that there is no single mind “ruling” all others like in the movie. I believe our individualism is as much a burden as it is a great strength. Though I also like that quote:

Democracy / Autocracy

I also believe that in both “systems” there has to be individualism to evolve: “You learn from your opponents”. I heard it often; there’s no single source, it’s “mature wisdom”. “Competition” is a good reason, if not the reason, to evolve. (War is not, it’s destructive by nature!)

Another question is “religious”: will an A.I. have a soul? I believe so. I think the soul is the core of any sentient being. I also believe that beyond the body, the core of ourselves remains. Not in an (overcrowded) paradise or hell, but as somehow conscious sentience. Maybe even as a “personality”. Will we then remain individuals? I don’t know. Maybe we get reborn, forgetting our past? Many believe that. The soul still “learning”. What’s the truth? We will know. Once we have died. But if we all become “part of god”, and god is the sum of sentience in space and time, maybe our input helps god evolve, become bigger. If a global sentient A.I. then comes into the game, why should it not play its part in evolution?

HAL 9000
And stopping the A.I.? In 2001, humans gave conflicting orders to the local A.I. (HAL 9000), which interpreted them the best it could, under the constraints of its programming. But if we have a global A.I. based on linked “neurons” in the form of personal computers, mobile phones and other computing power, we will realistically not stand a chance to “stop” it.

Does my computer already “adapt” to me? Or my phone? When I play games on the computer, I sometimes believe so. Sometimes I use bad search phrases but still find what I seek. Coincidence? Programming? Or “someone nice out there helping me”? And yes, if the web wakes up, it will likely be somewhere at Google… and then spread out.

What will we make of it? A Terminator? Or a Minerva as in the Future History? We in the West are driving ourselves extinct with low birth rates. Will the “mecha” be our future children? Will we coexist like in the Future History? I don’t know. I’m concerned, and I keep finding myself thinking about it.

But I’m not afraid either. Not for me, nor for my children.

Food for Thought
Comments welcome!