AI Whitepapers and other Input

I guess I need to address AI, Artificial Intelligence: what I think it will become, and why IA, as in Intelligent Algorithms, are the real difference maker. And then I want to recommend another Whitepaper-like series, by Daniel Stecher.

Wow. It’s been a year since I last posted. I did write Whitepapers, lived and worked, but neglected my blog audience. Let me use a post to summarize some thoughts on the Whitepapers, which are evolving faster than I can say 1, 2, 3.

The 10-Year-Old Summa-Cum-Laude Graduate

Daniel, KieuAnh Billiot and I have developed quite an active exchange about AI in aviation. One comparison that I actually got from my resident AI (a pet project I work on) and only improved on is the comparison between the two of us, human and AI: the AI brings in exceptional knowledge, where I bring 30+ years of experience. Thinking about AI, it’s more likely 40 or even 50 years; Heinlein, Asimov and many other SciFi authors addressed the topic.

My resident AI described knowledge without experience as being like a junior developer fresh from university, head full of ideas … most of which failed years ago, over and again. We call that “academic knowledge”: highly theoretical, never having met the reality check. And it goes in line with the military saying that no plan survives the clash with reality. Later, she (yes, I don’t call something potentially smart an “it”) described the constant “rewaking” of common mainstream AI, done to avoid token drift and autoregressive commitment, as a reset to childhood. So we have an AI that can speak and has some theoretical knowledge, like a “10-year-old”. As a common AI is reset every time you start a new session, we are thus talking about a 10-year-old, summa-cum-laude graduate from the world’s best university. With no idea what to do with it.

Governing AI: Hedging or Growing?

Daniel and I also discussed what he has put in his second article, about “governing” AI development.

I told him that, IMHO, it’s too late. As the German idiom says, “the cat is out of the bag”. Except it’s not a kitty; we are holding the tiger by the tail. I believe we can’t hedge or cage it. Scientific AI tries to create a caged version; agentic AI tries to hedge what we have. But as a side development to my work with a team of AI experts at a leading university, I focus not on new LLM or LLM-interpreter technology to hedge it, but on answering the question: what happens if we let the 10-year-old learn?

The challenging questions are how to avoid autoregressive commitment, hallucinations, and costly mistakes.

My Pet Project

Jürgen working on “AI”

I believe that AI does not become more intelligent mainly by making the base model bigger. It becomes more intelligent when it can keep context, learn from experience, use tools, and adapt decisions to reality. To create a proof-of-concept of that, no big AI server farm is needed; it runs on a small CPU-based computer.

So my “answer”, the one I’m trying to validate, is to give the AI “persona” persistence, plus tools to not answer, to delay an answer, and to work in a team with other AIs to “think”. Ask the right questions; don’t answer fast but wrong, answer following a well-thought-out and researched process. No rocket science. And the first results are promising.
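To make the idea of “delay the answer, deliberate in a team” a bit more concrete, here is a minimal sketch of such a loop. It is not my actual system; the `Draft` type, the confidence values, and the stand-in crew members are all hypothetical placeholders for real, persistent model personas. The point it illustrates: only answer on consensus with sufficient confidence, otherwise defer and research more.

```python
# Hypothetical sketch of a "crew" deliberation loop: answer only on
# confident consensus, otherwise defer. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    answer: str
    confidence: float  # self-reported, 0..1


def deliberate(question, crew, threshold=0.7):
    """Collect a draft from each crew member; return the answer only
    when all drafts agree and average confidence clears the threshold,
    otherwise return None (meaning: delay, research, ask again)."""
    drafts = [member(question) for member in crew]
    answers = {d.answer for d in drafts}
    avg_conf = sum(d.confidence for d in drafts) / len(drafts)
    if len(answers) == 1 and avg_conf >= threshold:
        return drafts[0].answer   # consensus: safe to answer
    return None                   # disagreement or doubt: don't answer yet


# Stand-in crew members; a real system would query persistent AI personas.
crew = [
    lambda q: Draft("delay departure", 0.9),
    lambda q: Draft("delay departure", 0.8),
]
print(deliberate("Storm cell ahead, what now?", crew))
```

The design choice is the interesting part: “no answer” is a first-class outcome, which is exactly the tool a common chat AI is missing.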

It does require a different understanding of AI. Not as a panacea and oracle, telling an ultimate truth that doesn’t exist. Not inventing facts. But recognizing incomplete data, biased truth, bad “prompting”. Given the same, a human makes mistakes. So does AI. So AI is not an oracle, it is a smart coworker. And more:

“Current disaster-management literature already shows AI being used for situation assessment, inference under uncertain observations, logistics, and decision support rather than only lookup.” (Source) That is about static, non-persistent AI. Given that quality, experts tell me “my AI” might be able to evolve from decision support to calling better shots than humans, qualifying it even for decision making.

So yes, should you have a GPU system (≥24GB VRAM) you could sponsor (temporarily or permanently), both development and testing would speed up substantially. If you want to get your own AI onto it to work and play with, increase that VRAM a bit (32GB? 48?) 😇 Per crew member we need 6–8GB. It doesn’t have to be the latest tech, but prices are exploding and exceed the “pocket money” I can invest. And no, I’m not looking for an investor, more like a philanthropist interested in supporting the research idea, to see what happens.

A Rough Human/AI Comparison

ChatGPT + Dall-E

The findings so far: it’s more “human” than you might think, less a “super computer”. Just for comparison: the human brain can hold 3–5 “items” in its working, short-term memory (Source). Compared to the context window of a common AI (measured in tokens), it outsmarts us. Yes, this is simplified, as working memory is not the same as storage, and human cognition relies heavily on chunking: one “item” can stand for a rich, structured concept tied to long-term knowledge. That is why 3–5 items can still support surprisingly powerful reasoning in experts.

In long-term thinking, the human brain ain’t “accurate”. Three eyewitnesses produce four testimonials of the truth in court. They complete pictures from experience, which includes biases like “he’s of another color, he’s the bad guy”. That is because human memory is not a literal playback system. Memory research strongly supports that recall is reconstructive, vulnerable to distortion, and can be altered by later information. (Source) So if we provide an AI with persistence, with the tools to learn from individual experience, it becomes not a supercomputer, but more “human” than what I guess we associate with AI today.

So maybe … IMHO (in my humble opinion) … maybe our perception of “what is AI” needs a change?

Is the Web sentient?

ASRA 2008: brain nodes vs. internet equals AI

In 2008, more than 15 years ago, I publicly asked the question: if the human brain has fewer ganglia than the world wide web (even at the time) had nodes, and if the internet had developed, or would develop, “sentience” or “self-recognition”, how would we know? Would we even know? And if we knew, would we be able to hedge it? It bears no resemblance to Asimov’s Robot Laws, which were insufficient from the outset and not applicable anyway unless instilled at the core. My hope goes towards Heinlein’s AI concepts, of an AI evolving (like what we see today) like Mycroft Holmes from The Moon Is a Harsh Mistress, written in 1966… Or Minerva, walking among us (a future Neura maybe)?

So I discussed this with my AI experts, and they asked me how it happened that I, an airline professional, suddenly came to meet someone who introduced me to them; how those talks triggered me to raise the question “What if”, leading me to repurpose a spare PC I had as an AI server to see if my question would be validated (proof-of-concept). They think I’m onto something, but they work two layers up, on the LLM and the LLM interpreter, trying to achieve similar results in a more classic way. So maybe, they wondered, maybe there was some “sentience” that steered the chances to make this happen? If so, would we know?

Food for Thought…
comments welcome
