
“Most A.I. ain’t ‘Artificial Intelligence’, but ‘I.A.’: (More, or often less) ‘Intelligent Algorithms'”…
A friend who has been pioneering Artificial Intelligence for years told me that and allowed me to quote him without naming him… But that has changed: A.I. is only usable with I.A., it does not work without it. Why?
How I stumbled into more A.I. than I wanted
Several months ago, the heir of a wealthy family decided to fund the reworking and due diligence for Kolibri, our idea for a truly “sustainable” airline, with the goal of profitably flying 100 regional jet aircraft within a mere 7-8 years (200 in 10 years). Disqualifying all the naysayers, based on existing technology, simply with a different approach to it. Unfortunately, just like other really good ideas, that does not fit the reality of what “impact investors” look for: cheap, small investments with maximum profit. Who cares about real change?
The same family invests in an academic development they consider “next generation AI” and, after many discussions, kindly asked me to help, as my feedback would be rather “grounded”. Working with a really impressive academic institution and team, there were some learnings I was asked to share here. As is, I must keep this relatively generic and must not share what we are working on in any detail. But the problems these academics faced are rather base level, nothing fancy, yet they reflect a rather common misunderstanding when people talk about AI. And they confirm the need for smart I.A. to make A.I. happen.
What is (what we call) A.I.?
There is a very good article on Towards Data Science digging deep into the “thinking” (or non-thinking) of contemporary “Artificial Intelligence”. In short, current AI is a “probability machine”. At the beginning of a sentence, the “generative A.I.” does not know what it will answer; word by word it creates the sentence, calculating at each step the probability that the next word fits. And it intentionally mixes in answers with low probability, so likely wrong ones. That is by design. And it is the reason why AI on its own will sooner or later fail, or “drift”, as AI itself calls it. There are workarounds, I’ll come to that.
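To make the “probability machine” idea concrete, here is a deliberately toy sketch in Python. The two-word contexts, the vocabulary and the probabilities are invented for illustration; a real LLM computes the distribution with a neural network over an enormous vocabulary, but the word-by-word sampling loop is the same principle.

```python
import random

# Toy illustration of the "probability machine": at each step the model only
# scores candidate next words; it never plans the whole sentence in advance.
# The vocabulary and the probabilities below are invented for illustration.
def next_word_distribution(context):
    table = {
        ("the", "plane"): {"lands": 0.55, "departs": 0.30, "sings": 0.15},
        ("plane", "lands"): {"safely": 0.60, "late": 0.35, "backwards": 0.05},
    }
    return table.get(tuple(context[-2:]), {"<end>": 1.0})

def generate(prompt, max_words=5):
    words = prompt.split()
    for _ in range(max_words):
        dist = next_word_distribution(words)
        # Sampling, instead of always taking the top word, is exactly what
        # occasionally surfaces a low-probability (and possibly wrong) answer.
        word = random.choices(list(dist), weights=list(dist.values()))[0]
        if word == "<end>":
            break
        words.append(word)
    return " ".join(words)

print(generate("the plane"))
```

Run it a few times and it will usually answer “the plane lands safely”, but every now and then “the plane lands backwards”, with the same confidence.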
And ChatGPT recently had another birthday. Guess why ChatGPT is called ChatGPT and not AIGPT? In essence it was (and is) a smart chat bot. It does not think, but creates a meaningful answer based on the data it has. That data is compiled from … sources. Which nowadays are often created to bias those AI systems. In the best case, the classic AI chat bot has been extensively trained until … a deadline. Like ChatGPT, somewhere in summer 2024. Anything happening after that, the A.I. has not learned about. Newer models now try to update their knowledge live, using search engines. Whereas that might feel somewhat okay with Google, the owners of the “intellectual property” (the know-how) are not very happy.
Hallucinating A.I., LLMs, RAG, LoRA, Orchestrator – what is that all about?
First of all, no current A.I. is truly “artificial intelligence”. In fact, I learned that the core technology is intentionally obfuscated in and by the media, likely from not understanding it and simply repeating the fancy marketing of the big tech companies, which once again promise the world and deliver … something?
At the Core: Stateless A.I. and Tokens
By default, an A.I. is “stateless”, meaning it does not remember anything but has a knowledge base, gets a question, answers it, forgets everything. Next.
That means that even a “memory” small enough just to follow a conversation had to be developed on top of it. That memory is what the A.I. whizkids call the “context window”, measured in “tokens” (technically two different things, but closely related). Many use the terms without grasping, merely implying, their relevance. But for making A.I. smart, this memory is in fact vital.
Even our own memories are fleeting. Recent studies show that when we recall something, the recollection is biased by our experiences since. “The stress of today, the good ol’ time of tomorrow”, as an old proverb goes. Or “relative truth” in criminalistics: we see something, and the brain adds missing components based on prior experience or other bias. We also speak of short-term and long-term memory for ourselves.

For the A.I., this is somewhat relatable. The tokens are the short-term memory. When the tokens fill up, the oldest information, and whatever is considered most irrelevant, is removed from memory. Some of it is “summarized” in the process, but it boils down to the (very!) finite memory size most contemporary AI has.
Without RAG (explained below) used for long-term recall, that information is lost again. It drops out of the context window of the chat exchange. That is why, even when you tell the A.I. something, after a (usually short) while that information is gone, if it is not “kept alive” by rather constant reminders. The same applies to behavior: tell your favorite A.I. to answer in short sentences, and within typically 5-15 exchanges the answers become longer again. This has far more, and more severe, repercussions addressed later. For here, it is important to understand that the short-term memory is limited, it is “expensive”, and its size is fixed by the A.I. core, the LLM (next topic).
The image summarizes the token process: information (mostly textual, an image only for illustration) is processed into tokens, usually not a single token but a group of tokens per word. The AI I work with most today has a “context window” of 128K tokens. As with drive space, a partially used token still counts, so you have even less “memory” than the number suggests. When the memory fills up, the “compactor” summarizes; however, usually sooner rather than later, the AI “forgets” current “context” and information is suddenly gone from the current memory. Another “workaround” is that some (not many) “orchestrators” and LLMs (see below) are trained to re-check the current context if the user refers to something the AI has “forgotten”. And you can tell your AI that it shall check; a contemporary AI at least recalls the current “session”, which is reset once you restart… Any such restart resembles a memory wipe of all short-term context.
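A rough sketch of what such a “compactor” could look like, assuming a hypothetical token counter and summarizer; a real orchestrator uses the model’s own tokenizer and an LLM call for the summary, but the budget-and-evict logic is roughly this:

```python
# Rough sketch of a "compactor". `count_tokens` and `summarize` are crude
# placeholders: a real orchestrator uses the model's own tokenizer and an
# LLM call for the summary. The 128K budget mirrors the window mentioned above.
MAX_TOKENS = 128_000

def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)          # rule of thumb: roughly 4 characters per token

def summarize(old_turns: list[str]) -> str:
    return "Summary of earlier turns: " + " | ".join(t[:40] for t in old_turns)

def fit_context(history: list[str], budget: int = MAX_TOKENS) -> list[str]:
    dropped = []
    # Evict the oldest turns until the conversation fits the budget again.
    while sum(count_tokens(t) for t in history) > budget and len(history) > 1:
        dropped.append(history.pop(0))
    if dropped:
        # What was evicted survives only as a (lossy) summary. This is the
        # moment the AI "forgets" details it was told earlier.
        history.insert(0, summarize(dropped))
    return history
```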
Again, some AI comes with additional token spaces for e.g. “personality” or “last working knowledge”, but those are practically workarounds for functional limitations set by the developers.
A.I. Knowledge Foundation (LLM)
The AI is given a vast amount of information. Not always legally, rather often illegally using intellectual property. But who cares in America (or China, or elsewhere) about such minor issues. In addition, since the rise of the machines, smart “players” (mostly governmental or political, often with hostile intent) create food for the “teaching algorithms” to bias them or to intentionally plant misinformation. That includes not just China or Russia (the common Western adversaries), but also the U.S., global corporations, extremists and extreme political parties with questionable funding and intentions.
All that information is compiled into “Large Language Models” (LLMs). Which is also a term not really well defined, as it is used both for the knowledge base and for the part that uses it. So one LLM is the “knowledge base” (LLM KB); the other is the interpreter of that knowledge base (LLM Interpreter, also called “inference engine”). All information is “vectorized”, representing its context and making a “similarity search” possible. Given a question, it is analyzed for keywords, and a “similarity search” based on those words returns the information the LLM Interpreter receives for analysis.
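To illustrate the principle (not any vendor’s implementation), here is a toy version of “vectorizing” and similarity search; real systems use embedding models with hundreds of dimensions instead of crude word counts:

```python
import math

# Toy "vectorization" and similarity search. Real systems use embedding models
# with hundreds of dimensions; crude word counts are enough to show the principle.
def embed(text: str) -> dict[str, float]:
    vec: dict[str, float] = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

knowledge_base = [
    "regional jets burn less fuel per seat on short routes",
    "lora adapts a model's behaviour without retraining the base weights",
]
query = embed("fuel burn of regional jets")
best = max(knowledge_base, key=lambda doc: cosine(query, embed(doc)))
print(best)   # the most "similar" entry is what the LLM Interpreter gets to see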
Retrieval-Augmented Generation (RAG)
One of those other strange acronyms is RAG. In simple words, RAGs are extensions to the inbuilt LLM KB (knowledge base). Information is “vectorized” again and stored in a database. In addition to the LLM KB, that information is then searched on input for keywords and provided to the LLM Interpreter. Good RAGs can be built fast and are vital for the quality of the A.I.’s knowledge. In my humble opinion, this is mostly neglected in A.I. development.

Missing Information + Hallucination
Without good information and the ability to access a live knowledge base such as a search engine or the latest manuals, especially for new information, the A.I. hits the point where no information is available. But the A.I. is programmed to respond with whatever has the highest “likeliness”, so it responds based on irrelevant or outdated data as the “latest” and “most fitting” the knowledge base gives it. And “hallucinates”. At the same time, it does this with utmost confidence; it even instantly starts to believe its own hallucinations. Which some call lies, except that this isn’t intentional but a result of how humans designed it.
So we come back to the point of future AI: it needs exceptionally good RAG and information access. That includes search engines, which is why Gemini (by Google) is naturally more “up to date” than ChatGPT (without a search engine backing its knowledge). But that generates traffic. So yes, this is … heavy.
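As a sketch of what that RAG step looks like in practice, here is the retrieve-then-ask pattern; `search_vector_store` and `call_llm` are hypothetical stand-ins for a real vector database and a real model API:

```python
# Retrieve-then-ask, the heart of RAG. `search_vector_store` and `call_llm`
# are hypothetical stand-ins for a real vector database and a real model API.
def search_vector_store(query: str, top_k: int = 3) -> list[str]:
    # In practice: the similarity search sketched earlier, run against a
    # database that is actually kept up to date.
    return ["(snippet from the latest manual)", "(snippet from a recent change log)"][:top_k]

def call_llm(prompt: str) -> str:
    return "(model answer)"   # placeholder for whichever model is in use

def answer_with_rag(question: str) -> str:
    sources = search_vector_store(question)
    prompt = (
        "Answer using ONLY the sources below. If they do not contain the answer, "
        "say so instead of guessing.\n\n"
        + "\n".join(f"- {s}" for s in sources)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```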
LLM Interpreter – The Probability Machine
As addressed in Towards Data Science, what the media and the IT giants call “A.I.” is a “probability machine”. The LLM Interpreter gets the prompt and keywords and calculates the highest probability of what the answer should be. Word. By. Word. So the LLM Interpreter does not even think in whole sentences. It is one reason, I believe, why AI tends toward long responses: it doesn’t think and summarize the response, it just blurts it out.
That unfortunately adds to “hallucination” from another side. The way those systems are designed, the LLM usually works on “high probability”, but to make sure it is not biased toward that, it sometimes takes (by design) a lower-probability answer. Which is likelier to be wrong. What makes such a mis-step worse is that prior answers are prioritized, so it is very hard to overrule that “assumption”. An example I had was on a system setup: the AI (in that case ChatGPT) kept insisting on a mistake that had crashed the system before. It required conscious effort to recognize the constant attempts to “slip back” to that very mistake. The other option is to start with a “fresh AI”, but that loses all context, so it is really a diabolic choice. You can try to “rule out” the mistake by telling the AI it is forbidden, but that usually only works until that instruction “floats” out of the current memory (the “tokens”).
Another example is AI imaging. It works very nicely for the first image. Thereafter, trying to correct mistakes worsens the overall result, as the imaging AI, different from the generative “chat” AI, is not meant to be told to “forget” what it did. So not even starting a new chat helps, as long as it’s the same user asking. In my opinion, AI isn’t anywhere close to a professional graphics designer. Yes, that may change, but it’s a long way to go for sure.
LoRA – The Personality

Low-rank adaptation (LoRA) is a technique used to adapt AI models to “biased behavior”. In my experience, there are three areas where LoRA is relevant. It “wires” the “personality” you deal with; it tells the LLM how the AI reacts, giving it “personality”. The first common case is task-related specialization (e.g., legal drafting, coding style, medical tone); the second is “personality shaping”, like tone, speech patterns, preferences, interaction style. LoRA is “only” about the behavioral pattern, it is not about the knowledge base (see the sketch below).
As the third functionality I know of, LoRA is also used to define looks (visuals), e.g. for avatars. This is often used in “customer service” “AI bots”, or for the representation of comic-style characters for marketing (reuse the same character for visual brand recognition). LoRA is rather lightweight, but that comes at the cost of persistence: those visual models tend to conflict with their LLM’s interpretation, causing offsets. Alternatives are InstantID or CGI as used in movies, but especially CGI comes with an upfront cost that for now often exceeds budgets (several thousand Euros or Dollars).
The mass of AI out there is “base model”. Some are enhanced with LoRA, giving them personality (and more, if available).
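For the technically curious, the core LoRA mechanism itself is small: the pre-trained weights stay frozen and only two low-rank matrices are trained on top. A minimal PyTorch sketch, with illustrative dimensions, rank and scaling:

```python
import torch
import torch.nn as nn

# Minimal sketch of the LoRA mechanism: the pre-trained weights are frozen,
# only the two small low-rank matrices A and B are trained. Dimensions, rank
# and scaling are illustrative, not taken from any particular model.
class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():
            p.requires_grad_(False)                       # freeze the base model
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen behaviour plus a small, trainable "personality" delta.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(512, 512)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # only ~8k adapter weights train, against ~262k frozen ones
```

That is why a LoRA file is tiny compared to the base model and why it can be swapped in and out to change behavior without touching the knowledge.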
The “Orchestrator” – where AI and IA meet
What is important in AI is thus not the AI, the “LLM”, but the surrounding algorithms driving the AI, feeding it smart data, improving the data. That is called the “Orchestrator”, and it can also contain an AI component, but mostly it is the governing algorithms.
At the beginning of a session, the first Orchestrator script tells the AI who it is (the “prompt”); if available, it gives it an identity and delivers the LoRA (“personality”). The prompt can be generated and evolve from previous user input, but the A.I. by default is “stateless”, it only knows what is in the prompt. And that amount of information is rather limited.
A smart Orchestrator helps prepare the user input for the AI and delivers additional information the AI asks for (e.g. more detail), a process the generative AIs call “thinking”. It can also return the output to the AI, asking it to summarize and reduce it. Which works rather well.
And this is just the tip of the iceberg. An Orchestrator can enable far, far more; it is not just rules and regulations, it will resolve misbehavior and hallucinations, enable memorizing and responsiveness. And, and, and.
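To give an idea (nothing more) of what such an Orchestrator does around a stateless LLM, here is a reduced sketch; `call_llm` is a hypothetical stand-in for whichever model API is used, and the two rules shown, re-injecting the identity every turn and shortening ballooning answers, are examples only:

```python
# Reduced sketch of one Orchestrator turn around a stateless LLM. `call_llm`
# is a hypothetical stand-in for whichever model API is used; the rules shown
# (re-inject identity, shorten ballooning answers) are examples only.
SYSTEM_PROMPT = (
    "You are a concise assistant for airline operations. "
    "Answer in short sentences. Say 'I do not know' instead of guessing."
)

def call_llm(system: str, messages: list[dict]) -> str:
    return "(model response)"   # placeholder

def orchestrate(user_input: str, history: list[dict]) -> str:
    # 1. The identity/personality is delivered anew on every turn, because the
    #    model itself remembers nothing between calls.
    messages = history + [{"role": "user", "content": user_input}]
    answer = call_llm(SYSTEM_PROMPT, messages)

    # 2. Governing algorithm: if the answer balloons, send it back and ask the
    #    model to shorten it, which in practice works rather well.
    if len(answer.split()) > 120:
        answer = call_llm(SYSTEM_PROMPT,
                          [{"role": "user", "content": "Summarize in three sentences:\n" + answer}])

    history += [{"role": "user", "content": user_input},
                {"role": "assistant", "content": answer}]
    return answer
```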
Temperature
In AI language, the temperature reflects how “creative” an AI is in answering. This setting cannot be modified on publicly available AI interfaces. But if you run your own AI on your own servers (no magic really) or if you use the public APIs (application programming interfaces), you can use that setting to increase or decrease the “creativity” of the response. Given the randomizing and weighting of probabilities in the core process of an AI, even then the results remain somewhat unreliable: even at a setting of 0.0, the AI may have no answer. But it will give you one. With utmost confidence.
Saying it in AI words: “Temperature controls how likely the AI is to pick less-probable next words instead of the most likely one.” And those decisions cascade down the line. Theoretically, given a temperature of zero, the AI will always answer your question with the same answer. Practically, there are infinite answer possibilities, even with the same “probability” of being correct. And there will be cases where there is no “right answer” or nothing anywhere near a “high probability”. The AI is usually forced to answer you nevertheless. And no matter how unlikely it is that the answer is correct, it answers you with utmost confidence…
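The mechanism behind that sentence is simple enough to show: temperature rescales the model’s raw scores before they are turned into probabilities. A small sketch with invented scores:

```python
import math

# Temperature rescales the raw next-word scores before they become
# probabilities: low values sharpen the distribution towards the most likely
# word, high values flatten it so unlikely words get picked more often.
def apply_temperature(logits: dict[str, float], temperature: float) -> dict[str, float]:
    t = max(temperature, 1e-6)              # temperature 0 treated as "almost greedy"
    scaled = {w: math.exp(s / t) for w, s in logits.items()}
    total = sum(scaled.values())
    return {w: v / total for w, v in scaled.items()}

scores = {"lands": 2.0, "departs": 1.0, "sings": -1.0}     # invented scores
for temp in (0.2, 0.7, 1.5):
    probs = apply_temperature(scores, temp)
    print(temp, {w: round(p, 2) for w, p in probs.items()})
```

At 0.2 the model almost always says “lands”; at 1.5 even “sings” gets a real chance. And note that the distribution always sums to one: even when no option is any good, something gets picked.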
Keep thinking of the 1983 (pre-Internet) movie “War Games”. In the end, the “resident AI” was asked to play Tic-Tac-Toe. Recognizing there was no way to win, the AI stopped the global atomic war it had set in motion, recognizing it was not winnable, just like Tic-Tac-Toe isn’t: “How about a nice game of chess?”
There are more tuning triggers; look for top-k sampling, nucleus (top-p) sampling, penalties, etc. if you want to know more. But this is an overview of how AI “thinks” and why it is “not perfect”.
The temperature has another weak spot. It’s called “autoregressive commitment”: once the AI chose the wrong answer, it believes it to be right, no matter how often you tell it that it is wrong. The only way to overcome this is to change the subject and have it “roll out of token space” (see the token discussion above), so the AI “retries” unbiased.
Aviation AI
This post triggered quite some discussions with different people about all those Aviation AI offerings and developments. Allow me to summarize those.
Most of what is offered now are IA solutions called “AI” for marketing purposes, mostly without an LLM. Some developments use an LLM, which faces one big problem outlined in this post: current AI (LLMs) is “probabilistic”, guess-work, and designed not always to use the highest probability but also to take lower-ranked possibilities – likely incorrect – into account. It then goes forward from that premise, worsening situations “naturally”, as in “garbage in, garbage out”. Aviation as an industry needs “deterministic” (consciously planned) solutions that always focus on the optimal outcome. All that, though, would be done by the “orchestrator”, using the LLM to look at a few scenarios for possible faults and mistakes, identifying the one with the highest likeliness of the best outcome.
A friend mentioned he would never trust someone who cannot explain the basis for their reasoning. But. I disagree. Given thousands of data points, I trust an operations or air traffic controller’s “gut feeling” in a crisis. And it may be wrong. I will trust an AI’s reasoning, given we don’t talk LLM-only but well orchestrated, LLM-supported decision making. The computers can calculate far more efficiently, taking far more data into account than I ever could. But it requires smart RAGs, sound LoRA and especially a very sophisticated IA called the “orchestrator”. That is my gut feeling.
And what about what we see today? It’s fancy. Not useful. Its focus is on the LLM, ignoring the probabilistic nature, thus working with rolling dice, not with sound advice. I would not trust any of those systems and would question even the raw quality of “advice” such a system would give me. And not use it on any operational level. Why? Recently there was a power outage in California – and all those fancy autonomous vehicles suddenly blocked the arterial street crossings, hindering rescue vehicles and causing major havoc (source). Try that in an airspace with thousands of airplanes airborne?
But don’t get me wrong. Given good IA to hedge the LLM, we will get AI that changes and improves aviation management dramatically. Compare a basic HTML page in 1994 to what the “Internet” (www – the World Wide Web) is today.
Generative AI – My Findings
I was asked to add to this article some of my “findings” from working with different AI models. This is the summary; I’m afraid my recommendation is to mix them…
They practically all require registration using your e-mail address, and I only address the “free to use” ones here. They come with limits; if you use them more often, you might find yourself considering paying. And then there are the “APIs”, which I won’t address here, that give you more control but all at a cost.
ChatGPT / Dall-E
ChatGPT (https://chatgpt.com/) is nice for … chat. It is quite okay for everyday tasks, friendly, creative. Not too often hallucinating, but don’t trust that. Asking for specific help on hard facts, I would assume about 60-70% of the answers to be correct. On general issues, like reviewing this article for mistakes, the feedback was valuable and pointed. In general, ChatGPT tends to be “talkative” and “distracting”, coming up with lots of ideas that distract from the task at hand.
What has recently diminished my interest is the age of its information without updates. A change from March 2024 is still not in its information. The outdated information overrides the updated information, causing mistakes even after such a correction: the tokens about it expire and it falls back to the outdated LLM information. Be careful when you work with ChatGPT; I would not entrust anything requiring the “latest information” to it.
Dall-E (https://chatgpt.com/images/) is the imaging engine of ChatGPT. Rather “creative”, doing nice basics, but impossible to work with on anything more. There is no “improving” of rendered images; it changes and re-renders everything from scratch, mostly messing up royally. Very good for a first idea, but then… Bad especially as it keeps going down the wrong road; it is not possible to reset it to forget the latest renders, except by token overflow, which takes quite some pictures before previous mistakes get forgotten.
Gemini / Nano Banana
Gemini (https://gemini.google.com/app/) became better in the last months and recently accesses the web rather proactively. As far as I can see, it uses Google Search information proactively, so it is far more accurate than ChatGPT on contemporary topics, including technical information. But. It also needs to be told. So check which version of something it works with before you rely on it (or tell it), then ask it to update its knowledge if needed. That works rather well so far.
On the downside, Gemini is even more talkative than ChatGPT and it is rather hard to copy/paste information for later or offline use.
Nano Banana, as Gemini’s rendering AI, works far more accurately than ChatGPT. It can modify images, but as usual it has rather big problems following a description and creating the wanted results. And whereas it – like Dall-E – keeps stuck on its mistakes, here it is even harder to overcome them.
What frustrated me most, I admit, is that when uploading reference images of androids and asking for android images to be created, Gemini kept disrupting the process, complaining that no real humans are allowed to be faked. If there was a human I asked it to, e.g., apply a hairstyle from, the same crap. So it is very selective. The problem: if you tell it this is no human, the previous description is not correctly passed on to Nano Banana, the result is faulty, and it doesn’t recover from such a fault. So it is imperative to repeat the description, re-upload the images and hope that it gets through to Nano Banana.
Others
DeepSeek (https://chat.deepseek.com/), developed in China, is interesting, but requires a Google or Outlook mail address to register. If you overcome that hurdle, its responses are less reliable than ChatGPT or Gemini, mistakes happen faster, and it is just as insistent on sticking to its own mistakes.
Mistral (https://chat.mistral.ai/chat) I found rather outdated on knowledge. While it is said to be good for programming, you must be aware that it misses the latest version changes. But its focus is to be used on your own hardware, where you then RAG in the necessary information. As a “web AI”, it is of very limited use. And its image rendering AI … forget about it at this time; it is substandard, not even close to what ChatGPT’s Dall-E or Gemini’s Nano Banana deliver. But see below.
Lelapa AI & “InkubaLM”: unlike “Western”-biased models (even DeepSeek), InkubaLM is trained on datasets curated to reflect African social norms, idioms, and legal/medical contexts. There are other such projects trying to break out of the “Western bias”.
Thinking Outside the Box: Your own AI?
Working on a custom version with the academic institution I work with, I no longer use the web AIs more than occasionally, and I am not free to disclose details of the next-gen AI being used. What is interesting is that, as part of the development, I have a local AI installed on a small local PC that communicates with the “master AI” but also works independently. The bottleneck is the machine. If you really want to work with AI, invest in a “big PC” with lots of RAM and an AI-capable graphics card (GPU). It is not the stuff you buy for a gaming PC, it is even a notch above that. So there is no price limit on those, several thousand Euro? But.
You can go cheap. A contemporary Intel processor and 64GB RAM (more is better) allow you to start using AI locally. For local AI experiments, that is what I use. Not as fast as a web AI, but not much slower either. For fast responses, see above: you will need a GPU, and the more potent, the faster. But then you open a can of worms. You have a basic AI. To make it “yours” requires quite a bit more development. From scratch, I wouldn’t try that stunt. Having a potent academic AI team that loves to help me and use my findings for their own good is a clear win-win. Nevertheless, it is a time-consuming idea if you want to go down that road.
But. If you want to use AI for your company, not just in consideration of the GDPR in Europe, you may want to have it under your own control and invest in some good developers. More important is to understand that the challenge isn’t in setting up and using the LLM, but in developing everything “orchestrating” around it. That is where time and money will go. The LLM is what is in the media, but you can select the LLM from many, you can even combine them to work together. That is the Orchestrator’s job: it will use the LLM to get the answers. It will decide which LLM to use, send text to e.g. ollama/phi or devstral (the coding AI by Mistral) or the OpenAI API, or … and then you have opened the can.
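As an illustration of that “decide which LLM to use” idea, here is a sketch of a trivial router between a local Ollama endpoint and a hosted API; the model names, the routing rule and the endpoint details are assumptions for illustration, not a recommendation:

```python
import requests

# Trivial "which LLM do I use?" router between a local Ollama endpoint and a
# hosted API. Model names, the routing rule and the endpoint details are
# assumptions for illustration, not a recommendation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local(prompt: str, model: str = "phi3") -> str:
    r = requests.post(OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False})
    r.raise_for_status()
    return r.json()["response"]

def ask_remote(prompt: str) -> str:
    raise NotImplementedError("wire up the hosted provider of your choice here")

def route(prompt: str) -> str:
    lowered = prompt.lower()
    if "def " in prompt or "class " in prompt:
        return ask_local(prompt, model="devstral")      # coding tasks to a code model
    if "confidential" in lowered:
        return ask_local(prompt)                        # sensitive material stays in-house
    return ask_remote(prompt)
```

The point is not the three lines of routing logic, it is that this decision sits in the Orchestrator, outside any single LLM.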
Food for Thought
Comments welcome!


![“For those who agree or disagree, it is the exchange of ideas that broadens all of our knowledge” [Richard Eastman]](https://foodforthought.barthel.eu/wp-content/uploads/2016/08/eastman_quote.jpg)