### Thought about Learning with LLMs

I really enjoy learning with LLMs. If you know what you are doing, that is, you know how learning works and don't cheat yourself by assuming you'll remember everything you see once, you can bounce ideas around, mold them, and explain them until you remember. It is much more fun than any other mode of learning, with the same reward mechanisms as a conversation with a tutor.

The major danger is that you just cannot be sure all the information you get is correct. You never can, not even with a human tutor ([[Schools also Hallucinate]]), and I think it largely doesn't matter, because you learn by using it: as soon as you get in contact with the reality of a problem, you're bound to correct the mistakes.

But I feel the optimal future is an education system where you learn anything you want quickly from such a conversational system. On the same platform, you could also query an actual working expert once you reach a certain level by self-study. This would solve multiple problems: experts not finding new people to solve problems (company hiring), hallucinating software, isolation, fear of being replaced. ([[Invent A Silly Thing#2026-03-20 A Future Learning Platform|Invent A Silly Thing]])

That last one, the fear of being replaced, is probably just not going to happen. Let's assume that AI could one day be considered alive, running as an independent, self-sustaining process. It is then separate from us, either neutral, beneficial, or an enemy, and humanity will deal with it accordingly. We either lose or just weather the new situation like we would a new pathogen. This is not a fear of being replaced but a doomsday fantasy, most of which don't actually happen.

Much more likely is a shift towards incremental learning (and [[Incremental Action|Incremental Society]] in general): building up one's own ideas, goals, and know-how, and then asking interactive systems to carry out the first steps (build an app, provide cutting-edge research, set up a production line...). Scaling this up to the current state of the art will always require concentrated effort and collaboration with established parties. I imagine a world in which research won't even be published anymore for the most part, but directly fed into databases and served to people and AI agents alike when it pertains to something they are currently working on.

So you can ask an LLM to write in the style of Shakespeare. That's a statistical distribution. You cannot ask it to create the next text that will become famous like Shakespeare's work. That is an ill-defined question. A summary of Hamlet will teach you about a very famous story, but not why this story has become famous. You can ask it to give you an authentic sentence using a certain word in a language you are learning. That's again statistics, gathered from how actual people talk online nowadays.

But while learning and working in technical fields will now become even more interesting, and the humanities might specialize even more to be for and about the human condition, I think the technical problem of information retrieval (both from the unknown into the human sphere and among humans) is now more or less solved; the quality of such a system will depend on its ethical, legal, and societal framing. If you think about it, every human being is a database of highly similar knowledge. We are all different because of how we use that knowledge. It is the cultural framing that determines how we use knowledge.
And so it will be the cultural framing of different AI providers that determines the quality and usefulness of their output. [[The cultural framing will determine the quality of AI output]]

### Thoughts on LLM Output Quality between Technical Domains and Humanities

I don't expect the output quality for the humanities to get better. In technical domains there is a right answer, or at least it is possible to disprove a statement (falsification, [[Karl Popper]]). That is, papers, claims, programs (do this automatically), designs, and policy can either be correct or not. Human institutions peer-review and fact-check for safety and to reduce abuse. Machines do that even better. So a feedback loop in which algorithms make better algorithms that make decisions based on technical knowledge is very plausible. And learning technical knowledge is, too, for the same reason. An LLM knows what a particular formula says better than the nuance behind words like "can" or "could, under certain circumstances...". At least this is my intuition, based on the vague output I receive for questions from the humanities and about physical subjects. Authors in the arts and humanities might furthermore want to defend their work, so it won't be directly accessible.

And crucially, I think the humanities and art don't have any other meaning than their form. We read a philosopher just like a narrative writer: for the way they tell their stories. The process of reading is where we learn and feel. It doesn't make sense to automate either side. If we were to analyze the works of philosophy, we would find out that they all talk vaguely about subjects they don't really know anything about. There is nothing to prove, no right or wrong answer. Matters of morals, ethics, taste, beauty: they all exist in the realm of a human being's interior appreciation of the world. Engaging with them enriches life experience, nothing else.

There are secondary effects: a particular idea, read by a particular person at a certain point in time, can increase the probability of them finding a new analogy that leads to an actionable technical statement, something that can be built or calculated. I don't think we are anywhere near understanding how that works exactly, and AI probably isn't either.

It also doesn't make sense to automate engagement with the humanities. A summary of [[Laszlo Krasznahorkai]] doesn't give you the same feelings, memories, and interests as reading the actual book. A bullet-point list of what Nietzsche thought about a good way of life won't cause any miraculous insight into how you could lead one.