{ "url": "https://www.youtube.com/watch?v=8GGuKOrooJA", "metadata": { "video_id": "8GGuKOrooJA", "title": "AI Dual Manifold Cognitive Architecture (Experts only)", "description": "All rights w/ authors:\n\"MirrorMind: Empowering OmniScientist with the Expert Perspectives and Collective Knowledge of Human Scientists\"\nQingbin Zeng 1 Bingbing Fan 1 Zhiyu Chen 2 Sijian Ren 1 Zhilun Zhou 1\nXuhua Zhang 2 Yuanyi Zhen 2 Fengli Xu 1,2∗ Yong Li 1,2 Tie-Yan Liu 2\nfrom\n1 Department of Electronic Engineering, BNRist, Tsinghua University\n2 Zhongguancun Academy\n\n\"PersonaAgent with GraphRAG: Community-Aware Knowledge Graphs for\nPersonalized LLM\"\nSiqi Liang 1*†, Yudi Zhang 2*, Yue Guo 3\nfrom\n1 Purdue University\n2 Iowa State University\n3 Columbia University\n\n#airesearch \n#machinelearning \n#scienceexplained \n#deeplearning \n#artificialintelligence \n#aiexplained", "uploader": "Discover AI", "channel": "Discover AI", "channel_id": "UCfOvNb3xj28SNqPQ_JIbumg", "upload_date": "20251127", "upload_date_formatted": "2025-11-27", "duration": 4262, "duration_formatted": "01:11:02", "view_count": 8597, "like_count": 452, "comment_count": 104, "tags": [ "artificial intelligence", "AI models", "LLM", "VLM", "VLA", "Multi-modal model", "explanatory video", "RAG", "multi-AI", "multi-agent", "Fine-tune", "Pre-train", "RLHF", "AI Agent", "Multi-agent", "Vision Language Model", "Video AI" ], "categories": [ "Science & Technology" ], "thumbnail": "https://i.ytimg.com/vi/8GGuKOrooJA/maxresdefault.jpg", "webpage_url": "https://www.youtube.com/watch?v=8GGuKOrooJA", "audio_available": true, "extracted_at": "2025-12-03T13:23:34.282948" }, "transcription": { "text": "Hello, community. So great to do you back. Today I have a little bit of an EI revolution for you. So at first, welcome to our channel, this Kariai. We have a look at the latest EI research paper, the latest three research paper that I selected here for this particular video. 
And I will talk about a dual manifold cognitive architecture. And I think this is a little bit of an AI revolution. And I will argue that this might even be the future of the complete AI industry. Let's have a look. Now, you know what the problem is? Our LLMs currently operate on a single manifold hypothesis. They flatten all the training data, all the personal habits, all the individual biases, all the historic facts, and all the collective reasoning of a domain like physics or chemistry into a single high-dimensional probability distribution. And up until now, this was just perfect. It was great. But I'm going to argue that our DMCA, our dual manifold cognitive architecture, will define intelligence much better: not as next-token prediction, like we have currently with our LLMs, but as a geometric intersection of two distinct topological vector spaces that we are going to build. Now have a look at this. I'm just amazed at what Gemini 3 Pro Image Preview, my little Nano Banana Pro, can do here. I spent about 20 minutes describing this image to Nano Banana Pro, and after three attempts we got this beautiful thing. We're going to go through each and every part. So let's start. This is our paper of today. This is by Tsinghua University in China, November 21st, 2025: MirrorMind. And the title tells it all. We want here, more or less, to mirror a real human mind. We want to really understand a certain scientific personality, empowering the omniscientist, the AI scientist, with the expert perspectives and the collective knowledge of human scientists. So we're not satisfied anymore to build a synthetic AI system; we want to bring it closer to the human scientist. You immediately see that we have a common topic, the AI persona agents. In one of my last videos I showed you the contextual instantiation of AI persona agents, as shown by Stanford University just some days ago. 
And now we have here the other outstanding university, Tsinghua University, and they have now the same topic. And they tell us: you know, when you ask your AI to act as a scientist, when you prompt it with 'act as a financial broker', 'act as a medical expert', 'act as a scientist', a standard LLM up until now relies on a flattened representation of all the textual patterns. But you know what, it lacks the complete structural memory of a specific individual cognitive trajectory. And this is what Tsinghua University is now trying to map, to advance the AI system. So what do they do? They shift the paradigm from pure role playing ('you are now a medical expert'), which is more or less fragile because you have no idea about the pre-training data of this particular LLM, to a cognitive simulation, which is structured and constrained. I'm going to explain why we have structure and what the mathematical formulas are for the constraints we're going to impose on a specific LLM. Now, the authors of MirrorMind argue that scientific discovery is not just fact retrieval. So we go here to a very specific case, we go into science, and we want to have here a discovery process. I want to find new patterns, new interdisciplinary patterns between physics, mathematics, chemistry, pharmacology, whatever. So it is about simulating now the specific cognitive style of a scientist, more or less the individual memory of a human, that is now constrained by the field norms, this means by the collective memory. And I think this is really the end of the one-size-fits-all age, because all these more or less flat generalist frameworks like LangChain or AutoGen, they all fail in specialized domains, and I have multiple videos on this. But now we're going to build not just a digital twin, but a cognitive digital twin. 
So they really pushed the boundaries here, let's say, from a simple data repo to a functional cognitive model that can predict future AI directions, offering here, and this is now the interesting part, a blueprint for automated scientific discovery. And it's not going to be as simple as we have read in the last publications. So as I said, let's start here with our little tiny AI revolution and let's have a look. Now, Tsinghua tells us: we have here now the individual level, the singular human level. Now we look at the memory structure. And they decided that everything we had up until now was not enough. So they go now with an episodic layer of memory, a semantic layer of memory, and a persona layer. One layer builds upon the other, and then we build a gravity well. We build here a force field, if you want, with very specific features. And this is then our first manifold for our dual manifold branding. So let's have a look. They start and they say: okay, you know, the basis here is the episodic memory, all the raw papers, all the facts, everything that you have, the PDFs, I don't know, the latest 1,000 medical PDFs or the latest 10,000 publications in theoretical physics. Then we go for a semantic memory. Here we have, if you want, an evolving narrative that is now developing of a single person, of the author's research trajectory. Now, if we go for the individual level, we restrict this to one person and we just look at the temporal distillation pipeline of this single person. What has the author written in the first month? What has the author written in the second month? Then we go through all the 12 months, we have yearly summaries, and we want to answer how the thinking of a single scientist evolved, not just what he has published. So whenever, you know, you give an LLM or any AI system computer-use access to your files and your local desktop, laptop, whatever you have. 
Now this is great, because now all those data become available: every email, every file that you worked on, everything, if you prepared your PhD or you prepared any publication. How many months have you been working on this? How many versions of the final paper are stored in your directories? Now, if an AI had access to this, it would really be able to map your personal or my personal thinking process, my mental evolvement, if you want, how I understand this topic. And if we are able to bring this into a temporal pipeline, we can distill further insights. And then, if you have this information, let's say of my persona, we have now an agent or an LLM that can build my persona schema, with all my knowledge about mathematics, theoretical physics, whatever. So we can build now an abstraction, a dynamic concept network, capturing now, let's say, my stylistic but also my reasoning preferences; all my knowledge is now mapped to an AI system. Plus, everything is timestamped. So we have here, as you see in the semantic layer, a perfect time series going on for months or even years, depending on how much data you have on your computer. So they say: okay, let's start with the individual person and let's build this. Let's do this. Let's follow their traces. Okay, the episodic memory, as you see, is here the very last layer at the bottom. What is it? We have now what they call a dual index structure, to handle the specificity of scientific terminology. Now, I don't know about you, but in theoretical physics we have really long technical terms, also in astrophysics and in high energy physics, elementary particle physics; think about medicine, long Latin terms, think about pharmacology. You understand immediately: you are not allowed to make one single typo. So you cannot just hand this to an LLM. So what do you do? You build a hybrid RAG engine. Of course, our good old friend, the RAG machine. 
But now the RAG documents are parsed into semantically coherent chunks. So what we do now: we have a certain chunk, let's say a sentence, or maybe a complete paragraph if it's a very homogeneous paragraph; then we have the source document, this is in file number whatever; and we have a timestamp. So exactly here the record: when did I write down this sentence on my computer, when did I publish it, or when did I just draft it, when did I send it out in an email to my friends; exactly timestamped, plus the complexity of the topic. Now, if you do this for millions and millions of chunk IDs, you have no idea where we are. And the MirrorMind authors say: hmm, you know what, we looked at all the vector search capabilities and they are often too fuzzy for real science. So what do we have to do? We have specific acronyms or chemical formulas, and they all must be exact. You can't go with an LLM that just has a probability distribution for the next-token prediction. So therefore we will choose not an LLM but something different. So they went with an episodic memory that stores every chunk of information they found, let's say on my computer, in two parallel searchable indexes. The first is a dense vector index. This is what you know: a high-dimensional embedding via the encoder model of a transformer, for conceptual similarities. So we build a new mathematical vector space and we say: okay, given the semantic similarity of my, let's say, 100 files and the content of these files, we can now place the vectors in the new vector space and arrange those vectors so that we capture the conceptual similarity of the technical terms. But talking about technical terms: those we now store separately, because we use a sparse inverted index. This is a standard BM25 index for underlying exact lexical matching. 
So the keywords, the symbols, the technical terms that we have go into a separate index. So there's no mixing up and there's no hallucination by any LLM; we cannot afford this in physics or chemistry or medicine. And then, since we have now two specific scientific indexes, we can merge the results via a rank fusion, a reciprocal rank fusion. And this is the way they set up the episodic memory of a single researcher. So this is all the scientific content over the last five years that I have, let's say, on my laptop. Right. The next step is the semantic layer. As you can see, the semantic memory builds on the episodic layer and performs what they call a cognitive distillation. If you're familiar with MapReduce from the very early days, you know exactly what we're looking at: a MapReduce distillation pipeline. This is all there is. So let's see: they use an LLM to transform them. Now all the definitions from the episodic layer come up. Let me just give you an example. I say: analyze the cognitive evolution, focus on any maturation of ideas of this stupid human, any conceptual shift that you can detect across all the hundreds of thousands of files on his notebook, or any changes in the research focus of this person, or the methodology he uses. Or why suddenly, in, I don't know, April 2019, I decided to go from a particular branch of mathematics to a more complex branch of mathematics, because the complexity of my problem suddenly increased. An LLM should now distill this from all the episodic layer elements with their timestamps, as you see here in the MapReduce pipeline. And if we have this information, you know what we're going to build: a trajectory. As you see here, we have a trajectory over time, of trends, of keywords, of topics, whatever clusters you can define, if you're particularly looking for some quantum field theoretical subtopics. 
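To make the two-index episodic memory concrete, here is a minimal sketch of how a dense-vector ranking and a sparse BM25 ranking could be merged with reciprocal rank fusion. The paper names only the technique; the function, the constant k=60, and the chunk IDs below are my own illustration.

```python
# Illustrative sketch (my reconstruction, not the paper's code): merging a
# dense-vector ranking and a BM25-style lexical ranking with reciprocal
# rank fusion (RRF). Chunk IDs are invented for the example.
def reciprocal_rank_fusion(rankings, k=60):
    # RRF score of a chunk = sum over rankings of 1 / (k + rank)
    scores = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# The dense index favors conceptually similar chunks; the sparse BM25 index
# favors exact matches of acronyms and formulas, which must never be fuzzy.
dense  = ['chunk_07', 'chunk_12', 'chunk_03']   # best first
sparse = ['chunk_12', 'chunk_99', 'chunk_07']   # best first
fused  = reciprocal_rank_fusion([dense, sparse])
print(fused[0])  # chunk_12: the chunk that ranks well in BOTH indexes
```

Note how a chunk that is merely good in both lists outranks a chunk that is first in only one; that is exactly why RRF is a common default for hybrid retrieval.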
So you see exactly how my knowledge evolved over the last five years, and I had to do nothing, I just gave you my laptop and this is it. Now, they model a cognitive trajectory. So they say: now we distill the semantics. The system now understands the reasoning link that I had in my mind between a paper I published, a file A on my laptop, and the file B. So what it does: it captures what they call the cognitive inertia of my intellectual topics. Now, this is interesting. You see, we have now a five-year timeline of my scientific work. We have now, semantically, a complete time series. And guess what we do next? Yeah, if you want a very simple explanation, think of the semantic memory as a biographer AI system. It looks at everything on my computer and says: okay, there's this fellow, and this is the way he's doing science now. So it turns isolated timestamps into a cohesive intellectual history. And if we have this, the next step is, of course, and you already guessed it, a mathematical transformation. We have now the next step, and we go to the persona layer. Now I am modeled in my, what do I call this, scientific intellectual development. We are now transforming this from a temporal flow, from the time series, into a topological structure. And the simplest topological structure that we know is a knowledge graph with specific weights. So we have here a particular focus on some topics, and I'm going to explain what I mean in a second. The simplest way to explain this is with an example. Let's say the input signal now entering the persona layer is: in 2023, the author moved away from his CNNs, convolutional neural networks, and started focusing heavily on graph neural networks. Now, you know this is not true, because we did this in 2021 together on this channel, but just to be on the safe side, it's just an example. 
And we did this for molecular modeling, see my videos from 2021. Okay, great. So what do we do now with this? The system, looking at the sentence that comes up from the semantic layer, says: okay, we have to create some nodes. Now we have to build a topological structure, let's have here a knowledge graph. So what is new? We have here CNNs, we have here the GNNs, and we have molecular and modeling. So let's build this. Now, of particular interest is of course the quality of the nodes. GNNs are not just a subtopic, but a main and major topic, no, graph neural networks. So it becomes a concept node. Molecules: there are thousands and millions of different molecules, so it becomes a concept node again. So you see, we already introduced a kind of hierarchical structure in our knowledge graph. And now we have a certain weighting that we're going to apply, because it might decay or lower the centrality of particular nodes; this is a graph-theoretical feature that I explained in one of my videos. And because it is stated, falsely, that in 2023 (it was 2021) I moved away from CNNs, currently the centrality, the importance across all the subnetworks of my graph, places CNNs somewhere lower in importance. No, they're not as important right now. They calculate this with centrality measures. And if we have this, and here you see it, the persona layer: this is now my profile. I have a profile in machine learning; these are my subtopics. I studied, I learned, I published, I wrote code, or I did not publish and just have it on my computer, whatever. And then we have some work in bioinformatics, I've done something there, whatever other topics you have. How strong are the interlinks, how strong are the edges between these topics? So we build a knowledge graph of my temporal scientific evolution as a scientist. But we are not happy with this, because we are going to map this further. 
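The 'node mass from centrality' idea above can be sketched in a few lines. The paper says a graph-specific centrality measure determines the mass; which one is not fixed in this talk, so I assume the simplest, weighted degree centrality, and all node names and edge weights below are invented for the example.

```python
# Illustrative sketch: a tiny persona knowledge graph where a node's 'mass'
# is its weighted degree centrality (an assumption; the paper only says a
# graph-specific centrality measure is used). Names/weights are invented.
edges = {  # undirected edges, weight = current strength of the link
    ('GNN', 'molecular_modeling'): 0.9,
    ('GNN', 'machine_learning'):   0.8,
    ('CNN', 'machine_learning'):   0.3,  # decayed: the author moved away from CNNs
    ('molecular_modeling', 'bioinformatics'): 0.5,
}

def mass(node):
    # weighted degree centrality: sum of weights of all incident edges
    return sum(w for (u, v), w in edges.items() if node in (u, v))

print(round(mass('GNN'), 2), round(mass('CNN'), 2))
# GNN carries far more mass than CNN, so the gravity well center shifts to GNN.
```

The decayed CNN edge weight is exactly the 'lower the centrality' step from the transcript: as the author's focus moves on, old topics lose mass without being deleted.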
So in this step, we mapped it from the temporal flow of the semantic layer, the time series, into a topological structure. But this topological structure is not really a world where we can have smooth transitions and integrals. This is a graph. Come on, this is bulky. This is not elegant. So what we're going to build is a gravity well. We're going to build a field representation. This is the blue heat map that you see on top. And this now shifts the center. Let's say somewhere there was GNN; it now shifts the center here to GNN. So you see, we have a lot of mapping here to capture the internal, individual, my personal evolution. But this is not all that is done by the AI. Now the AI says: okay, let's do some inference. It looks at the new topology of the graph and asks: given this new shape, what kind of scientist is this person now? If, I don't know, some AI asks: okay, who is this person who does all these beautiful YouTube videos, what are his actual current characteristics? And the system might now update, if it's working for me, the system prompt in a way that says: okay, listen, if you work with this guy as an AI, your style has to be highly theoretical, based on first-principles reasoning. So you see, all of this just to arrive at this simple sentence; the AI has now a perfect characterization of my actual learning experience, understanding what I know and what I do not know, and now the AI is the perfect intellectual sparring partner for me. Now the AI system is the perfect professional AI companion for theoretical physics, for bioinformatics, or whatever. So what we have achieved is not only to build me as a perfect mirror mind for the AI to understand, but the AI can now decide to find the perfect complement to my intellectual profile. So it is the perfect partner for me to have an augmentation or an acceleration of the research. 
Now you can look at this of course from a mathematical point of view and ask: why was this necessary? I mean, look at this, we went through four different mappings. Why? Well, LLMs cannot calculate a similarity against a story, against my learning. They can calculate it against a vector or a graph state. It is a simple mathematical operation. And now, by converting the trajectory into a weighted graph, the system can mathematically compute: hey, if I get a new idea, how close is this to the current network, to the current, if you want, gravity well, to what we call the scientific intellectual capacity of this person? Now we can calculate it. And if we can calculate it, we can code it, in Python, C++, whatever you like. Now, I have already been talking about this gravity well, and I just call it a gravity well, call it whatever you like; it's just important that you understand the idea. What is it? If we change the framing and look at it from a little bit more of a mathematical perspective, you immediately see it's a probability density field that we derive from the topology of the persona graph. The persona graph allows us this mapping into an n-dimensional gravity well. So how do we do this? I mean, how can you have just a stupid graph, a flat planar graph, and suddenly a three-dimensional beauty of a manifold? The authors tell us the way they decided to go. They say: okay, first the system calculates the mass of every existing node in our network. And MirrorMind determines the mass using a particular graph-specific centrality measure. This is the way they determine the mass of every node, or, if you will, the importance, the current temporal evolvement of my scientific knowledge. And then they also define the distance. The distance, you notice, is of course, in the embedding space, one minus cosine similarity. Beautiful. 
Here we go for a simple distance; later we are going to discuss some other hypothetical spaces, where it becomes a little bit more difficult. Now, this blue gravity well is, let's go to the next step of abstraction, a kernel density estimation over the embedding space of the persona graph. I have multiple videos on kernel density estimation, but in summary you can say that the gravity intensity G at a point Q in my blue gravity field, and let's say Q is now a new idea, is the sum of the influences of all the nodes in the graph, exponentially decaying with distance. I mean, this is the simplest thing you can think of, right? Everything has to contribute, but we have an exponential decay function so that not everything contributes in equal measure; the points that are the closest are the most influential. I mean, it couldn't be easier, you know? And here we have this simple formula that the experts from Tsinghua University show us. Great. So what does this do? This deep blue visualizes now a specific region of, let's call it a latent space, where the author feels, or I feel, most comfortable. You see it here in this dark area; I called it 'more of the same'. This is my expertise. This is what I know exceptionally well. I've worked the last two years only in this dark area of the gravity well. Those are my topics. This is what I know well. But of course, if I want to have a brand new discovery, now they argue, hmm, maybe it is not exactly in the same old thing that you have been doing for two years, because otherwise you would have discovered it already. So maybe it's somewhere else. And they say: okay, what we have to do now is find a mathematical algorithm, a repulsive force that acts on this, if you want, gravity well structure, to bring me out of my minimum, over the mountains, and somewhere beautiful and new. So what I need is a novelty repulsor. 
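The field just described can be written down directly: G(q) = sum over nodes i of mass_i * exp(-d(q, n_i) / h), with d(q, n) = 1 - cosine similarity, as stated above. Here is a minimal sketch; the bandwidth h and all toy embeddings and masses are my own choices, not values from the paper.

```python
# Illustrative sketch of the 'gravity well' as a kernel-density-style field
# over the persona graph's embedding space. All numbers are invented.
import math

def cos_dist(a, b):
    # distance = 1 - cosine similarity, as in the transcript's definition
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def gravity(q, nodes, bandwidth=0.25):
    # every node contributes its mass, exponentially decaying with distance
    return sum(m * math.exp(-cos_dist(q, v) / bandwidth) for v, m in nodes)

# Toy persona nodes: (embedding, mass from centrality).
nodes = [([1.0, 0.0], 1.7),   # heavy node, e.g. 'GNN'
         ([0.0, 1.0], 0.3)]   # light, decayed node, e.g. 'CNN'

inside  = gravity([0.9, 0.1], nodes)  # a query deep in the comfort zone
outside = gravity([0.1, 0.9], nodes)  # a query far from the heavy node
print(inside > outside)  # True: G is high inside the dark blue region
```

A high G(q) means 'more of the same'; the novelty repulsor discussed next treats exactly this quantity as a penalty rather than a target.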
I have to have a force acting on me, sitting here, bored, doing the same thing over and over again and not discovering anything new. So push me out of this, and let's go somewhere we have never been before. So you see, it wants to simulate the discovery, not the repetition. Repetition is done in the blue. And therefore the algorithm treats my author persona graph not as a target to hit, but exactly the negative: as a penalty zone to avoid. Now the thing becomes interesting, because yes, you can push me with any force out of my stable position at a minimum, but in what direction do you push me, where should I go and continue my research? And now the authors say: well, what we have is a second manifold, an external manifold. And this external manifold is here, let's say, OpenAlex. So this is the knowledge of all, I don't know, one million published papers in the topics that I research; it's a free and open source database of scholarly research papers, authors, institutions, everything is there. And they say: okay, this is now the outside world. This is now the second manifold. Here is my personal manifold, and here is the community manifold in total, the global science community: where they are, what they have done, what they examine, where they're heading. And they say: let's do this. And they build now, simple idea, a wireframe grid. So you don't have to build a real smooth manifold; a wireframe grid is enough. You just have some estimation points and you can connect this net, right? So what do we add here to my stupidity on the left side, in the blue valley? We add, if you want, a social connection to my scientific community: the research community from astrophysics, and some new ideas might come from astronomy, some new ideas might come from medicine, whatever. So we go now from a simple approach to an interdisciplinary approach. 
So we have here now one manifold and a second manifold, and the second manifold is also constructed so that we can clearly detect hallucination. Because if the LLM suddenly hallucinates something, we can flag it here as this rabbit hole and say: okay, let's forget about this hole. What we are interested in is the maximum of the community knowledge: can I contribute with my knowledge to the open problem sitting here at the top of the mountain, this particular sweet spot? And you see, I told you a force has to push me out, and this is now the path to the optimal research idea, P-star. As easy as can be. And again, thank you to my Nano Banana Pro, because it took me about 20 minutes: I put all the data in, I said, hey, display this summary, I want this and this position over there, and it just did it. There was not one mistake here. Okay. Now, this is the story, my story as a scientist. But now, of course, we have to code this. And if you want to code this, we have to work with agents, we have to work with LLMs, we have to work with networks, we have to work with different mathematical operations like mapping functions. So let's do this now. Okay. So the authors say: we need an interdisciplinary level where a super coordinator agent is supervising everything; notice, this is the mastermind. And this coordinator agent decomposes now an incoming query and routes the parts to particular domain agents, which are navigating the OpenAlex concept graphs or building the graphs, or to the author agents, which understand my scientific personality, no? So the system solves the proposing of complementary ideas as a dual constraint optimization. I have both manifolds, and in both manifolds I have constraints. And now I have to do a dual constraint optimization process in mathematics. Couldn't be easier, no? It is just the perfect path. Let's do this. 
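The coordinator pattern just described can be sketched as a simple fan-out/collect loop: decompose the query, route it to an author agent and to domain agents, gather the answers. All agent names, the routing table, and the return strings below are invented placeholders, not the paper's API.

```python
# Illustrative sketch of the coordinator agent: route one query to the
# persona side and to the community side. Everything here is a placeholder.
def author_agent(query):
    # stands in for the agent holding the persona graph / gravity well
    return 'persona constraints for: ' + query

def make_domain_agent(field):
    # stands in for an agent navigating a community concept graph
    def agent(query):
        return field + ' community knowledge for: ' + query
    return agent

ROUTES = {
    'persona': author_agent,
    'physics': make_domain_agent('physics'),
    'biology': make_domain_agent('biology'),
}

def coordinator(query, targets):
    # decompose = fan the query out; integrate = collect the answers
    return {t: ROUTES[t](query) for t in targets}

result = coordinator('neuromorphic battery', ['persona', 'biology'])
print(sorted(result))  # ['biology', 'persona']
```

The integrator described later in the video would then braid these per-agent answers into one proposal; here they are simply collected in a dict.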
So the idea, or if you want, the optimal idea that I as a researcher am looking for, P-star, is forced to exist in the Goldilocks zone, right on the rim. It has to be valid science that is accepted by the scientific community, but also really close to my particular areas of expertise: something that I as an author have almost developed, almost thought of, but I just didn't take this little tiny baby step. So what we are going for are the easy wins. The AI would analyze: hmm, this particular guy with his YouTube channel, he did some research here and he was almost there to discover something, and the community also indicated there might be some new element. So let's tell him: hey, go in this direction, learn this and this and this, and then you will make a significant step in your knowledge and discover a new element. So, and now I need a little bit of feedback from my viewers, because I'm trying to accelerate my learning, but at the same time I'm trying to accelerate my understanding through visualization, so I can communicate better with you, my viewers, my subscribers, and the members of my channel. And this is the first time I really invested heavily in the visuals, with Nano Banana Pro for example, to build a visualization of a complex theme that spans more than 40, 50, 100 papers, and I try to bring it here onto one simple image. It is not easy, but I will continue this if you as my viewers like it and want this additional visualization. So MirrorMind, and the next paper, what they call PersonaAgent, demonstrate now that vector databases alone are simply insufficient for complex reasoning. What we need are more complex graph structures and mappings from graph to graph, to represent new and established relations between the different memories. And in MirrorMind, I showed you the temporal evolution of my scientific mind. 
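The Goldilocks-zone selection of P-star can be made concrete with a toy scoring rule. This rule (support times familiarity times one minus familiarity) is my own stand-in for the paper's dual-constraint objective: it rewards community support and peaks at moderate familiarity, so both 'more of the same' and far-fetched ideas lose. All candidate ideas and numbers are invented.

```python
# Illustrative sketch: scoring candidate ideas against both manifolds.
# The scoring rule is my own stand-in, NOT the paper's formula.
candidates = {
    # idea: (community_support, author_familiarity), both in [0, 1]
    'better cathodes (more of the same)':  (0.8, 0.95),
    'glia-inspired electrolyte transport': (0.6, 0.40),
    'quantum-gravity batteries':           (0.1, 0.05),
}

def goldilocks(support, familiarity):
    # familiarity * (1 - familiarity) peaks at 0.5: close enough to the
    # author's gravity well to build on, far enough to be a discovery.
    return support * familiarity * (1.0 - familiarity)

p_star = max(candidates, key=lambda c: goldilocks(*candidates[c]))
print(p_star)  # the interdisciplinary idea on the rim of the well wins
```

The first candidate models the deep blue well (high familiarity, so the novelty repulsor kills it), the last one models an idea the community manifold cannot support; the middle one sits on the rim, which is exactly the P-star of the transcript.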
Now, if you have a closer look at this, especially the semantic memory, it explicitly models how a scientist's mind changes. But do you understand what is happening now? We break with one of the most important assumptions that we had in artificial intelligence. And this was that everything is a Markovian system. Suddenly it is no longer the case that I can just look at the system and say: this is the current state of the system, and it does not depend on the history. Because now that you mirror a human brain, a human mind, it very well depends on my personal history: where I started to learn mathematics, then physics, then whatever, and then, you know, bit by bit, I got a little bit better. You have to understand the time evolution. So suddenly we break with the Markovian state. This means that a lot of the algorithms that we have for LLMs also break and become invalid, inoperable. So now these things become really interesting. And now you might ask: hey, I'm just here to learn how to code an agent. Do agents do any of those operations you are asking for? And I say: I'm so glad that you ask this question, because now I can tell you about the multi-agent interaction pattern in the work done, with the coding, by Tsinghua University. And I want to focus here on the multi-agent cognitive engine. As I told you, we have an interdisciplinary coordinator, our super AI that understands everything, can sort everything, can plan everything, can execute everything. Great. So what it does: it takes in my human query. Hey, I don't know, find me the next research topic, because I as a human am too stupid to know where I want to go. Okay, so this AI says: okay, I send out two query vectors. I send a query vector now to, you know, the manifold here on the right side; this is my human learning manifold. 
And on the left side, they send the same query vector, in an embedding, in a mathematical tensor structure, to the other side. And this is the objective reality, so all the hundreds of thousands of research papers that are now suddenly in the brain of the AI system. So this is the collective domain of theoretical physics, of medicine, you get the idea. But let's say we have built here a holographic wireframe wall. This is my idea; go with whatever you like, this is just an illustration, I try to find a way to explain this idea to you. And let's say we have here a domain agent. And the domain agent is just reading, every day, the latest AI research publications that have anything to do with theoretical physics. And then we have here an agent that is reading every single scientific paper that has to do with biology. And they build their internal representation and their network, their wireframe, of the complexity of the topics and the dependencies in science. Great. So, if you want, we have here the domain knowledge graph of physics combined with biology. And now the query vector comes in. This is a very specific query vector with a brand new idea. And this is now: hey, has the general global research community ever heard of this idea, of how I should develop as a human? Is there anything related to it? Is there any publication that gives me help? Is there any publication that guides me in my personal development? Has anybody tried something crazy enough or similar enough? And now we are again working with a cosine similarity in a normal vector space. It explores the space and says: yeah, we found some paths of augmentation; your idea is not as stupid as you think, maybe it's a valid idea. And from the complete, if you want, knowledge graph of the world, we provide now the particular output. This is the green beam. We provide this as an output. 
But at the same time, of course, this query vector was also sent to my personal learning manifold. Now, I told you I have a repellent force field here, the orange one. I do not want an incoming query vector that is just the same as what I am already doing. More of the same, I don't want this. I want to go for a scientific discovery, go where no one has gone before; you know the story. So if this vector crashes through my force field, it has to have a certain impulse, a certain impetus. And then I analyze it against all the different layers of the individual personality of my MirrorMind: is this an idea that would push me out of my deep blue gravity well into a new direction? And the system says: hey, yeah, this sounds absolutely interesting. I have my experience in topics A, B and C, this is my specialization, and I send out the orange beam toward novelty. So now we have the knowledge integrator, which is something beautiful. This is where the braiding is going to happen: we combine the green beam and the orange beam into something completely new, and the output of this will be my new research direction, my new research title, where I should move to make a scientific discovery, as decided by the AI system. Okay, let's go with this. I hope I am clear so far. If not, let me give you an example of how it works. Let's say we have the idea: hey, let's build a neuromorphic battery. Batteries are always a topic on this channel, okay? So what is the flow diagram? We have a coordinator agent that takes in my crazy idea of building a neuromorphic battery. The coordinator says: okay, I now activate an author agent for me, if I am already mapped in the system; if not, you can build one.
Your author agent, you get the idea. And a domain agent for biology. Great. So this is me, and here we have the agent for biology. The coordinator activates and creates these agents. Then my author agent, the individual persona if you want, has access to my persona graph, to my history: whatever I have already researched on cathodes and electrolytes and voltage fade, all the constraints, whatever I do every Tuesday to build better cathodes. So it says: don't go there, because this is what he is already doing, and it has produced no discovery at all. It pushes me away from those areas I already cover. Then the domain agent, the collective agent for biology, looks at all the publications, at the biology concepts related to energy. It finds neural glia cells and the concept of ion regulation, and returns: yeah, there is something like ion regulation in biology that maps to electrolyte transport in batteries. Maybe there are some hidden patterns in the molecular transport architecture that we can carry over from biology into battery technology. And then comes the cooperation phase, the optimization, as I showed in the blue well. The coordinator asks: hey, is this a valid path? The domain agent says yes; after reading 50,000 publications, this is supported. The author agent says: I have never mentioned glia cells in my last 50 papers, so for me this is a completely new topic. Not new to science as a whole; I just never focused on this particular point of research. So let me do this. Then they compute a novelty score, and they try to maximize this novelty score, so the AI is going to give me a genuinely new topic. And the integrator now generates the final output.
And the integrator says: hmm, after having looked at all the research papers and at what you have learned in your last 18 years, I give you now a proposal: design a self-regulating electrolyte gel that mimics the ion-buffering capacity of a neural glia cell to prevent voltage spikes. This is your topic. This is your PhD. Do it; if you solve it, great. You are going to spend millions of dollars either way, so never mind. But this was only the first paper. And I told you, I want to accelerate my learning and my explanation, and we can go to higher complexity, because now with nano banana pro I hopefully have a tool to show you my ideas, how I see things, and maybe it becomes clearer to you, or you say: hey buddy, no way, what are you thinking? So let's increase the speed, let's increase the acceleration, and let's go to another paper. You see I place it here, and this is also a paper from November 21st. This one is from Purdue University, Iowa State University and Columbia University, and their topic is PersonaAgent with GraphRAG, our good old friend GraphRAG. What they build are community-aware knowledge graphs for personalized LLMs. And you might think this sounds really similar to what we just did. What a coincidence that I selected this paper, but both were published on the very same date. Okay, on a first raw reading they tell us: hey, our method improves the F1 score on data organization by 11%, and movie tagging is improved by 56%. And I say: okay, if this is the step in improvement, let's have a look at this paper. So, persona agents. Let's say you want to build a little Einstein. No problem. The authors tell us: our framework generates personalized prompts for any AI system by combining a summary of the user's historical behavior. Let's take again me as the user.
So my historical behavior, and the preferences extracted from the knowledge graph. Imagine I have multiple AI systems, from Anthropic, OpenAI, Google, Meta and Microsoft, on my computer, and all of those AIs have access to my complete computer and my complete documentation. Everybody has my data. Great. Then we have a mixture, and we also have the global interaction patterns that we see, let's say, on social media: all the scientific publications and who is referencing what other paper. So we have the complete social interaction; let's stay on the science level only. And this can be identified through graph-based community detection. We bring it all together, we have the compute power, no problem at all. Let's go with the complete science community, and let's take this user history of someone who is definitely not an Einstein: how can he get closer to such a level? The authors show us (and this figure is not my nano banana, it is done by the authors, so it is not as beautiful): we have a user profile construction. There are my personal preferences, the relevant concepts, my interaction statistics, all the emails, whom I talk to, whom I cooperate with, who publishes what paper. And then they have the external knowledge graph construction: what is currently happening in quantum field theory, in theoretical physics, in computational science; all the interaction nodes, the concept nodes, the concepts we ever encountered; then the categories: theoretical physics, mathematics, biology, whatever. And then all the semantic relations; remember the cosine similarity in a normalized vector space.
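The graph-based community detection mentioned here can be sketched with off-the-shelf modularity clustering. The toy citation graph below is invented, and the paper may well use a different detection algorithm; this only shows the general mechanism of finding communities from interaction edges:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# toy interaction graph: edges mean "paper cites paper"
# (node names invented for illustration)
G = nx.Graph()
G.add_edges_from([
    ("qft_paper", "gauge_paper"), ("gauge_paper", "lattice_paper"),
    ("qft_paper", "lattice_paper"),      # a physics cluster
    ("glia_paper", "ion_paper"), ("ion_paper", "neuron_paper"),
    ("glia_paper", "neuron_paper"),      # a biology cluster
    ("lattice_paper", "ion_paper"),      # one weak bridge between them
])

# modularity maximization recovers the two densely connected groups
communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(i, sorted(c))
```

On this tiny graph the two triangles come back as two communities despite the bridge edge; at scale, the detected communities are what become the "community nodes" in the persona graph.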
So we have the user data and the community data, we bring them all together in a mixer, and then we have a personalized agent that is now almost a substitute for this human; but the personalized agent we can develop much faster, and it may become much more capable than the human user it mirrors. This is me, by the way. So we build a semantic memory ("hey, I noticed you just talked about this"; yes, of course), then an episodic memory ("hey, this was the first layer"; yes, of course), and then a community context, and I say: what a surprise. You see, a completely different group, on the very same day, published something that is almost identical in spirit. They now generate a personalized prompt that they feed to the LLM to get a highly specialized, personalized response. Now, the beauty of what they do is that they work only with GraphRAG. They do not go with BM25 or with some dense retrieval algorithm; they operate purely on the graph level. Really nice. So let's go there. Starting from a graph topology, what we want is the output in a linearized context for a plain LLM. If you want, this is the braiding mechanism I was already talking about. And here again, what a coincidence: I asked nano banana pro to generate an almost identical image for our braiding process, for our machine that brings everything together. Okay, let's start. As I told you, this time we do not start with the three levels of memory; we are operating in a GraphRAG system. So we have a graph, and in this graph I have the interaction nodes of my history. Say I, the user, am now somehow into movies: Ghost in the Shell, then I watched The Matrix, I watched The Matrix again, and then I read a particular book about it, and you see, these are my interaction nodes.
These are the things I did. Then they build what they call the concept nodes; these are the triangles. This one maps to cyberpunk, this one to dystopia, this one to virtual reality, and you see we already get a kind of hierarchical structure of our node layers. And then we have pure community nodes; these are the global interaction nodes: in general, all the people on this planet who like Ghost in the Shell, The Matrix, or whatever title you like. So you build a network. This network has, if you want, two components: the first component is my personal stream, and the second is how the community developed, let's say over the last five years, next to how I developed over the last five years. And then we have to bring them together in this braiding process, or bipartite fusion operator, whatever you like to call it; we will look in detail at what it does and how. But here is the idea: after we linearize this complexity for the LLM context window, we can create a system prompt, a stream A with my personal history, and a stream B where I tell the AI: look, in these five years my sub-community of theoretical physics developed this and this and this. And now this is the information for you as an LLM, this is my input, and now you, LLM, do the job. So you see, we are in the pre-processing of the data for an LLM. Looking at the graph distribution, we have the user manifold and, if you want, the community manifold. And these two streams are brought together. So I am again not squeezing everything into one flat manifold structure, even a high-dimensional one; I keep a very specific persona separate. This is the blue stream.
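The linearization step above can be sketched in a few lines: two graph-derived streams flattened into one prompt string. The section labels and template below are my own invention, not the paper's exact prompt format:

```python
def linearize_context(persona_nodes, community_nodes, query):
    """Flatten two graph-derived streams into a single LLM prompt.

    persona_nodes:   facts mined from the user's interaction subgraph
    community_nodes: facts mined from the community subgraph
    (labels and layout are illustrative, not the paper's template)
    """
    stream_a = "\n".join(f"- {fact}" for fact in persona_nodes)
    stream_b = "\n".join(f"- {fact}" for fact in community_nodes)
    return (
        "System: You are a personalized research assistant.\n"
        f"[User history]\n{stream_a}\n"
        f"[Community signal]\n{stream_b}\n"
        f"[Query]\n{query}"
    )

prompt = linearize_context(
    ["watched The Matrix twice", "read a cyberpunk novel"],
    ["community links cyberpunk to virtual-reality themes"],
    "Recommend my next movie.",
)
print(prompt)
```

The point is only that the LLM never sees the graph itself; it sees a linear context window whose structure was decided upstream.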
This is me, for example, or you: hey, what is happening in the world, what is happening in the community? If you are an artist, a creative, a dancer, a musician, whatever: what is happening in your world, and what have you been doing in the last five years? We bring it together and see what emerges. So this PersonaAgent, and this is the complete framework, overcomes the cognitive flatness that I told you about at the very beginning of this video. How? Through a recursive GraphRAG that we build. We use mostly things we already know; there is a little bit new, but everything else is familiar. Let's have a look. What I found especially interesting: how would you code a braiding processor? In code, it is essentially a linearization, so it should be reasonably simple. In standard RAG, retrieval-augmented generation, the system retrieves a list of documents from external data sources and just pastes them one after another into the LLM context. But this is stacking, not braiding. The LLM often gets confused by contradictory or irrelevant data, because maybe the retrieved data says the earth is flat and also that the earth is not flat. So what to believe? Let's solve this. Braiding is a much smarter structural merge operation. It doesn't just pile up the data (the earth is flat, the earth is not flat, the earth is whatever); it weaves two distinct strands of information together to create a stronger rope. I hope this image communicates what I want to tell you. Strand A is, of course, the self: my knowledge. Strand B is the community, the world. Strand A, more or less, is: hey, what have I done in the last five years in theoretical physics? This is my personal history. It is not a single vector; it is a high-dimensional tensor structure, okay.
And strand B is simply: hey, what has everyone else on this planet done and published on arXiv? This is the complete knowledge graph, and we have a traversal vector that we can explore, in the simplest case. So what is this braiding process? It is of course a mathematical function, or if you want an algorithm, that compares these two strands and finds an interference pattern. You see? We don't just add them up; it is not a concatenation. We look at the interference: the specific points where your unique quirks, my ideas, overlap with the collective trend of the research community. A very simple example, the simplest I can think of: the individual stream says "you like dark chocolate", and the collective stream says "people who buy red wine also buy dark chocolate", and you can guess what falls out. Of course, in reality it is a little bit more complicated, and it took me again about 20 minutes until nano banana pro generated this image. I wanted it to look like a stargate; I don't know if you know the TV series, but exactly like that. Here we have stream A and stream B: the personal vector, episodic, with all our little boxes of knowledge, and the collective vector, all the publications that reference other publications, which reference other publications, which reference some persona, which reference some tweets; you get the idea. What is happening here? At first I thought to build it like a DNA strand, a molecular strand, but no, because this here is not the input to our LLM. This is just data pre-processing for our LLM machine. So I have to bring it into a linearized context tensor, with its particular optimization routine, to have the perfect input for the LLM.
So what is this? If you are a subscriber of my channel, you understand immediately when I tell you: this is nothing else than a graph neural network attention mechanism that we apply at inference time. Okay. So what is happening here? This is the most important part now: the braiding processor with its logic gate. The three-dimensional braid imagery itself is not what matters; push it to the background. We just need the perfectly braided knowledge stream that enters the LLM as a linearized tensor structure. Let's do this. If you look at it from the mathematical perspective that I introduced at the beginning of this video, you immediately see that this is a dual-source manifold alignment. The first source is the episodic stream, and the second is the collective knowledge stream. A dual-source manifold alignment, followed by a gated linearization, because of course we can only hand a linear prompt to our LLM. And of course it is not a single equation; that would be too easy, come on, that would not be a topic for one of my videos. It is a computational pipeline that projects a query into two orthogonal vector spaces, individual and collective (I hope this visualization helps), and then computes their intersection to filter out the noise and rank by relevance. So let our domain be defined by a heterogeneous knowledge graph over all of theoretical physics. Then we define two distinct submanifolds within this graph structure. Now you know what they are: the individual manifold, a local subgraph defined by my little brain, and the collective manifold, the beauty of what everybody else on this planet did in the last five years of research, a subgraph reachable through a community traversal. And now the task: stream A gives an individual resonance score that we can calculate, and we call this parameter alpha. It measures how well a candidate node aligns with the user's established history.
It combines semantic similarity with historical weights. Stream B gives, of course, the collective feasibility score from the whole community; we call this parameter beta, and it measures how strongly the node is supported by the topology of the domain graph itself. More or less: is this a valid node? Am I allowed to sink this into my individual vector stream? Is this really something that the community recognizes as an object worth investigating? Beta computes the random-walk probability of landing on the node, starting from the query concepts, within the domain graph G. We work with the two parameters alpha and beta; this is a simplification, I know, please don't write to me, there is another parameter, but I want to stay with the main idea. So how is this fusion, this braiding kernel, operationalized? You understand that this is the core process logic we are talking about. It is not the sum of alpha and beta. We have to perform a gated fusion operation to reject hallucination and irrelevant noise. Remember, in the first part of the video I showed you that hallucination is this big minus in the grid: a high individual score and zero collective support. The hallucination is not supported or published by the research community; it is only apparent in my individual score. And the irrelevant noise has a high collective score but zero individual relevance for me; I don't care for something so far away that I don't even understand it. And now we calculate the braided score S_braid. And this is defined, as the title of this video tells you, by a geometric interaction term of two manifolds. It is no coincidence that I tried to draw this not as a vector but more like a wave function: we are looking at the interference pattern.
So let me just give you the result. The braided score is calculated from alpha and beta in a structure where we have a linear mixture of alpha and beta (what do I know, and what does the community know) and a structural gate. And this structural gate is really important. But if you look at this and think back to the very first preprint we talked about, MirrorMind, you understand: wait a minute. Given this interpretation of the mixture process, I can take the same intuition back to the first PDF and build an almost identical formula there. And then the braided score for MirrorMind is no longer just an analogy; have a look at this. These papers not only have a very similar topic: given the mathematical formula of the first paper and of the second paper, I can derive an almost identical structure, a braided score for MirrorMind, and you see that they operate differently. Why? Because the first one has a repulsor term and the second one has a structural gate. So there is a difference, but otherwise they are really similar. What is the critical nuance that distinguishes them? I told you: MirrorMind is about the scientific discovery process, and PersonaAgent is of course about recommendation. While both systems use the braiding mechanism, they use the individual stream alpha for opposite purposes. One is repulsion, and this is MirrorMind: the individual stream acts as a negative constraint. Remember, this was the deep blue gravity well where I told you: this is what I know best, this is where I sit, I am lazy, I don't move out of my comfort zone at all, and I need some impetus to move out of here toward the optimal path to P*. So in MirrorMind my alpha is a repulsor; the term is a novelty repulsor, if you want to be specific.
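Here is a minimal sketch of the two fusion variants as I read them. Both functions are illustrative forms, not the papers' exact equations: the PersonaAgent-style score gates a linear mixture of alpha and beta, while the MirrorMind-style score flips alpha into a repulsor so that novelty (low alpha, high beta) wins:

```python
def braided_score(alpha, beta, lam=0.5):
    """PersonaAgent-style gated fusion (illustrative form):
    a linear mixture of the two streams, multiplied by a structural
    gate that zeroes the score when either stream gives no support.
    - high alpha, beta == 0 -> hallucination, gated to 0
    - high beta, alpha == 0 -> irrelevant noise, gated to 0
    """
    gate = 1.0 if (alpha > 0 and beta > 0) else 0.0
    return gate * (lam * alpha + (1 - lam) * beta)

def mirror_mind_score(alpha, beta, lam=0.5):
    """MirrorMind-style variant (illustrative form): the individual
    stream acts as a *repulsor*, rewarding high collective validity
    combined with low similarity to what I have already done."""
    return lam * (1 - alpha) + (1 - lam) * beta

# recommendation: familiar AND community-supported scores highest
print(braided_score(0.8, 0.8))      # relevant, supported
print(braided_score(0.9, 0.0))      # hallucination -> 0.0
# discovery: valid-but-unfamiliar beats valid-and-familiar
print(mirror_mind_score(0.1, 0.9))  # novel and supported -> high
print(mirror_mind_score(0.9, 0.9))  # more of the same -> lower
```

The asymmetry in the second function is the whole "opposite purposes" point: the same alpha signal is attractive in one system and repulsive in the other.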
So you look for the intersection of high domain visibility and high persona surprise, and the optimization objective is to find the node N that maximizes this S_braid value in the MirrorMind formulation. Again, alpha, the individual measure, captures how similar the idea is to what the scientist (what I) have already written in the last five years, and beta is the collective validity: all the global publications, what is mathematically possible, what has been peer-reviewed, what has been agreed upon as a really interesting research topic. This is the wireframe I showed you in the first visualization of this video, and we want beta to be high, because this intersection is exactly what we optimize. Now, as I told you, I present this in a particular way: if you read these two preprints in this sequence, and I am sorting this out for you so that you have an easier learning process, you can come up with this idea. For those of you who really check whether everything I tell you is written down in the PDFs: no, here I am going beyond both publications. I am combining them. They were published on the same day, the authors had no idea of each other, but reading both I see they have common ground, so let's do this. My idea, careful, buckle up: we can combine PDF one, MirrorMind, with the PersonaAgent to get a unified contextualization and output. Image one is clear: we have P*, the proposed great new idea of where I should go. And now all I say is: listen, if I have this idea, I can bring it over into the PersonaAgent, where, as I told you, we work purely in a graph structure, in the graph extractor of the PersonaAgent, and I just bring it over as one node for the network. That is it. Simple, come on; this is all you have to do to get some new insights. And I will try to combine both in code; I mean, Gemini 3 Pro will do
the coding for me, and maybe I can build this system; let's see. Of course I can insert any node I want, so why not insert the perfect research-idea node into the interaction nodes of my personal history? Because this would be my personal future, the very new future where this system tells me: integrate this into your knowledge graph, because this is the future you should research. And then I just combine it with the PersonaAgent as published: with the concept nodes, with the community nodes. Here we have the braiding machine that does our braiding processing, as I already described to you, and then the output is a linearization, a linearized context window, where, as I showed you, we have the perfect system prompt for the AI to be an intellectual sparring partner for me as a persona. I present my personal history to the AI, then the collective signal (what has our community done in the last five years relevant to my brand-new idea), and then I refine the contextual linear input; this is the P* plus the collective insight, also from a purely graph-based structure. So you see: everything braided together. And isn't it looking gorgeous? If you want to go a little bit deeper, I further annotated this graph that was built with nano banana pro, so here you find some additional thoughts from my side, but I am sure you get the idea. This image now illustrates a new solution to cognitive flatness. We solve it by sequentially applying two simple structural operations. First, an optimization, as I showed you with MirrorMind: we find a local maximum for novelty within the validity constraints; this is the blue graph. Second, a contextualization, as shown by the authors of PersonaAgent: we anchor this maximum in the heterogeneous knowledge graph to ensure it aligns with both the personal
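The cross-paper step proposed here, treating MirrorMind's output P* as one more node in the PersonaAgent graph, is literally a single node insertion. The sketch below uses invented labels and is my reading of the video's proposal, not something either paper implements:

```python
import networkx as nx

# toy persona graph with the PersonaAgent node types:
# interaction, concept, and community nodes (labels invented)
P = nx.Graph()
P.add_node("read_battery_review", kind="interaction")
P.add_node("electrolyte_transport", kind="concept")
P.add_node("energy_storage_community", kind="community")
P.add_edges_from([
    ("read_battery_review", "electrolyte_transport"),
    ("electrolyte_transport", "energy_storage_community"),
])

# insert MirrorMind's proposed idea P* as a new interaction node,
# anchored at the semantically nearest existing concept node
p_star = "self_regulating_electrolyte_gel"
P.add_node(p_star, kind="interaction", source="mirror_mind")
P.add_edge(p_star, "electrolyte_transport")

print(P.nodes[p_star])            # attributes travel with the node
print(sorted(P.neighbors(p_star)))
```

From then on, the PersonaAgent pipeline (concept linking, community detection, braiding, linearization) treats the proposed future research direction exactly like any past interaction.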
history and the social reality of the research community. Take a step back and think about what we have just achieved, just by reading two papers. Structure is the new prompt. The intelligence itself is not in the prompt, because the prompt is just the input to the LLM; the intelligence is encoded in the manifold and in the graph, while the LLM serves merely as a traversal engine. It is not even really computing this, because the manifold and the graph are constraints on the operational space of the LLM itself. So what I want to propose to you is that this shift defines the next generation of neuro-symbolic AI. Why? Because the locus of intelligence is shifting from the parametric knowledge of the LLM (the model weights, the tensor weights of the vision language model itself) to the non-parametric structure, to the external architecture. In my case this would be my intellectual landscape together with the community landscape: we compute the path, my personal path, to my personal optimal idea, then I bring it into a pure graph representation, the braiding process computes over it, and then I have, more or less, all my history and all the intelligence and development of my scientific ideas represented there. So I think we are shifting away from "the LLM is the only source of intelligence" toward a lot more non-parametric structure that does the real intelligence work, if you want to call it that, in front of the LLM. Now, maybe you have seen that some days ago I posted on my channel the latest research on manifold learning for medical EEG, and I showed you a publication where they discovered that it really depends on the mathematical space we construct. They found that Euclidean latent spaces distorted the true structure of the electroencephalogram. They said that with this
unconstrained vector space, this is not optimal for using AI in medicine: nearby neural states may be mapped far apart along a path in this unconstrained space, irrelevant states may become artificially close (which we do not want), the attention operates with the wrong metric, and the dynamics prediction must learn the geometry from scratch, which is unstable in itself. The authors found a solution: they built a Riemannian variational autoencoder that fixes this by forcing the complete latent space to have the correct curvature. It is just about the geometry of the space. And they say: once we have fixed the geometry and put constraints on this space, the geometry becomes correct, the geodesic distance becomes meaningful, the geometric attention works properly, and the neural ordinary differential equation trajectory becomes smooth, consistent and stable. It is this paper that I will also show you; I have given you a very short introduction to what a Riemannian variational autoencoder is, what a geometric transformer is, particularly how the geometric attention is calculated, and why we need manifold-constrained neural ODEs. Have a look at this paper; it is from Yale University, Lehigh University and a school of medicine, from just a day earlier, November 20th, 2025. And they did something similar, not the identical idea, but they also said: hey, listen, our solution space is too huge, too unconstrained, it doesn't make sense, it wastes energy, and it is not stable; it is not what we need. So they built a Riemannian variational autoencoder, then a geometric transformer, and you see here too: we operate on a very particular manifold, with a very particular optimization and a very particular positional encoding, if you want, for a path optimization problem. And then we bring this path optimization problem from a
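To see why "geodesic distance becomes meaningful" matters, here is the simplest possible contrast: Euclidean (chord) distance versus geodesic (arc) distance on a unit hypersphere. This is a fixed-curvature toy, not the paper's learned Riemannian latent space:

```python
import numpy as np

def euclidean_dist(a, b):
    # straight-line (chord) distance through the ambient space
    return float(np.linalg.norm(a - b))

def geodesic_dist_sphere(a, b):
    """Great-circle distance between points on a unit hypersphere:
    the arc length along the manifold, not the chord through it."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.arccos(np.clip(a @ b, -1.0, 1.0)))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(euclidean_dist(a, b))        # sqrt(2) ~ 1.414, the chord
print(geodesic_dist_sphere(a, b))  # pi/2  ~ 1.571, the arc
```

If your attention weights or trajectory losses use the chord while the data actually lives on the curved surface, you are systematically using the wrong metric; fixing the latent geometry is what makes the distances (and everything built on them) consistent.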
manifold into a pure graph structure, we do the braiding, and then we get a result. And this is, more or less, exactly what they did, at a different complexity level, with the architecture in this particular paper, which they call ManifoldFormer: geometric deep learning for neural dynamics on Riemannian manifolds. This is the third paper I wanted to show you, because I have a feeling this is the way the complete AI system is going. It is not that we will have the next extremely huge LLM and put all of the intelligence only into that LLM; I think this would be the wrong way, and I don't feel it is the right way to go. Of course, you could say: okay, this is just your idea. But let's increase the complexity, because now that I have help with the visualization and don't have to do it by hand, I can think a little bit longer about a problem. So let's increase the complexity further. I found not only this third paper but another really high-level paper that brings this to a complete new level, with coherence in the development. But I think this is the end of part one; the video is already long enough, and I just wanted to present you some brand-new ideas in AI that I feel will be its future. And I have to tell you, the next part will be a little bit more challenging, so I decided to do a part two of this video. It will be an expert outlook, for members only, because I want to give back to the people who support me with their membership of my channel, and I want to present them my ideas of how I see the future of AI. I think part one already provides many new ideas for the AI community in general, but if you decided to support me personally, I want to give back to you, and therefore part two will show you my
personal thoughts, and we will increase the complexity and go a step further, and I will give you an outlook on where I feel we are going to move together as an AI community. Anyway, I hope you enjoyed it. The video was a little bit longer, but I wanted to show you how amazing it can be if you just read two, three, four, five, maybe a hundred new papers and you see common patterns, you develop common ground, you see that everybody is moving in the same direction. I just wanted to make it crystal clear to you where this is going. Of course, it could be that we have a brand-new development tomorrow, but at least let's have fun with AI, let's play with it; it is so beautiful to discover completely new ideas in artificial intelligence. So I hope you enjoyed it. Maybe you want to subscribe, maybe you even become a member of the channel. Anyway, I hope I see you in one of my next videos.
And now we have the other outstanding university, Tsinghua University, and they are working on the very same topic. They tell us: when asked to act as a scientist, when you prompt your AI with "act as a financial broker", "act as a medical expert", "act as a scientist", a standard LLM relies on a flattened representation of all the textual patterns it was trained on. But it lacks the structural memory of a specific individual's cognitive trajectory. And this is what Tsinghua University is now trying to map, to advance the AI system. So they shift the paradigm from pure role playing ("you are now a medical expert"), which is fragile because you have no idea about the pre-training data of this particular LLM, to a cognitive simulation, which is structured and constrained. I am going to explain why we have structure, and what the mathematical form of the constraints is that we are going to impose on a specific LLM. Now, the authors of MirrorMind argue that scientific discovery is not just fact retrieval.
So we go to a very specific case, we go into science, and we want a discovery process: I want to find new patterns, new interdisciplinary patterns between physics, mathematics, chemistry, pharmacology, whatever. So it is about simulating the specific cognitive style of a scientist, more or less the individual memory of a human, which is then constrained by the field norms, that is, by the collective memory.

And I think this is really the end of the one-size-fits-all age, because all these more or less flat, generalist frameworks, like classic ReAct-style agents or AutoGen (I have multiple videos on this), fail in specialized domains. But now we are going to build not just a digital twin, but a cognitive digital twin. So they really pushed the boundaries here, from a simple data repository to a functional cognitive model that can predict future research directions, offering, and this is the interesting part, a blueprint for automated scientific discovery. And it is not going to be as simple as what we have read in the last publications.

So, as I said, let's start with our little tiny AI revolution and have a look. Tsinghua tells us: we start at the individual level, the singular human level, and we look at the memory structure. They decided that everything we had up until now was not enough. So they go with an episodic layer of memory, a semantic layer of memory, and a persona layer. One layer builds upon the other, and then we build a gravity well, a force field if you want, with very specific features. This is then our first manifold for our dual-manifold braiding. So let's have a look. They start with the basics, the episodic memory: all the raw papers, all the facts, everything that you have, the PDFs, say the latest 1,000 medical PDFs or the latest 10,000 publications in theoretical physics. Then we go for a semantic memory.
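To make the three-layer stack concrete, here is a minimal sketch of how it could be represented as a data structure. All class and field names are my own assumptions for illustration; the paper's concrete schema may differ.

```python
# Sketch of the three-layer memory stack: episodic (raw, timestamped chunks),
# semantic (distilled monthly/yearly narrative), persona (weighted concept graph).
from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    # Raw chunks as (timestamp, source_document, text) tuples.
    chunks: list = field(default_factory=list)

@dataclass
class SemanticMemory:
    # Distilled narrative, keyed by period, e.g. "2021-03" -> summary text.
    timeline: dict = field(default_factory=dict)

@dataclass
class PersonaLayer:
    # Concept graph: node -> centrality "mass", edge pair -> weight.
    nodes: dict = field(default_factory=dict)
    edges: dict = field(default_factory=dict)

@dataclass
class MirrorMindMemory:
    episodic: EpisodicMemory = field(default_factory=EpisodicMemory)
    semantic: SemanticMemory = field(default_factory=SemanticMemory)
    persona: PersonaLayer = field(default_factory=PersonaLayer)

memory = MirrorMindMemory()
memory.episodic.chunks.append(("2021-03", "notes.md", "CNN experiments on molecules"))
```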
Here we have, if you want, an evolving narrative of a single person, of the author's research trajectory. At the individual level we restrict this to one person and just look at the temporal distillation pipeline of this single person. What has the author written in the first month? What has the author written in the second month? We go through all twelve months, we build yearly summaries, and we want to answer how the thinking of this single scientist evolved, not just what he has published. So whenever you give an LLM, or any AI system with computer-use access, access to the files on your local desktop or laptop, all those data become available: every email, every file you worked on, everything you prepared for your PhD or for any publication. How many months have you been working on this? How many versions of the final paper are stored in your directories? If an AI had access to all this, it would really be able to map your personal, or my personal, thinking process, my mental evolution if you want, how my understanding of a topic developed. And if we are able to bring this into a temporal pipeline, we can distill further insights. Then, given this information about, say, my persona, an agent or an LLM can build my persona schema, with all my knowledge about mathematics, theoretical physics, whatever. So we can build an abstraction, a dynamic concept network, capturing my stylistic but also my reasoning preferences; all my knowledge is now mapped into an AI system. Plus, everything is timestamped, so in the semantic layer we get clean time series running over months or even years, depending on how much data you have on your computer. So they say: okay, let's start with the individual person. Let's build this. Let's follow their traces.
Okay, the episodic memory. This is the very last layer, at the bottom. What is it? We now have what they call a dual-index structure, to handle the specificity of scientific terminology. I don't know about you, but in theoretical physics we have really long technical terms; also in astrophysics, in high-energy physics, in elementary particle physics. Think about medicine with its long Latin terms, think about pharmacology; you understand immediately. You are not allowed to make one single typo, so you cannot just hand this to an LLM. So what do you do? You build a hybrid RAG engine. Of course, our good old friend, the RAG machine. But now the documents are parsed into semantically coherent chunks. So we have a chunk, say a sentence, or, if a paragraph is very homogeneous, a complete paragraph; then we have the source document, "this is in file number such-and-such"; and we have a timestamp: exactly when did I write this down on my computer, when did I publish it, when did I send it out in an email to my friends. Now, if you do this for millions and millions of chunk IDs, you have no idea where you are. And the authors say: hmm, you know what, we looked at all the vector search capabilities, and they are often too fuzzy for real science. Specific acronyms or chemical formulas must all be exact; you cannot rely on an LLM that just has a probability distribution for the next-token prediction. So the episodic memory stores every chunk of information they found, say on my computer, in two parallel searchable indexes. The first is a dense vector index. This is what you know: a high-dimensional embedding via the encoder model of a transformer, for conceptual similarity. We build a new mathematical vector space, and given the semantic similarity of, say, my 100 files and their content, we place the vectors in this space and arrange them so that conceptually similar content sits close together. But speaking of technical terms, those we now store separately, in a sparse inverted index: a standard BM25 index for exact lexical matching. The keywords, the symbols, the technical terms go into a separate index, so there is no mixing up and no hallucination by any LLM; we cannot afford that in physics or chemistry or medicine. And then, since we have two specific scientific indexes, we merge the results via a rank fusion, a reciprocal rank fusion.
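That fusion step is standard reciprocal rank fusion (RRF). A minimal sketch, with made-up chunk IDs and the conventional constant k = 60; the paper's concrete parameters are not given in this part of the talk.

```python
# Reciprocal Rank Fusion: merge rankings from the dense vector index and the
# sparse BM25 index without comparing their raw (incompatible) scores.
# score(d) = sum over rankings of 1 / (k + rank(d)), with k = 60 by convention.

def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical chunk IDs: the dense index favours conceptual matches,
# BM25 favours exact technical terms (acronyms, chemical formulas).
dense_ranking = ["chunk_07", "chunk_42", "chunk_13"]
bm25_ranking = ["chunk_42", "chunk_99", "chunk_07"]

fused = rrf([dense_ranking, bm25_ranking])
# fused -> ["chunk_42", "chunk_07", "chunk_99", "chunk_13"]
# Chunks found by BOTH indexes rise to the top.
```

Note the design choice: RRF only uses ranks, so a dense cosine score and a BM25 score never have to be normalized against each other.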
And this is the way they set up the episodic memory of a single researcher. So this is all the scientific content of, say, the last five years on my laptop. Right. The next step is the semantic layer. As you can see, the semantic memory builds on the episodic layer and performs what they call a cognitive distillation. If you are familiar with MapReduce from the early days of large-scale data processing, you know exactly what we are looking at: a MapReduce distillation pipeline, that is all this is. So they use an LLM to transform the content, and all the material from the episodic layer comes up. Just to give you an example of such an instruction: analyze the cognitive evolution, focus on any maturation of ideas of this human, any conceptual shift that you can detect across the hundreds and thousands of files on his notebook, any changes in this person's research focus or in the methodology he uses. Or: why did I suddenly, in, I don't know, April of 2019, decide to go from one particular branch of mathematics to a more complex branch of mathematics? Because the complexity of my problem suddenly increased. The LLM should now distill this from all the episodic-layer elements with their timestamps; as you see, a MapReduce pipeline. And once we have this information, you know what we are going to build: a trajectory. A trajectory over time of trends, of keywords, of topics, of whatever clusters you define, say if you are looking for particular quantum-field-theoretical subtopics. So you see exactly how my knowledge evolved over the last five years, and I had to do nothing; I just hand over my laptop, and that is it. So they model a cognitive trajectory: they distill not just semantics; the system now understands the reasoning link that I had in my mind between the paper I published, file A on my laptop, and file B.
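The map-reduce distillation can be sketched as follows. Here `summarize` stands in for the LLM call (a trivial placeholder), and the chunk contents are invented; only the map-per-month, reduce-to-trajectory shape is the point.

```python
# Map-reduce style "cognitive distillation": summarize each month's chunks
# (map), then fold the monthly summaries into one trajectory (reduce).
from collections import defaultdict

def summarize(texts):
    # Placeholder for an LLM summarization call.
    return " | ".join(texts)

def distill_trajectory(chunks):
    # chunks: list of (timestamp "YYYY-MM", text) from the episodic layer.
    by_month = defaultdict(list)
    for month, text in chunks:
        by_month[month].append(text)
    monthly = {m: summarize(t) for m, t in sorted(by_month.items())}  # map step
    yearly = summarize([f"{m}: {s}" for m, s in monthly.items()])     # reduce step
    return monthly, yearly

chunks = [("2021-03", "CNNs for molecule property prediction"),
          ("2021-03", "data augmentation notes"),
          ("2021-07", "first GNN experiments on molecular graphs")]
monthly, trajectory = distill_trajectory(chunks)
# trajectory now reads chronologically: the CNN work precedes the GNN shift.
```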
So what it does: it captures what they call the cognitive inertia of my intellectual topics. Now, this is interesting. We now have a five-year timeline of my scientific work, semantically distilled into a complete time series. And guess what we do next? If you want a very simple explanation, think of the semantic memory as a biographer AI: it looks at everything on my computer and turns isolated timestamps into a cohesive intellectual history of how this fellow is doing science. And once we have this, the next step is of course, you already guessed it, a mathematical transformation: we go to the persona layer. Now I am modeled in my, what should I call it, scientific intellectual development. We transform this from a temporal flow, from the time series, into a topological structure. And the simplest topological structure that we know is a knowledge graph with specific weights, with particular focus on some topics, and I am going to explain what I mean in a second. The simplest way to explain this is with an example. Say the input signal entering the persona layer is: "In 2023, the author moved away from his CNNs, convolutional neural networks, and started focusing heavily on graph neural networks." Now, this is not quite accurate, because we did this together in 2021 on this channel, and we did it for molecular modeling (see my videos from 2021), but let's take it as the example. Okay, great. So what do we do with this? The system looks at the sentences coming up from the semantic layer and says: okay, we have to create some nodes, we have to build a topological structure, a knowledge graph. So what is new? We have CNNs, we have GNNs, we have "molecular" and we have "modeling". Let's build this. Of particular interest is, of course, the quality of the nodes. GNNs are not just a subtopic but a main and major topic, graph neural networks, so it becomes a concept node. Molecules: there are thousands and millions of different molecules, so it becomes a concept node again. You see, we have already introduced a kind of hierarchical structure into our knowledge graph. And now there is a certain weighting that we apply, because the centrality of a node (a graph-theoretical measure that I explained in one of my videos) might decay or be lowered. Because it was stated that in 2023 (in fact 2021) I moved away from CNNs, the centrality, the importance of the CNN node relative to the other nodes of my graph, is now lower; CNNs are simply not as important for me right now. And they calculate this with centrality measures.
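As a toy illustration of this weighting: the exact centrality measure is not specified in this passage, so I use plain weighted degree centrality as a stand-in, and the edge weights are invented.

```python
# Toy persona knowledge graph: concept nodes connected by weighted edges.
# A node's "mass" is a centrality value; here, normalized weighted degree.

edges = {
    ("GNN", "molecular modeling"): 0.9,   # current focus: strong link
    ("GNN", "machine learning"): 0.8,
    ("CNN", "machine learning"): 0.3,     # decayed: the author moved away from CNNs
    ("CNN", "molecular modeling"): 0.1,
}

def weighted_degree_centrality(edges):
    mass = {}
    for (u, v), w in edges.items():
        mass[u] = mass.get(u, 0.0) + w
        mass[v] = mass.get(v, 0.0) + w
    total = sum(mass.values())
    return {node: m / total for node, m in mass.items()}

mass = weighted_degree_centrality(edges)
# The GNN node now carries far more "mass" (centrality) than the CNN node.
```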
And once we have this, you see here the persona layer. This is my profile: I have a profile in machine learning, and these are my subtopics, things I studied, learned, published on, or wrote code about and never published, just kept on my computer, whatever. Then there is some work in bioinformatics, or whatever other topic you have. How strong are the interlinks, how strong are the edges between these topics? So we build a knowledge graph of my temporal scientific evolution as a scientist.

But we are not done, because we are going to map this further. In this step we mapped from the temporal flow of the semantic layer, the time series, into a topological structure. But a topological structure like this is not really a world where we have smooth transitions and integrals. This is a graph; come on, it is bulky, it is not elegant. So what we are going to build is a gravity well, a field representation. This is the blue heat map that you see on top. And this shifts the center: say the center was somewhere else, and now it shifts towards GNN. So you see, we have a lot of mappings here, all to capture my internal, individual, personal evolution. But this is not all that the AI does. Now the AI says: okay, let's do some inference. It looks at the new topology of the graph and asks: given this new shape, what kind of scientist is this person now? Say some AI asks: who is this person who makes all these beautiful YouTube videos, what are his actual current characteristics? And the system might then update, if it is working for me, the system prompt, in a way that tells the model: okay, listen, if you work with this guy as an AI, your style has to be highly theoretical, based on first-principles reasoning. So you see, all of this just to arrive at this simple sentence. The AI now has a precise characterization of my actual learning experience, an understanding of what I know and what I do not know, and it becomes the perfect intellectual sparring partner for me, the perfect professional AI companion for theoretical physics, for bioinformatics, or whatever. So what we have achieved is not only to build a perfect mirror of my mind for the AI to understand, but the AI can now decide to find the perfect complement to my intellectual profile. So it is the perfect partner for an augmentation, an acceleration, of our research.
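A hypothetical sketch of that last step, turning the current graph state into a persona-conditioned system prompt. The node names, masses, and the style rule are all illustrative assumptions, not the paper's actual prompt template.

```python
# Derive a persona-conditioned system prompt from the graph's centrality masses:
# the top-k concepts define the persona; decayed concepts (low mass) drop out.

persona = {"first-principles reasoning": 0.35, "GNNs": 0.30,
           "theoretical physics": 0.25, "CNNs": 0.10}

def build_system_prompt(persona, top_k=3):
    top = sorted(persona, key=persona.get, reverse=True)[:top_k]
    return ("You are the AI companion of a researcher whose current focus is: "
            + ", ".join(top)
            + ". Match their style: highly theoretical, first-principles reasoning.")

prompt = build_system_prompt(persona)
# "CNNs" has decayed below the top-k cutoff, so it no longer shapes the persona.
```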
I mean, look at this, we went through four different mappings. Why? Well, our LLMs cannot calculate a similarity against a story, against my learning history. They can calculate it against a vector or a graph state. It is a simple mathematical operation. And by converting the trajectory into a weighted graph, the system can now mathematically compute: hey, if I get a new idea, how close is it to the current network, to the current, if you want, gravity value of what we call the scientific intellectual capacity of this person? Now we can calculate it. And if we can calculate it, we can code it, in Python, C++, whatever you like. Now, I have already been talking about this gravity value. And I just call it a gravity value; call it whatever you like. It's just important that you understand the idea. What is it?
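The "similarity against a vector or a graph state" just mentioned is, at bottom, a cosine computation. A minimal sketch, with made-up three-dimensional embeddings standing in for real encoder output (node names and vectors are purely illustrative):

```python
# Hedged sketch: score a new idea against persona-graph node embeddings.
# Everything here is toy data, not the paper's actual representation.
import math

def cosine_distance(a, b):
    """1 - cosine similarity: the distance the video mentions."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

# Toy persona graph: node -> embedding (in practice, from a text encoder).
persona_nodes = {
    "graph_neural_networks": [0.9, 0.1, 0.0],
    "reinforcement_learning": [0.1, 0.9, 0.1],
}

new_idea = [0.85, 0.15, 0.05]  # embedding of the incoming idea

# Distance to the closest node tells us how familiar the idea is.
closest = min(persona_nodes,
              key=lambda n: cosine_distance(new_idea, persona_nodes[n]))
```

In practice the embeddings come out of an encoder and the graph out of the reading trajectory, but the operation stays exactly this simple.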
And now if we change the framing and look at it from a slightly more mathematical perspective, you immediately see it's a probability density field that we derive from the topology of the persona graph. The persona graph allows us this mapping into an n-dimensional gravity value. So how do we do this? I mean, how can you take just a plain graph, a flat planar graph, and suddenly get a three-dimensional beauty of a manifold? The authors tell us the way they decided to go. First, the system calculates the mass of every existing node in our network. MirrorMind determines the mass using a particular graph-specific centrality measure. This is how they determine the mass of every node, or if you want, the importance, the current temporal involvement of my scientific knowledge.
And then they also define the distance. The distance, you notice, is of course, in the embedding space, one minus cosine similarity. Beautiful. Here we go with a simple Euclidean-style distance; later we are going to discuss some other, hypothetical spaces, where it becomes a little bit more difficult. Now this blue gravity well is, to go to the next step of abstraction, a kernel density estimation over the embedding space of the persona graph. I have multiple videos on kernel density estimation, but in summary you can say that the gravity intensity G at a point q in my blue gravity field, and let's say q is now a new idea, is the sum of the influences of all the nodes in the graph, exponentially decaying with distance. I mean, this is the simplest thing you can think of, right?
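A minimal sketch of that sum, assuming a hand-rolled degree centrality as the node mass and an exponential kernel; the paper's actual centrality measure and kernel may well differ:

```python
# Hedged sketch of the "gravity field": node mass from degree centrality,
# intensity at a query point as a sum of exponentially decaying contributions.
# Embeddings, edges, and the bandwidth are all illustrative toy values.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return 1.0 - dot / (na * nb)

# Toy persona graph: embeddings plus an edge list.
embeddings = {"gnn": [1.0, 0.0], "rl": [0.0, 1.0], "transformers": [0.7, 0.7]}
edges = [("gnn", "transformers"), ("rl", "transformers")]

# Mass via degree centrality: degree / (n - 1).
degree = {n: 0 for n in embeddings}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
mass = {n: degree[n] / (len(embeddings) - 1) for n in embeddings}

def gravity(q, bandwidth=0.5):
    """G(q) = sum_i m_i * exp(-d(q, v_i) / h): high inside familiar territory."""
    return sum(mass[n] * math.exp(-cosine_distance(q, embeddings[n]) / bandwidth)
               for n in embeddings)

familiar = gravity([0.7, 0.7])   # near the hub of the persona graph
foreign  = gravity([-1.0, 0.0])  # far from everything the author has done
```

The point of the sketch: a query near the graph's hub sits deep in the well (high G), a query far away barely registers, which is exactly what the deep blue region visualizes.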
Everything contributes, but we have an exponential decay function, so that not everything contributes in equal measure: the points that are closest are the most influential. I mean, it couldn't be easier, you know? And here we have this simple formula that the experts from Tsinghua University show us. Great. So what did they do? This deep blue visualizes a specific region of, let's call it, a latent space, where the author feels most comfortable. You see it in this dark area; I called it "more of the same". This is my expertise. This is what I know exceptionally well how to do. I've worked the last two years only on this dark area in this gravity well. Those are my topics. This is what I know well. But of course, if I want a brand new discovery, now they argue: hmm, maybe it is not exactly in the same old thing that you have been doing for two years, because otherwise you would have discovered it already.
So maybe it's somewhere else. And they say: okay, what we have to do now is find a mathematical algorithm, a repulsive force that acts on this, if you want, gravity-well structure, to bring me out of my minimum, over the mountains, and somewhere beautiful and new. So what I need is a novelty repulsor. I need a force acting on me, sitting there, bored, doing the same thing over and over again, and not discovering anything new: push me out of this, and let's go somewhere we have never been before. So you see, it wants to simulate discovery, not repetition. Repetition happens in the blue. And therefore the algorithm treats my old persona graph not as a target to hit but as exactly the negative: a penalty zone to avoid. Now the thing becomes interesting, because, yes, you can push me out of my stable position at a minimum with any force, but in what direction do you push me? Where should I go and continue my research?
And now, think about what the authors say: what we have is a second manifold, an external manifold. And this external manifold is, let's say, OpenAlex. So this is the knowledge of all the, I don't know, one million published papers in the topics that I research: a free and open database of scholarly research papers, authors, institutions; everything is there. So let's say, okay, this is the outside world, the second manifold: the blue one is my personal manifold, and this one is the community manifold in total, the global science community, where they are, what they have done, what they examine. And they say, let's do this. And they now build, simple idea, a wireframe grid. So you don't have to build a real smooth manifold; a wireframe grid is enough. You just have some estimation points and you can connect this net, in a sense. So what do we add to my stupidity here on the left side, in the blue valley?
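OpenAlex really does expose a free REST API for works, authors, and institutions. A hedged sketch of building such a query; the sample payload below is only an illustrative shape, not a real API response:

```python
# Hedged sketch: build an OpenAlex /works search URL and parse the response
# shape. The sample_response dict is invented to mirror the payload's shape;
# field names beyond "results" and "display_name" are omitted for brevity.
from urllib.parse import urlencode

def openalex_works_url(query, per_page=5):
    """URL for a full-text search over the OpenAlex works endpoint."""
    return "https://api.openalex.org/works?" + urlencode(
        {"search": query, "per-page": per_page})

sample_response = {  # illustrative shape only
    "results": [
        {"display_name": "Ion regulation in glial cells", "cited_by_count": 120},
        {"display_name": "Electrolyte transport in Li-ion batteries", "cited_by_count": 45},
    ]
}

titles = [w["display_name"] for w in sample_response["results"]]
url = openalex_works_url("neuromorphic battery")
```

A real system would fetch the URL and feed the returned works into the community-manifold wireframe; here we only show the plumbing.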
We add, if you want, a social connection to my scientific community. This is the research community: from astrophysics, and some new ideas might come from astronomy, some new ideas might come from medicine, whatever. So we move from a siloed approach to an interdisciplinary approach. So we have one manifold and a second manifold, and the second manifold is also constructed so that we can clearly detect hallucination. Because if the LLM suddenly hallucinates something, we can catch it in this rabbit hole and say: okay, let's forget about this hole. What we are interested in is the maximum of the community knowledge: can I contribute with my knowledge to the open problem sitting at the top of the mountain, this particular sweet spot? And you see, I told you a force has to push me out, and this is now a path to the optimal research idea, P-star. As easy as can be.
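That path to P-star can be caricatured in one dimension: a community-knowledge peak to climb toward, and a personal gravity well to be repelled from. All density functions here are toy stand-ins, not the paper's:

```python
# Hedged sketch of the dual-constraint idea: the optimal research point P*
# scores high on the community manifold (valid, active science) while being
# pushed away from the dense core of the personal manifold (more of the same).
import math

def personal_density(q):   # deep blue gravity well, centred on past work
    return math.exp(-((q - 0.2) ** 2) / 0.1)

def community_density(q):  # community-knowledge peak the author could reach
    return math.exp(-((q - 0.8) ** 2) / 0.2)

def score(q, repulsion=0.7):
    # attraction toward community knowledge, repulsion from the familiar well
    return community_density(q) - repulsion * personal_density(q)

candidates = [i / 100 for i in range(101)]  # coarse 1-D grid search
p_star = max(candidates, key=score)
```

Note the best point lands near, but not exactly on, the community peak: the repulsor nudges it slightly further from the personal well, which is the Goldilocks-zone behavior in miniature.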
And again, thank you to my Nano Banana Pro, because it took me about 20 minutes: I put all the data in, I said, hey, display the summary, I want this and this at that position over there. And it just did it. There was not one mistake. Okay. Now, this is the story, my story as a scientist. But now, of course, we have to code this. If you want to code this, we have to work with agents, with LLMs, with networks, with different mathematical operations like mapping functions. So let's do this now. Okay. The authors say: we need an interdisciplinary level where a super coordinator agent supervises everything; notice, it's the mastermind.
And this coordinator agent now decomposes an incoming query and routes it to the particular domain agents, which navigate the OpenAlex concept graphs or build those graphs, and to the author agents, which understand my scientific personality. So the system now solves proposing complementary ideas as a dual-constraint optimization. I have both manifolds, and on both manifolds I have constraints. And now I have to do a dual-constraint optimization process, in mathematics. Couldn't be easier, no? It is just the perfect path. Let's do this. So the idea, or if you want, the optimal idea that I as a researcher am looking for, P-star, is forced to exist in the Goldilocks zone, right on the rim.
It has to be valid science that is accepted by the scientific community, but also really close to my particular areas of expertise: something that I as an author almost developed, almost thought of, but where I just didn't take this little tiny baby step. So what we are going for are the easy wins. The AI would analyze: hmm, this particular guy with his YouTube channel, he did some research and was almost there, about to discover something where the community also indicated there might be some new element. So let's tell him: hey, go in this direction, learn this and this and this, and then you will make a significant step in your knowledge and discover a new element. So this is it. And now I need a little bit of feedback from my viewers, because I'm trying to accelerate my learning, but at the same time I'm trying to accelerate my understanding of visualization, so I can communicate better with you, my viewers, my subscribers, and the members of my channel.
And this is the first time I really invested heavily into the visuals, with Nano Banana Pro for example, to build a visualization of a complex theme that spans more than 40, 50, 100 papers, and to try to bring it down to one simple image. It is not easy, but I will keep doing this if you as my viewers like it and want this additional visualization. So MirrorMind, and the next paper, called PersonaAgent, demonstrate that vector databases alone are simply insufficient for complex reasoning. What we need are more complex graph structures, and mappings from graph to graph, to represent new and established relations between the different memories. And in MirrorMind, I showed you the temporal evolution of my scientific mind. Now, if you have a closer look at this, especially at the semantic memory, it explicitly models how a scientist's mind changes. But do you understand what is happening now?
We break with one of the most important assumptions that we had in artificial intelligence: that everything is a Markovian system. Suddenly it is no longer true that I can just look at the system and say, this is the current state, and it does not depend on the history. Because now that you mirror a human brain, a human mind, it very much depends on my personal history: where I started to learn mathematics, then physics, then whatever, and then, bit by bit, I got a little bit better. You have to understand the time evolution. So suddenly we break with the Markovian state. This means that all the algorithms in our LLM stack that assume it also break and become invalid, inoperable. So now these things become really interesting. And now you might ask: hey, I'm just here to learn how to code an agent. Do agents actually do any of those operations you are talking about?
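A minimal sketch of what a history-dependent (non-Markovian) persona state could look like: an exponentially recency-weighted topic profile over the whole learning trajectory, not just the latest snapshot. Topics and the decay factor are illustrative:

```python
# Hedged sketch: the persona state as a function of the whole trajectory.
# Recent events count more, but nothing is forgotten entirely, so two people
# with the same current interest but different histories get different states.
history = ["mathematics", "mathematics", "physics", "gnn", "gnn", "gnn"]

def profile(events, decay=0.8):
    """Recency-weighted topic distribution over the full trajectory."""
    weights = {}
    for age, topic in enumerate(reversed(events)):  # age 0 = most recent
        weights[topic] = weights.get(topic, 0.0) + decay ** age
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

p = profile(history)
dominant = max(p, key=p.get)  # current focus, shaped by the whole history
```

Swap the order of the same events and the profile changes: that order-sensitivity is exactly what a Markovian snapshot throws away.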
And I say: I'm so glad that you ask this question. Because now I can tell you about the multi-agent interaction pattern in the work, with the coding, done by Tsinghua University. And I want to focus on the multi-agent cognitive engine. As I told you, we have an interdisciplinary coordinator, our super AI that understands everything, can sort everything, can plan everything, can execute everything. Great. So what does it do? It takes in my human query: hey, I don't know, find me the next research topic, because as a human I'm too stupid to know where I want to go next. Okay. So the AI says: okay, I send out two query vectors. One query vector goes to, you know, the manifold here on the right side, my human learning manifold. And on the left side, they send the same query vector, as an embedding, a mathematical tensor structure, to the other side.
And this is the objective side: all the hundreds of thousands of research papers that are now suddenly in the brain of the AI system. So this is the collective domain of theoretical physics, of medicine; you get the idea. But let's say we have built a holographic wireframe wall. This is my idea; please go with whatever image you like, this is just an illustration I use to try to explain this area to you. And let's say we have a domain agent, and the domain agent is just reading, every day, the latest research publications that have anything to do with theoretical physics. And then we have another agent that is reading every single scientific paper that has to do with biology. And they build their internal representation, their network, their wireframe, of the complexity of the topics and the dependencies in science. Great. So if you want, we have the domain knowledge graph of physics combined with biology. And now the query vector comes in.
This is a very specific query vector with a brand new idea. And the question is: hey, has the general global research community ever heard of this idea, of this direction I should develop in as a human? Is there anything related to it? Is there any publication that helps me, any publication that guides me in my personal development? Has anybody tried something crazy enough, or similar enough? And now we are again working with a cosine similarity in a normal vector space. You see, it explores the space and says: yeah, we found some path of augmentation; your idea is not as stupid as you think, maybe it's a valid idea. And from the complete, if you want, knowledge graph of the world, we now provide the particular output. This is the green beam. We provide it as an output. But at the same time, of course, this query vector was also sent to my personal learning manifold. Now, I told you I have a repellent force field there.
It is drawn in orange here. And I do not want the incoming query vector to be the same as what I'm already doing. More of the same, I don't want this. I want to go for a scientific discovery, go where no one has ever gone before, and you know the story. So if this vector crashes through my force field, it has to have a certain, let's call it, impulse, impetus. And then I will analyze it. I just explained all the different layers of the individual personality of my mirror mind. And now I discover: is this something, is this an idea, that would push me out of my deep blue gravity well into a new direction? And I send out: hey, yeah, this sounds absolutely interesting. This is absolutely novel. I have my experience in topics A, B and C, and now I say: hey, this is my specialization. I have sent out the orange beam toward novelty.
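That "impulse" test can be sketched as a simple novelty gate: the incoming vector passes the repulsor field only if it is far enough from every node of the persona graph. Threshold and embeddings are made up:

```python
# Hedged sketch of the "repellent force field": an incoming idea only passes
# if it carries enough "impulse", i.e. it clears a minimum distance from
# everything already in the persona graph (more-of-the-same gets rejected).
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

persona = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0]]  # toy embeddings of past work

def passes_repulsor(idea, threshold=0.3):
    """True only if the idea is novel relative to every persona node."""
    return min(cosine_distance(idea, node) for node in persona) > threshold

same_old = passes_repulsor([0.95, 0.05, 0.0])  # basically my old topic
novel    = passes_repulsor([0.0, 0.2, 0.98])   # genuinely new direction
```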
So now we have the knowledge integrator, which is something beautiful. This is where the braiding happens. We combine the green beam and the orange beam into something completely new, and the output of this will be my new research direction, my new research title: where I should move to have a scientific discovery, as decided by the AI system. Oh, wow. Okay, let's go with this. I hope I'm clear so far. If not, let me just give you an example of how it works. Let's say we have the idea: hey, let's build a neuromorphic battery. Batteries are always our topic anyway. So what does the flow diagram look like? We have a coordinator agent that takes in my crazy idea of building a neuromorphic battery. The coordinator AI says: okay, I now activate an author agent, if I'm already mapped in the system; if not, it builds your author agent first; you get the idea. And a domain agent for biology. Great.
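The activation flow just described could be stubbed out like this; every agent body here is a placeholder of my own, not the paper's implementation:

```python
# Hedged sketch of the coordinator pattern from the walkthrough: a coordinator
# decomposes the query and activates an author agent (personal manifold) plus
# one domain agent per discipline. Agent internals are stubs.
def author_agent(query):
    # would consult the persona graph; here: flag overworked areas to avoid
    return {"avoid": ["cathodes", "electrolyte voltage fade"]}

def domain_agent(discipline):
    def run(query):
        # would search the discipline's concept graph (e.g. via OpenAlex)
        return {"discipline": discipline, "related": ["glial ion regulation"]}
    return run

def coordinator(query, disciplines):
    """Decompose the query and route it to the activated agents."""
    results = {"author": author_agent(query)}
    for d in disciplines:
        results[d] = domain_agent(d)(query)
    return results

plan = coordinator("neuromorphic battery", ["biology"])
```

In a real system each stub would be an LLM-backed agent with tool access; the routing skeleton, though, is this small.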
So if you want, this is me, and then here we have the agent for biology. Great. It activates and creates the agents. Then your agent, the individual persona if you want, has access to your persona graph, to the history: whatever I've already researched, cathodes and electrolytes and voltage fade, all the constraints, whatever I do every Tuesday when I build better cathodes. Okay. So it says: don't go there, because this is what he is already doing, and it has not led to any discovery at all. So it pushes me away from the areas I already work in. Then the domain agent, the collective agent if you want, in charge of biology, looks at all the publications, at the biology concepts related to energy. It finds neural glia cells and the concept of ion regulation, and returns: yeah, there's something like ion regulation in biology that is analogous to electrolyte transport in batteries.
Maybe there are some hidden patterns in the understanding and the reasoning of the, I don't know, molecular transport architecture, that we can now transfer from biology to battery technology. And then comes the cooperation phase, the optimization, as you see in the blue well. The coordinator asks, hey, is this a valid path? The domain agent says yes, and I can back it up, having read the 50,000 publications we have here. The author agent says, I have never mentioned glia cells in my last 50 papers, so this is a completely new topic for me; not new to all of science, I just never focused on this particular point of research. So let me do this. And then it computes a novelty score, and they try to maximize this novelty score. So the AIs are now going to give me a brand new topic.
And the integrator now generates a final output. The integrator says, hmm, after having looked at all the AI research papers and at what you have learned in your last 18 years, I give you a proposal: design a self-regulating electrolyte gel that mimics the ion-buffering capacity of a neural glia cell to prevent voltage spikes. This is your topic. This is your PhD. Do it; if you solve it, you are going to earn millions of dollars. Right, yeah, and you are going to spend millions of dollars on a compute budget, too. Never mind about this. But this was the first paper. And as I told you, I want to accelerate my learning, I want to accelerate my explanation, and we can go to higher complexity, because now with Nano Banana Pro I hopefully have a tool to show you my ideas, how I see things, and maybe it becomes clear to you, or you say, hey buddy, no way, what are you thinking? So let's increase the speed, let's increase the acceleration, and let's go to another paper.
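The flow just described can be sketched in a few lines. This is a toy illustration under my own assumptions, not the paper's actual API: the function names, the set-based "scores", and the example concepts are all invented for clarity. The key idea it shows is that a good direction must be novel for the author agent AND supported by the domain agent.

```python
# Toy sketch of the coordinator flow: author agent repels known territory,
# domain agent grounds the idea in the literature, the coordinator keeps
# the intersection. All names and data are illustrative assumptions.

def author_agent(idea_concepts, persona_history):
    """Repulsion: drop concepts the researcher already works on."""
    return {c for c in idea_concepts if c not in persona_history}

def domain_agent(idea_concepts, domain_corpus):
    """Support: keep concepts the domain literature actually contains."""
    return {c for c in idea_concepts if c in domain_corpus}

def novelty_score(candidates, novel_for_user, supported_by_domain):
    # A concept is a promising direction if it is new to the user
    # AND grounded in the collective knowledge.
    hits = novel_for_user & supported_by_domain
    return len(hits) / max(len(candidates), 1), hits

# Example: the "neuromorphic battery" idea from the video.
idea    = {"ion regulation", "glia cells", "better cathodes"}
persona = {"better cathodes", "electrolytes", "voltage fade"}    # my last papers
biology = {"ion regulation", "glia cells", "neurotransmitters"}  # the literature

new_to_me = author_agent(idea, persona)   # pushes me away from cathodes
grounded  = domain_agent(idea, biology)   # validated by the domain corpus
score, directions = novelty_score(idea, new_to_me, grounded)
print(directions)  # concepts that are both novel and supported
```

Note that "better cathodes" is dropped even though the literature supports it: it is my comfort zone, so the author agent repels it.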
And you see I place it here; this is also a paper from November 21st. This is from Purdue University, Iowa State University, and Columbia University, and their topic is persona agents with GraphRAG, our good old friend GraphRAG. What they build are community-aware knowledge graphs for personalized LLMs. And you might think this sounds really similar to what we just did. Of course, what a coincidence that I selected this paper, but both were published on the very same date. Okay, what do they tell us on a first reading? They say, hey, our method improves one F1 score by 11%, and movie tagging is improved by 56%. And I say, okay, if this is the step in improvement when we use this, let's have a look at this paper. So, persona agents. Let's say you want to build a little Einstein. No problem. The authors tell us: our framework generates personalized prompts for any AI system by combining a summary of the user's historical behavior
with the preferences extracted from the knowledge graph. Let's take me again as the user, with my historical behavior. So if I have multiple AI systems from, I don't know, Anthropic, OpenAI, Google, Meta, and Microsoft on my computer, and all of those AIs have access to my complete computer and my complete documentation, everybody has my data. Great. So what do you do with it? Then we have a mixture, and we also have the global interaction patterns that we see, let's say, on social media: all the scientific publications and who is referencing which other paper. So we have the complete social interaction; let's stay only on the science level. And this can be identified through graph-based community detection. We bring it all together, we have the compute power, no problem, no problem at all. Let's go with the complete science community, and let's ask: given this user history, of someone who is definitely not an Einstein, how can he become one, at least on a single topic?
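The "graph-based community detection" step mentioned above can be illustrated with the classic label-propagation idea: every node adopts the most common label among its neighbours until nothing changes, and densely connected clusters end up sharing one label. This is a minimal sketch, not necessarily the algorithm the paper uses, and the citation graph below is invented.

```python
# Minimal label-propagation community detection on a toy citation graph.
from collections import Counter

def label_propagation(adj, rounds=10):
    """adj: node -> list of neighbour nodes. Returns node -> community label."""
    labels = {n: n for n in adj}      # start: every node is its own community
    for _ in range(rounds):
        changed = False
        for node in sorted(adj):      # deterministic order for this sketch
            if not adj[node]:
                continue
            # adopt the most common label among the neighbours
            best = Counter(labels[m] for m in adj[node]).most_common(1)[0][0]
            if labels[node] != best:
                labels[node], changed = best, True
        if not changed:
            break
    return labels

# Two citation clusters joined by one weak cross-domain link.
adj = {
    "physics_a": ["physics_b", "physics_c"],
    "physics_b": ["physics_a", "physics_c"],
    "physics_c": ["physics_a", "physics_b", "bio_a"],
    "bio_a": ["bio_b", "bio_c", "physics_c"],
    "bio_b": ["bio_a", "bio_c"],
    "bio_c": ["bio_a", "bio_b"],
}
communities = label_propagation(adj)
```

Running this, the three physics nodes converge to one shared label and the three biology nodes to another, despite the single bridge edge between them.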
So they show us here, and this is not my Nano Banana image, this figure is by the authors; you see it is not as beautiful. They say we have a user profile construction, and I will explain everything to you: my personal preferences, the relevant concepts, my interaction statistics, all the emails, who I talked to, who I cooperate with, who published which paper. And then they have the external knowledge graph construction: what is currently happening in quantum field theory, in theoretical physics, in computational science; all the interaction nodes, the concept nodes, the concepts we have encountered. Then they have categories: theoretical physics, mathematics, biology, whatever. And then all the semantic relations; remember the cosine similarity in a normalized vector space.
So we have the user data and the community data, and we bring them all together in a mixer, and then we have a personalized agent that is now almost a substitute for this human, but a personalized agent that we can develop much faster. This will become a machine that is much more intelligent than the human user. This is me, by the way. So we build a semantic memory, and it says, hey, I noticed you just talked about this, and I say, yeah, of course. Then we need an episodic memory, and it says, hey, this was the first layer, yes, of course. And then we have a community context, and I say, where is the surprise? So you see, a completely different place, and on the very same day they published something that is almost identical. They now generate a personalized prompt that they then feed to the LLM to get a really highly specialized, personalized response. Now, the beauty of what they do is that they work only with GraphRAG. They are not going with BM25 or with some dense retrieval algorithm.
They are on the graph level; they operate only on the graph level. Really nice. So let's go there. Starting from a graph topology, what we want as output is a linearized context for a, if you want, stupid LLM. This is the braiding mechanism I was already talking about. And here again, what a coincidence, I asked Nano Banana Pro to generate an almost identical image for our braiding process, for our machine that brings everything together. Okay, let's start. As I told you, we do not start with the three levels of memory; we are now operating in a GraphRAG system. So we have a graph, and in this graph I have the interaction nodes of my history. So there I am, the user, and now we are somehow in a movie domain: I watched Ghost in the Shell, then I watched The Matrix, I watched The Matrix again, and then I read a particular book about it. Okay, so these are my interaction nodes. These are the things.
Then they build what they call, where is it, the concept nodes. These are the triangles. So this one maps to cyberpunk, this one to dystopia, this one to virtual reality, and you see we already have a kind of hierarchical structure of our node layers. And then we have the pure community nodes; these are the global interaction nodes. In general: all the people on this planet who like Ghost in the Shell, or The Matrix, or whatever you like to use here. So you build a network. Now this network has, if you want, two components. The first component is my personal stream. Then we have how the community developed; let's go again with the last five years. So: how I developed in the last five years, and how the research community developed in the last five years.
And then we have to bring it together in this braiding process, or bipartite fusion operator, whatever you like to call it; we will look in detail at what this does and how. But first, just the idea. After we linearize this complexity for the LLM context window, we can create a system prompt: we have a stream A of my personal history, and a stream B where I tell the AI, look, in these five years my sub-community of theoretical physics developed this and this and this. And now this is the information for you as an LLM, this is my input to you, and now you, LLM, do the job. So you see, we are in the pre-processing of the data for an LLM. Looking again at the graph distribution, we have the user manifold and, if you want, the community manifold. And now these two streams are brought together.
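The linearization step above is conceptually simple: two graph-derived streams are serialized into one flat prompt the LLM can read. Here is a minimal sketch; the template, field names, and example facts are my own illustrative assumptions, not the paper's format.

```python
# Serialize stream A (personal history) and stream B (community
# development) into one linearized system prompt for the LLM.

def linearize(stream_a, stream_b):
    """Each stream is a list of short natural-language facts."""
    lines = ["SYSTEM: You answer as a personalized research assistant."]
    lines.append("STREAM A (user history, last 5 years):")
    lines += [f"  - {fact}" for fact in stream_a]
    lines.append("STREAM B (community development, last 5 years):")
    lines += [f"  - {fact}" for fact in stream_b]
    return "\n".join(lines)

prompt = linearize(
    ["published 12 papers on cathode materials",
     "struggles with voltage fade in Li-rich oxides"],
    ["the community moved toward ML-driven simulation",
     "bio-inspired ion regulation is an emerging cross-domain topic"],
)
print(prompt)
```

The point is that the graph topology itself never reaches the model; only this flattened, ordered text does, which is why the selection and ordering logic upstream matters so much.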
So I am not squeezing everything again into a flat single-manifold structure, even if it is high dimensional; I separate out a very specific persona. This is the blue stream: this is me, for example, or you. And then: what is happening in the world, what is happening in the community? If you are an artist, if you are creative, if you dance, if you make music, whatever: what is happening in your world, and what have you been doing in the last five years? We bring it together and we see what emerges. So this persona agent, and this is the complete framework, overcomes the cognitive flatness that I told you about at the very beginning of this video. How do we do this? Through a recursive GraphRAG that we build. So we use things that we already know; there is a little bit that is new, but everything else is clear. Let's have a look. What I found especially interesting: how would you code a braiding processor? Because what it is doing is just a linearization, so it must be really simple.
In standard RAG, retrieval-augmented generation, the system retrieves a list of documents from external data sources and just pastes them one after another into the LLM, but this is stacking, not braiding. So the LLM often gets confused by contradictory or irrelevant data, because maybe the data we brought back from RAG says the earth is flat, and then the earth is not flat. So what to believe? Let's solve this. Braiding is a much smarter structural merge operation. It does not just pile up the data: the earth is flat, the earth is not flat, the earth is whatever. It weaves two distinct strands of information together to create a stronger rope. I hope with this image I can communicate what I want to tell you. Strand A is of course the self, my knowledge, and strand B is the community, the world. Strand A more or less is: hey, what have I done in the last five years in theoretical physics? This is my personal history.
This is my personal history.", "confidence": -0.16257658004760742 }, { "start": 2981.52, "end": 2985.7599999999998, "text": "It's not a vector, but yeah, it's a high dimensional vector, a tensile structure, okay.", "confidence": -0.17952319795051508 }, { "start": 2986.72, "end": 2992.7999999999997, "text": "And strand B simply, hey, what has everyone else on this planet done and published here on archive?", "confidence": -0.17952319795051508 }, { "start": 2992.7999999999997, "end": 2997.68, "text": "So this is the complete knowledge graph and we have here traversal vector that we can explore", "confidence": -0.17952319795051508 }, { "start": 2997.68, "end": 3003.04, "text": "in the simplest case. So what is this braiding process? It is of course a mathematical function,", "confidence": -0.17952319795051508 }, { "start": 3003.04, "end": 3009.92, "text": "or if you want an algorithm here, that compares these two strands and finds now an interference", "confidence": -0.17952319795051508 }, { "start": 3009.92, "end": 3016.7200000000003, "text": "pattern. You see what? We don't just here add it up. We have a concatenation. No. We have a look now", "confidence": -0.18359569023395406 }, { "start": 3016.7200000000003, "end": 3023.12, "text": "at the interference. So specific points where your unique quirks, my ideas overlap with the", "confidence": -0.18359569023395406 }, { "start": 3023.12, "end": 3030.48, "text": "collective trend here of the research community. Very simple example, but it's the simplest example", "confidence": -0.18359569023395406 }, { "start": 3030.48, "end": 3034.32, "text": "I can think of. 
the individual stream says, hey, you like dark chocolate, and the collective stream says, people who buy red wine also buy dark chocolate, and guess what falls out of the intersection. Yes, you can imagine this. Now, of course, it is a little bit more complicated, and it took me again about 20 minutes until Nano Banana Pro generated this image. I wanted to have it like a Stargate; I don't know if you know this TV series, but exactly. So here we have stream A and here we have stream B: the personal vector, episodic, with all our little boxes of knowledge, and then the collective vector, all the publications that reference other publications, and those reference other publications, and those reference some persona, and this one references some tweets, you get the idea. What is happening here?
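The chocolate-and-wine example above fits in a few lines of code: the "interference pattern" in this trivial case is just the set of collective association rules that are triggered by an individual quirk. The data below is invented purely for illustration.

```python
# Trivial interference sketch: individual quirks x collective rules.

individual = {"dark chocolate"}                    # stream A: what I like
collective = {("red wine", "dark chocolate"),      # stream B: co-purchase rules
              ("espresso", "dark chocolate"),
              ("white bread", "butter")}

# Interference: collective rules whose consequent matches one of my
# quirks suggest new items for me; unrelated rules pass straight through.
suggestions = {a for (a, b) in collective if b in individual}
print(suggestions)
```

Note the asymmetry: "white bread" never surfaces, because it has no overlap with my stream, exactly the kind of irrelevant collective signal braiding is meant to filter out.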
At first I thought I would build it like a DNA strand, a molecular strand, but no; you see here there is still a leftover DNA strand that Nano Banana Pro did not erase, okay? Because this is not the input to our LLM; this is just data pre-processing for our LLM machine. So I have to bring this into a linearized context tensor, with its particular optimization routine, to have the perfect input to the LLM. So what is this? If you are a subscriber of my channel, you understand immediately when I tell you: this is nothing else than a graph neural network attention mechanism that we apply at inference time. Okay. So what is happening here? This is the most important area now.
This braiding processor with our logic gate; the free braid here is not as important, it is just pushed back in space. We just need the perfectly braided knowledge stream that enters the LLM as a linearized tensor structure. Let's do this. If you look at it from the mathematical perspective that I introduced at the beginning of this video, you immediately see that this is a dual-source manifold alignment. The first source is the episodic stream and the second is the collective knowledge stream. A dual-source manifold alignment, followed by a gated linearization. Of course we only have a linear prompt to our LLM, but of course it is not a single equation; that would be too easy, come on, that would not be a topic for one of my videos. It is a computational pipeline that projects a query into two orthogonal vector spaces, again individual and collective (I hope this visualization helps), and then computes their intersection to filter out the noise and rank the relevance.
So let our domain be defined by a heterogeneous knowledge graph of all of theoretical physics. Then we define two distinct submanifolds within this graph structure. You know what they are: the individual manifold, a local subgraph defined by my little brain, and the collective manifold, the beauty of what everybody else on this planet did in the last five years of research, a subgraph reachable through a community traversal. And now the task: stream A gives an individual resonance score that we can calculate, and we call this parameter alpha. It measures how well a candidate node aligns with the user's established history; it combines the semantic similarity with the historical weights. Stream B is of course the collective feasibility score from the whole community; we call
Am I allowed to sink this in my", "confidence": -0.18419744454178155 }, { "start": 3267.0400000000004, "end": 3272.0, "text": "individual vector stream is this really something that the community recognized as yeah this is", "confidence": -0.18419744454178155 }, { "start": 3272.0, "end": 3278.48, "text": "something an object that you do we worth to investigate. Beta computes here the random work", "confidence": -0.18419744454178155 }, { "start": 3278.48, "end": 3283.1200000000003, "text": "probability of landing on the node and starting from the query concepts within the domain graph G.", "confidence": -0.18419744454178155 }, { "start": 3284.0800000000004, "end": 3291.44, "text": "But we do have two parameter alpha and beta. It's a simplification I know please don't write to me", "confidence": -0.17229704423384232 }, { "start": 3291.44, "end": 3296.8, "text": "but there's another parameter yes I know I just want to be here in the main idea. So how is this fusion", "confidence": -0.17229704423384232 }, { "start": 3296.8, "end": 3302.2400000000002, "text": "how is this braiding kernel now operational. You understand that this is the core process allergic", "confidence": -0.17229704423384232 }, { "start": 3302.2400000000002, "end": 3308.4, "text": "that we are talking about. It is not the sum of alpha and beta. We have to perform here a gated", "confidence": -0.17229704423384232 }, { "start": 3308.4, "end": 3313.04, "text": "fusion operation to reject the hallucination and irrelevant noise.", "confidence": -0.14928731322288513 }, { "start": 3314.32, "end": 3318.48, "text": "You remember in the first part of the video I showed you that the hallucination is here now is", "confidence": -0.14928731322288513 }, { "start": 3318.48, "end": 3325.84, "text": "here this big minus here in the grid. So we have a high individual score and zero collective", "confidence": -0.14928731322288513 }, { "start": 3325.84, "end": 3331.36, "text": "support now. 
You remember, in the first part of the video I showed you that hallucination is this big minus in the grid: a high individual score with zero collective support. The hallucination is not supported or published upon by the research community; it only appears in my individual score. And irrelevant noise has a high collective score but zero individual relevance for me: I don't care about something so far away that I don't even understand it. And now we calculate the braided score, S_braid. This is defined, as you know from the title of this video, by a geometric interaction term of two manifolds. And it is no coincidence that I tried to draw this not as a vector but more like a wave function: we are looking at the interference pattern. So I am just going to give you the result. The braided score is calculated from an alpha and a beta in a structure where we have a linear mixture of alpha and beta, so what I know and what the community knows, plus a structural gate. And this structural gate is really important.
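The whole pipeline just described, alpha, beta, and the gated fusion, can be sketched end to end. To be clear, the concrete formulas here (cosine-weighted alpha, a two-step random-walk beta, a simple threshold gate) are my own simplifications of what the transcript describes, not the paper's exact definitions; the toy graph and embeddings are invented.

```python
# Hedged sketch: individual resonance (alpha), collective feasibility
# (beta), and a gated linear fusion into the braided score S_braid.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def alpha_score(candidate_vec, history):
    """Semantic similarity to the user's past, weighted by how
    important each historical item was."""
    total = sum(w for _, w in history)
    return sum(w * cosine(candidate_vec, h) for h, w in history) / total

def beta_score(adj, start_nodes, target, steps=2):
    """Probability that a uniform random walk from the query concepts
    lands on `target` after `steps` hops in the domain graph."""
    dist = {n: 1.0 / len(start_nodes) for n in start_nodes}
    for _ in range(steps):
        nxt = {}
        for node, p in dist.items():
            for m in adj[node]:
                nxt[m] = nxt.get(m, 0.0) + p / len(adj[node])
        dist = nxt
    return dist.get(target, 0.0)

def s_braid(alpha, beta, mix=0.5, tau=0.05):
    """Linear mixture of the two streams times a structural gate that
    zeroes out hallucination (alpha high, beta ~ 0) and irrelevant
    noise (beta high, alpha ~ 0)."""
    gate = 1.0 if (alpha > tau and beta > tau) else 0.0
    return (mix * alpha + (1.0 - mix) * beta) * gate

# Toy domain graph: query concept "battery", candidate "ion regulation".
adj = {
    "battery": ["electrolyte", "cathode"],
    "electrolyte": ["battery", "ion regulation"],
    "cathode": ["battery"],
    "ion regulation": ["electrolyte"],
}
a = alpha_score((1.0, 0.0), [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0)])  # ≈ 0.667
b = beta_score(adj, ["battery"], "ion regulation")                    # ≈ 0.25
print(s_braid(a, b))    # supported on both streams -> nonzero
print(s_braid(a, 0.0))  # hallucination -> gated to zero
print(s_braid(0.0, b))  # irrelevant noise -> gated to zero
```

The gate, not the linear mixture, is what implements the grid from earlier in the video: a candidate that scores high on only one manifold is rejected outright, however high that single score is.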
But if you look at this and think back to the very first arXiv PDF we talked about, MirrorMind, you realize: wait a minute. If this is the interpretation of the mixture process, I can take this idea, go back to the first PDF, and build the identical formula there. And now I say: the braided score for MirrorMind is no longer just an example; here it is. Have a look. So you see, those papers not only have a very similar topic; given the mathematical formula of the second paper, I can derive an equivalent, almost identical idea and come up with a braided score for MirrorMind. And you see they operate differently. Why? Because the first one has a repulsive term and the second has a structural gate. So there is a difference, but otherwise they are really similar. So what is the critical nuance that distinguishes them?
I told you MirrorMind is about the scientific discovery process, and the PersonaAgent is of course about recommendation. While both systems use the braiding mechanism, they use the individual stream alpha for opposite purposes. One is repulsion, and this is MirrorMind: the individual stream acts as a negative constraint. Remember the deep blue gravity well, where I told you: this is what I know best, this is where I am sitting, I am lazy, I don't move out of my comfort zone at all, and I need some force, an impetus, to push me out of there onto the optimal path to P*. So in MirrorMind my alpha is a repulsor; this is the term, our novelty repulsor, if you want to be specific.
So you have an intersection of high domain validity and high persona surprise, and the optimization objective is to find the node n that maximizes this S_braid value, or, in this formulation, its MirrorMind counterpart. Again: alpha, the individual measure, captures how similar the idea is to what the scientist, what I, have already written in the last five years, and beta is the collective validity: all the global publications, what is mathematically possible, what has been peer-reviewed, what has been agreed upon as a really interesting research topic. This is the wireframe grid that I showed you in the first visualization of this video, and we want it to be high, because this is exactly the intersection that we are going to optimize.
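The opposite roles of alpha can be put side by side in a small sketch. In the MirrorMind-style objective, alpha repels (a negative term rewarding persona surprise); in the PersonaAgent-style fusion, alpha is a positive signal behind a structural gate. All weights and thresholds here are illustrative assumptions, not values from the papers.

```python
# Side-by-side sketch of the two objectives. In MirrorMind the individual
# stream alpha acts as a novelty repulsor (a negative term); in the
# PersonaAgent-style fusion it is a positive signal behind a structural
# gate. Weights and thresholds are assumptions for illustration only.

def mirrormind_score(alpha, beta, lam=0.5):
    # Collective validity minus a repulsion away from what I already wrote.
    return beta - lam * alpha

def personaagent_score(alpha, beta, w=0.5, tau=0.05):
    # Gated fusion: both streams must show support, then mix linearly.
    gate = 1.0 if (alpha > tau and beta > tau) else 0.0
    return gate * (w * alpha + (1.0 - w) * beta)

stale = (0.9, 0.8)     # (alpha, beta): familiar idea, well supported
surprise = (0.2, 0.8)  # equally valid, but far from my own history
```

The repulsive objective prefers the valid surprise (discovery), while the gated objective prefers the familiar item (recommendation), which is exactly the nuance that distinguishes the two papers.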
Now of course, as I told you, I chose the title of this video in a particular way. If you read these two preprints in this sequence, and I am just sorting this out for you so that you have an easier learning process, you can come up with this idea. And to those of you who really check whether whatever I tell you is actually written down in the PDFs: here I am going beyond both publications. I am combining them; they were published on the same day, so the authors had no idea of each other's work, but reading them I see they share common ground. So let's do it. My idea, careful, buckle up, is that we can combine PDF one, MirrorMind, with the PersonaAgent to get a unified contextualization and output. So, image one: we have P*, the proposed great new idea, the place where I should go. And all I say is: listen, once I have this idea, I can carry it over into the PersonaAgent, where, as I told you, we work purely in a graph structure with the graph extractor, and I simply bring it in as one more node of the network. That's it. I mean, simple, come on, this is all you have to do to gain some new insights. And I will try to combine both in code; I mean, Gemini 3 Pro will do the coding for me, and maybe I can even get this system operational, let's see. But of course I can insert any node I want, so why not insert the perfect-research-idea node next to the interaction nodes of my personal history? Because this would be my personal future, the very near future, where the system tells me: integrate this into your GraphRAG knowledge graph, because this is the future you should research. And then I combine this with the PersonaAgent as already published, with the concept nodes and the community nodes; we have the braiding machine that does our braiding processing as I already described; and the output is a linearized context window, where, as I showed you, I have the perfect system prompt for me as a persona, for the AI to
}, { "start": 3700.3199999999997, "end": 3705.8399999999997, "text": "be an intellectual sparring partner I have my personal history that I present here to the AI", "confidence": -0.1375240029640568 }, { "start": 3705.84, "end": 3711.36, "text": "the collective signal what has the our community done in the last five years for my particular", "confidence": -0.18999096155166625 }, { "start": 3711.36, "end": 3718.96, "text": "brand new idea and then again now I refine the contextual linear idea this is here the p-star", "confidence": -0.18999096155166625 }, { "start": 3718.96, "end": 3726.2400000000002, "text": "and the collective inside here also from a purely graph structure so you see just", "confidence": -0.18999096155166625 }, { "start": 3726.2400000000002, "end": 3733.92, "text": "braided together everything together and isn't this looking gorgeous now if you want to have to", "confidence": -0.18999096155166625 }, { "start": 3733.92, "end": 3740.88, "text": "go a little bit deeper I further annotated this graph that was built with nano banana pro so here", "confidence": -0.1504410221463158 }, { "start": 3740.88, "end": 3747.6, "text": "you find some additional sorts here from my side but yeah I'm sure you get the idea", "confidence": -0.1504410221463158 }, { "start": 3750.48, "end": 3755.76, "text": "so this image now illustrate here a new solution to the cognitive flatness we want to solve this", "confidence": -0.1504410221463158 }, { "start": 3755.76, "end": 3762.64, "text": "now and we sequentially apply here to simple structural operation we have an optimization as I", "confidence": -0.1504410221463158 }, { "start": 3762.64, "end": 3767.92, "text": "showed you in my own mind so we find a local maximum for novelty within the value constraints", "confidence": -0.1965165592375256 }, { "start": 3767.92, "end": 3774.16, "text": "this is here a blue graph anti contextualization as the second structural operation as I've shown", "confidence": -0.1965165592375256 }, 
{ "start": 3774.16, "end": 3780.4, "text": "today autos of persona agent it so what it is we anchor the maximum if in the heterogeneous", "confidence": -0.1965165592375256 }, { "start": 3780.4, "end": 3786.48, "text": "knowledge graph to ensure it aligns with both the personal history and the social reality of the", "confidence": -0.1965165592375256 }, { "start": 3786.48, "end": 3795.36, "text": "research community take a step back and think about what we have just achieved just reading two", "confidence": -0.12939943166879508 }, { "start": 3795.36, "end": 3804.88, "text": "paper you have read now only two papers structure is the new prompt the intelligence itself is not", "confidence": -0.12939943166879508 }, { "start": 3804.88, "end": 3811.92, "text": "here because this is just the input to the lalm this is not intelligence is encoded in the manifold", "confidence": -0.12939943166879508 }, { "start": 3812.56, "end": 3821.6800000000003, "text": "and in the graph well the lalm serves merely here as a traversal engine that is now computing this", "confidence": -0.11379777060614692 }, { "start": 3823.44, "end": 3830.64, "text": "it is not even computing this because this manifold and the graph are constructing constraints", "confidence": -0.11379777060614692 }, { "start": 3831.28, "end": 3837.52, "text": "on the operational space of the lalm itself so what I want to propose to you", "confidence": -0.11379777060614692 }, { "start": 3838.0, "end": 3847.12, "text": "huh that this shift here defines the next generation of neural symbology why because the locals the", "confidence": -0.24842922149165983 }, { "start": 3847.12, "end": 3853.52, "text": "place of intelligence is shifting now from the parametric knowledge of the lalm the model weights", "confidence": -0.24842922149165983 }, { "start": 3853.52, "end": 3860.96, "text": "the tensor weights itself after vision language model to the non parametric structure to the external", "confidence": -0.24842922149165983 }, { 
"start": 3860.96, "end": 3869.28, "text": "architecture so for my case this would be here my intellectual landscape with the community landscape", "confidence": -0.18906198229108537 }, { "start": 3869.28, "end": 3876.0, "text": "we process here the path my personal path to my personal optimal idea then I bring it here into a", "confidence": -0.18906198229108537 }, { "start": 3876.0, "end": 3882.2400000000002, "text": "pure graph representation I have the degrading process a computing here this and then I have here more or", "confidence": -0.18906198229108537 }, { "start": 3882.2400000000002, "end": 3890.64, "text": "less all the history of mine and all the intelligence and the development of my scientific ideas here", "confidence": -0.18906198229108537 }, { "start": 3890.96, "end": 3898.3199999999997, "text": "all very presented here so I think we are shifting here more away from the lalm is the only", "confidence": -0.09761649540492466 }, { "start": 3898.3199999999997, "end": 3906.24, "text": "source of intelligence and we have a lot more non parametric structure that will do here in front", "confidence": -0.09761649540492466 }, { "start": 3906.24, "end": 3914.8799999999997, "text": "of the lalm the real intelligence work if you want to call it now now maybe you have seen that", "confidence": -0.09761649540492466 }, { "start": 3914.88, "end": 3920.96, "text": "some days ago I posted here on my channel also here the latest research here from medical about", "confidence": -0.19169202665003335 }, { "start": 3920.96, "end": 3930.56, "text": "manifold learning for medical EEG and I've showed you here publication they discovered it really", "confidence": -0.19169202665003335 }, { "start": 3930.56, "end": 3936.88, "text": "depends here on the mathematical space that we construct and they found that the Euclidean", "confidence": -0.19169202665003335 }, { "start": 3936.88, "end": 3943.84, "text": "latent spaces distorted the true structure of the electro-entervalogram they 
said: with this unconstrained vector space, this is not optimal, we cannot safely use AI for medicine here, because nearby neural states may be mapped far apart along a path in this unconstrained vector space, and irrelevant states may become artificially close, which we do not want; the attention operates with the wrong metric operator; and the dynamics prediction must learn the geometry from scratch, which is unstable in itself. The authors found a solution: we have to build a Riemannian variational autoencoder that fixes this by forcing the complete latent space to have the correct curvature. It is simply about the geometry of the space. And they say: once we have fixed the geometry and put constraints on this space, the geometry becomes correct, the geodesic distance becomes meaningful, the geometric attention works properly, and the neural-ordinary-differential-equation trajectories become smooth, consistent, and stable. It is also this paper that I will show you here, and
I have given you a very short introduction: what a Riemannian variational autoencoder is, what the geometric transformer is, in particular how the geometric attention weight is calculated, and why we need manifold-constrained neural ODEs. But have a look at this paper. It is from Yale University, Lehigh University, and the School of Medicine at Yale University, published just a day earlier, on November 20th, 2025. And they did something similar, not the identical idea, but they also said: hey, listen, our solution space is too huge, too unconstrained; it makes no sense, we waste energy, and it is not stable, it is not what we need. So they built a Riemannian variational autoencoder, and then a geometric transformer. And you see, here too we operate on a very particular manifold, with a very particular optimization and, if you want, a very particular positional encoding, for a path-optimization problem. And then we bring this path-optimization problem from a manifold into a pure graph structure,
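A toy illustration (my own, not the paper's code) of why the metric matters: on a curved latent space such as the unit sphere, the geodesic distance differs from the straight-line Euclidean distance, and geometric attention should weight neighbors by the former. Everything here, the sphere, the points, the temperature, is an assumption chosen for clarity.

```python
import math

# Toy illustration of metric distortion on a curved latent space: points
# live on the unit sphere, where the true (geodesic) distance is the
# great-circle arc, while the ambient Euclidean distance under-reports
# separation. Geometric attention uses the geodesic distance.

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def geodesic_sphere(u, v):
    """Great-circle distance between unit vectors: arccos of the dot product."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

def geometric_attention(query, keys, temperature=1.0):
    """Softmax attention weights driven by geodesic, not Euclidean, distance."""
    scores = [math.exp(-geodesic_sphere(query, k) / temperature) for k in keys]
    z = sum(scores)
    return [s / z for s in scores]

north = (0.0, 0.0, 1.0)
near = (0.0, 1.0, 0.0)   # a quarter turn away on the sphere
far = (0.0, 0.0, -1.0)   # antipodal point
weights = geometric_attention(north, [near, far])
```

The antipodal pair sits pi apart along the manifold but only 2.0 apart in the embedding space, exactly the kind of distortion the EEG authors report for Euclidean latent spaces.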
4079.44, "text": "we do the braiding and then we get a result and this is more or less exactly here", "confidence": -0.15013329001034006 }, { "start": 4080.0800000000004, "end": 4085.36, "text": "and a different complexity level what they did here with their architecture in this particular", "confidence": -0.15013329001034006 }, { "start": 4085.36, "end": 4092.32, "text": "paper and they called it a many fold former the geometric deep learning for neural dynamics on", "confidence": -0.15013329001034006 }, { "start": 4092.32, "end": 4099.12, "text": "Riemannian manifolds and this is now my third paper that I want just to show you because I have a", "confidence": -0.15013329001034006 }, { "start": 4099.12, "end": 4104.88, "text": "feeling this is the way we're going with the completed I system it is not that we're going to have", "confidence": -0.16217673078496406 }, { "start": 4104.88, "end": 4112.0, "text": "the next extremely huge alarm and we put all of the intelligence only in this alarm I think this", "confidence": -0.16217673078496406 }, { "start": 4112.0, "end": 4120.48, "text": "would be the wrong way I don't feel the dizziness the right way to go but of course you could say", "confidence": -0.16217673078496406 }, { "start": 4120.48, "end": 4126.32, "text": "okay this is now your idea but let's increase the complexity because if we are playing around that", "confidence": -0.16217673078496406 }, { "start": 4126.32, "end": 4132.32, "text": "we have no help individualization and I don't have to do this visualization by hand I can now think", "confidence": -0.12405695385403104 }, { "start": 4132.32, "end": 4136.96, "text": "a little bit longer no like any idea it seems a little bit longer in a problem so let's increase", "confidence": -0.12405695385403104 }, { "start": 4136.96, "end": 4144.0, "text": "the complexity further yeah so I found a not only this third paper but I found another paper", "confidence": -0.12405695385403104 }, { "start": 4144.0, "end": 
4151.2, "text": "really high level paper that it brings this to a complete new level but it has a coherence in", "confidence": -0.12405695385403104 }, { "start": 4151.2, "end": 4157.2, "text": "the development but I think this is the end of part one I think it the video is already long enough", "confidence": -0.10230496464943399 }, { "start": 4157.2, "end": 4162.48, "text": "but I just wanted to present you some brand new ideas in the eye that I have a feeling will be the", "confidence": -0.10230496464943399 }, { "start": 4162.48, "end": 4169.12, "text": "future of the eye and I have to tell you the next part will a little bit more challenging so I decided", "confidence": -0.10230496464943399 }, { "start": 4169.12, "end": 4176.72, "text": "to do part two of this video and it will be only an expert outlook and I will do it for members only", "confidence": -0.10230496464943399 }, { "start": 4176.72, "end": 4182.16, "text": "because I want to give back to the people to support me with their membership of my channel so I", "confidence": -0.09179196459181765 }, { "start": 4182.16, "end": 4188.16, "text": "want to give back to them and I want to present them just my ideas in the way I see the future of the eye", "confidence": -0.09179196459181765 }, { "start": 4189.68, "end": 4197.2, "text": "so I think part one provides already so many new ideas for the AI community in general but if you", "confidence": -0.09179196459181765 }, { "start": 4197.2, "end": 4203.360000000001, "text": "decided here to support me personally I want to give back to you and therefore part two will show", "confidence": -0.09179196459181765 }, { "start": 4203.5199999999995, "end": 4209.759999999999, "text": "you here my personal thoughts here and we will increase the complexity and we will go a step further", "confidence": -0.11937872223232103 }, { "start": 4209.759999999999, "end": 4214.32, "text": "and I will give you an outlook of the eye that is just what I feel that we are going to move", 
"confidence": -0.11937872223232103 }, { "start": 4214.32, "end": 4220.48, "text": "together as an AI community anyway I hope you enjoyed it was a little bit longer the video but I", "confidence": -0.11937872223232103 }, { "start": 4220.48, "end": 4227.04, "text": "wanted to show you how amazing it can be if you just read two three four five maybe a hundred new", "confidence": -0.11937872223232103 }, { "start": 4227.04, "end": 4233.6, "text": "PDF papers and you see common patterns you develop here common ground you see that everybody is", "confidence": -0.10074545067047404 }, { "start": 4233.6, "end": 4240.24, "text": "moving in the same direction and I just wanted to make it crystal clear to you where this is now", "confidence": -0.10074545067047404 }, { "start": 4240.24, "end": 4246.32, "text": "going to be but of course it could be that we have a brand new development tomorrow but at least", "confidence": -0.10074545067047404 }, { "start": 4246.32, "end": 4252.0, "text": "let's have fun with AI let's play with it it is so beautiful to discover here complete new ideas", "confidence": -0.10074545067047404 }, { "start": 4252.0, "end": 4256.4, "text": "in other federal intelligence so I hope you enjoyed it maybe you want to subscribe maybe you", "confidence": -0.10074545067047404 }, { "start": 4256.4, "end": 4261.04, "text": "even become a member of the channel anyway I hope I see you in one of my next videos", "confidence": -0.20110115137967197 } ], "language": "en", "confidence": 0.0, "transcription_method": "whisper_base", "transcribed_at": "2025-12-03T13:25:29.092593", "duration": 4261.04, "error": null }, "analysis": { "key_topics": [ "AI models", "RAG", "Empowering", "Experts", "artificial intelligence", "LLM", "Dual", "Perspectives", "Multi-modal model", "VLA" ], "quality_rating": 8.5, "content_type": "educational", "target_audience": "advanced", "technical_level": "basic", "content_summary": "All rights w/ authors:\n\"MirrorMind: Empowering OmniScientist 