============================================================
YOUTUBE VIDEO TRANSCRIPT
============================================================

Title: AI Dual Manifold Cognitive Architecture (Experts only)
Channel: Discover AI
Upload Date: 2025-11-27
Duration: 01:11:02
Views: 8,597
Likes: 452
URL: https://www.youtube.com/watch?v=8GGuKOrooJA
ANALYSIS:
--------------------
Quality Rating: 8.5/10
Content Type: educational
Target Audience: advanced
Technical Level: basic
Key Topics: AI models, RAG, Empowering, Experts, artificial intelligence, LLM, Dual, Perspectives, Multi-modal model, VLA
Summary: All rights w/ authors:
"MirrorMind: Empowering OmniScientist with the Expert Perspectives and Collective Knowledge of Human Scientists"
Qingbin Zeng 1 Bingbing Fan 1 Zhiyu Chen 2 Sijian Ren 1 Zhilun Z...

TRANSCRIPT:
============================================================
[0.0s - 3.2s] Hello, community. So great to have you back.
[3.8s - 8.6s] Today I have a little bit of an AI revolution for you. So at first, welcome to our channel,
[8.6s - 14.6s] Discover AI. We have a look at the latest AI research papers, the latest three research papers that
[14.6s - 20.9s] I selected here for this particular video. And I will talk about a dual manifold cognitive
[20.9s - 27.4s] architecture. And I think this is a little bit of an AI revolution. And I will argue that this
[27.4s - 33.3s] might be even the future of the complete AI industry. Let's have a look. Now, you know what is the
[33.3s - 39.8s] problem? Our LLMs operate currently on a single manifold hypothesis. They flatten all the training
[39.8s - 45.1s] data, all the personal habits, all the individual biases, all the historic facts, and all the collective
[45.1s - 51.5s] reasoning of, um, a domain like physics or chemistry into a single high-dimensional probability
[52.5s - 58.3s] distribution. And up until now, this was just perfect. It was great. But I'm going to argue
[58.3s - 66.0s] that our DMCA, our dual manifold cognitive architecture, will define intelligence much better,
[66.6s - 75.0s] not as a next-token prediction like we have currently with our LLMs, but as a geometric intersection
[75.0s - 81.0s] of two distinct topological vector spaces that we are going to build. Now have a look at this.
[81.8s - 89.7s] I'm just amazed at what Gemini 3 Pro Image Preview, my little Nano Banana Pro, can do.
[90.4s - 95.7s] And I spent about 20 minutes describing this image here to Nano Banana Pro. And after three
[95.7s - 102.6s] tries we got this beautiful thing. We're gonna go through each and everything. So let's start.
[102.6s - 108.4s] This is our paper of today. This is here by Tsinghua University in China, November 21st,
[108.4s - 116.4s] 2025: MirrorMind. And the title tells it all. We want here more or less to mirror a real human mind.
[116.4s - 123.5s] We want really to understand a certain scientific personality, empowering the OmniScientist,
[123.5s - 129.7s] the AI scientist, with the expert perspectives and the collective knowledge of human scientists.
[129.7s - 134.7s] So we're not satisfied anymore to build a synthetic AI system, but we want to bring it closer to
[134.7s - 141.3s] the human scientist. You immediately see that we have a common topic, the AI persona agents.
[141.3s - 147.4s] Like in one of my last videos, I showed you the contextual instantiation of AI persona agents
[147.4s - 153.4s] as shown by Stanford University just some days ago. And now we have here the other outstanding
[153.4s - 160.1s] university, Tsinghua University, and they have now the same topic. And they tell us, you know,
[160.2s - 164.8s] when asked to act as a scientist, you know, you have your prompt here to your AI:
[164.8s - 170.1s] hey, act as a financial broker, act as a medical expert, act as a scientist.
[170.1s - 176.9s] A standard LLM up until now relies on a flattened representation of all the textual patterns.
[176.9s - 183.1s] But you know what, it lacks the complete structural memory of a specific individual cognitive
[183.1s - 191.0s] trajectory. And this is what Tsinghua University is trying to map now to advance the AI system.
[191.0s - 198.0s] So what they do, they shift here the paradigm from pure role playing ("you are now a medical
[198.0s - 202.8s] expert"), which is more or less fragile because you have no idea about the pre-training data for this
[202.8s - 210.8s] particular LLM, to a cognitive simulation, which is structured and constrained. I'm going to explain
[210.8s - 217.0s] why we have structure and what are the mathematical formulas for the constraints we're going to
[217.0s - 225.0s] impose on a specific LLM. Now, the authors of MirrorMind argue that scientific discovery
[225.0s - 231.3s] is not just fact retrieval. So we go here to a very specific case, we go into science and we
[231.3s - 236.8s] want to have here a discovery process. I want to find new patterns, new interdisciplinary
[236.9s - 242.7s] patterns between physics, mathematics, chemistry, pharmacology, whatever. So it is about
[242.7s - 248.6s] simulating now the specific cognitive style of a scientist, more or less the individual memory of
[248.6s - 254.9s] a human that is now constrained by the field norms, this means by the collective memory.
[257.3s - 261.0s] And I think this is really the end of the one-size-fits-all age,
[261.7s - 267.6s] because all these, more or less, flat generalist frameworks like ReAct or AutoGen,
[267.6s - 273.0s] they all fail in specialized domains, and I have multiple videos on this. But now we're going to build
[273.0s - 280.3s] not just a digital twin, but a cognitive digital twin. So they really pushed the boundaries here,
[280.3s - 287.2s] well, let's say from simple data repos to a functional cognitive model that can predict
[287.3s - 292.6s] future AI directions, offering here, and this is now the interesting part, a blueprint for
[292.6s - 298.0s] automated scientific discovery. And it's not going to be that simple as we have read here in the
[298.0s - 305.0s] last publications. So I said, let's start here with our little tiny AI revolution and let's have a
[305.0s - 313.4s] look. Now, Tsinghua tells us: we have here now the individual level, the human, the singular
[313.4s - 318.7s] human level. Now we look at the memory structure. And they decided everything that we had up until
[318.7s - 325.8s] now was not enough. So they go now with an episodic layer of memory, with a semantic layer of memory,
[325.8s - 333.0s] and a persona layer. And one layer builds upon the other, and then we build a gravity well. We build
[333.0s - 339.8s] here a force field, if you want, with very specific features. And this is then our first manifold
[339.8s - 346.4s] for our dual manifold braiding. So let's have a look. They start and they say, okay, you know,
[346.4s - 351.8s] the basis here is the episodic memory, you know, all the raw papers, all the facts, everything
[351.8s - 357.8s] that you have, the PDFs, I don't know, the latest 1,000 medical PDFs or the latest 10,000
[357.8s - 365.0s] publications in theoretical physics. Then we go for a semantic memory, where we have,
[365.5s - 372.2s] if you want, an evolving narrative that is now developing of a single person, of the author's research
[372.2s - 379.0s] trajectory. Now, if we go for an individual level, we restrict this here to one person and we just
[379.0s - 384.7s] look at the temporal distillation pipeline of this single person. What has the author written in the
[384.7s - 389.5s] first month? What has the author written in the second month? Then we go through all the 12 months,
[389.6s - 396.5s] we have yearly summaries here, and we want to answer how the thinking of a single scientist
[396.5s - 405.2s] evolved, not just what he has published. So whenever, you know, you give here an LLM or any AI
[405.2s - 412.4s] system that has computer-use access to your files and your local desktop, laptop, whatever you
[412.4s - 419.3s] have, now this is great, because now all those data become available: every email, every file that
[419.3s - 425.5s] you worked on, everything, if you prepared your PhD or you prepared any publication. How many
[425.5s - 431.3s] months have you been working on this? How many versions of the final paper are stored in your
[431.3s - 438.3s] directories? Now, if an AI would have access to this, it would really be able to map your personal
[438.3s - 447.0s] or my personal thinking process, my mental, if you want, evolution here, how I understand this topic.
[447.8s - 453.4s] And if we are able to bring this here into a temporal pipeline, we can distill further
[453.4s - 460.1s] insights. And then if you have this information, let's say of my persona, we have now an agent
[460.1s - 467.3s] or an LLM that can build now my persona schema with all my knowledge about mathematics,
[467.3s - 474.4s] theoretical physics, whatever. So we can build now an abstraction, a dynamic concept network,
[474.4s - 481.6s] capturing now my, let's say, stylistic but also my reasoning preferences. All my knowledge
[481.6s - 488.4s] is now mapped to an AI system. Plus, we have everything timestamped. So we have here, as you see
[488.4s - 493.8s] here in the semantic layer, perfect time series going on for months or even years, depending how much
[493.8s - 501.0s] data you have on your computer. So they say, okay, let's start with the individual person and
[501.0s - 507.0s] let's build this. Let's do this. Let's follow their traces. Okay, the episodic memory
[507.0s - 514.2s] of these three is here the very last layer at the bottom. What is it? We have now what they call
[514.2s - 520.6s] a dual index structure to handle the specificity of the scientific terminology. Now, I don't know
[520.6s - 527.0s] about you, but in theoretical physics we have really long technical terms, also in astrophysics,
[527.0s - 532.3s] long technical terms, in high energy physics, elementary particle physics, long technical
[532.3s - 539.7s] terms; think about medicine, long Latin terms; think about pharmacology. You understand immediately:
[539.7s - 545.4s] you are not allowed to make one single typo. So you cannot give this to an LLM. So what
[545.4s - 551.0s] do you do? You build a hybrid RAG engine. Of course, our good old friend, the RAG machine.
[551.7s - 559.3s] But now the RAG documents are parsed into semantically coherent chunks. So what we do now is we have
[559.3s - 564.2s] a certain chunk, let's say a sentence, or maybe, if I have a complete paragraph and it's a very homogeneous
[564.2s - 571.0s] paragraph, then we have the source document, this is in file number whatever, and we have a
[571.0s - 576.6s] timestamp. So exactly here, the recording: when did I write down this sentence on
[576.6s - 580.7s] my computer, or when did I publish it, or when did I just send it out in an email
[580.7s - 587.8s] to my friends; exactly timestamped here; the complexity of a topic. Now, if you do this for
[587.8s - 594.2s] millions and millions and millions of chunk IDs, you've got no idea where we are. And the
[594.2s - 598.7s] MirrorMind authors say, hmm, you know what? We looked at all the vector search capabilities
[598.7s - 605.1s] and they are often too fuzzy for real science. So what we have to do: we have specific
[605.3s - 611.4s] acronyms or chemical formulas, they all must be exact. You can't go with an LLM that just has a
[611.4s - 617.6s] probability distribution here for the next-token prediction. So therefore we will choose not an LLM
[617.6s - 622.8s] but something different. So now they went with the episodic memory that stores every chunk of
[622.8s - 628.0s] information they found, let's say on my computer here, in two parallel searchable indexes.
[628.6s - 632.9s] And the first is a dense vector index. This is what you know, this is a high-dimensional
[632.9s - 639.8s] embedding via the encoder model of a transformer for the conceptual similarities.
[639.8s - 645.4s] So we build a new mathematical vector space and we say, okay, given the semantic
[645.4s - 651.8s] similarity of my, let's say, 100 files and the content of these files, we can now place the
[651.8s - 657.9s] vectors here in the new vector space, and we can arrange those vectors so that we have conceptual
[657.9s - 664.5s] similarity of the technical terms. But talking about technical terms, we now store them separately,
[664.5s - 671.4s] because we say, hmm, we use now a sparse inverted index. So this is a standard BM25 index for an
[671.4s - 677.8s] underlying exact lexical matching. So we have absolutely the keywords, the symbols, the
[677.8s - 683.0s] technical terms that we have, and they go in a separate index. So there's no mixing up and there's
[683.0s - 688.1s] no hallucination by any LLM. We cannot afford this in physics or chemistry or medicine.
[689.5s - 697.0s] And then, since we have now two specific scientific indexes, we can merge the results via a rank
[697.0s - 703.6s] fusion, a reciprocal rank fusion. And this is the way they set up here the episodic memory
[703.6s - 708.6s] of a single researcher. So this is here all the scientific content over the last five years that I have here, let's say, on my laptop.
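
(A minimal Python sketch of the reciprocal rank fusion step just described: merging a dense-vector ranking with a sparse BM25 ranking. The chunk IDs are made up, and k=60 is the constant conventionally used with RRF, not a value stated in the video.)

```python
# Reciprocal rank fusion (RRF) sketch: merge a dense-vector ranking and
# a sparse BM25 ranking into one list. Chunk IDs are hypothetical.

def rrf_fuse(rankings, k=60):
    """Each ranking is a list of chunk IDs, best first.
    RRF score of a chunk = sum over rankings of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, chunk_id in enumerate(ranking, start=1):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# The dense index surfaces conceptually similar chunks; BM25 surfaces
# exact lexical matches on acronyms and chemical formulas.
dense_hits = ["chunk_42", "chunk_07", "chunk_13"]
bm25_hits = ["chunk_07", "chunk_99", "chunk_42"]
print(rrf_fuse([dense_hits, bm25_hits]))
# ['chunk_07', 'chunk_42', 'chunk_99', 'chunk_13'] -- agreement wins
```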
[708.7s - 715.2s] Right. The next step is here the semantic layer. As you can
[715.2s - 721.5s] see, you know, the semantic memory builds on the episodic layer and performs what they call now
[721.5s - 727.3s] a cognitive distillation. If you're familiar with MapReduce from the very early days of AI,
[727.3s - 732.1s] you know exactly what we're looking at: a MapReduce distill pipeline. This is all there is.
[732.1s - 738.3s] So let's see, they use an LLM to transform them. Now all the definitions from the
[738.3s - 744.1s] episodic layer come up. And now, just to give you an example, I say: analyze the cognitive evolution,
[744.1s - 751.5s] focus on any maturation of ideas of this stupid human, any conceptual shift that you can detect here
[751.5s - 756.6s] in all the hundreds and thousands of files on his notebook, or any changes in the research focus of
[756.6s - 762.6s] this person, or the methodology he uses. Or why, suddenly in, I don't know, April '19, I decided
[762.6s - 767.4s] to go from a particular branch of mathematics to a more complex branch of mathematics, because
[767.4s - 773.8s] the complexity of my problem suddenly increased. An LLM should now distill from all the episodic
[773.8s - 781.9s] layer elements with the timestamps here. As you see here, the MapReduce pipeline. And if we have
[781.9s - 786.6s] this information, you know what we're going to build: we're going to build a trajectory. As you see
[786.6s - 794.5s] here, we have a trajectory over time of trends, of keywords, topics here, whatever clusters you can
[794.5s - 800.0s] define, your clusters, if you're particularly looking for some quantum field theoretical subtopics
[800.0s - 805.7s] here. So you see exactly how my knowledge evolved here over the last five years, and I have to do
[805.7s - 811.9s] nothing, I just give you my laptop and this is it. Now, they model a cognitive trajectory. So they
[811.9s - 818.4s] say, now we distill the semantics. So the system now understands the reasoning link that I had in
[818.5s - 826.2s] my mind between the paper I published, file A on my laptop, and the file B. So what it does,
[826.2s - 832.5s] it captures now what they call the cognitive inertia of my intellectual topics.
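
(A rough Python sketch of that MapReduce-style temporal distillation. `llm_summarize` is a hypothetical stand-in for a real LLM call, and the chunks are invented examples.)

```python
# Map-reduce temporal distillation (sketch). `llm_summarize` stands in
# for an LLM prompt such as "analyze the cognitive evolution, conceptual
# shifts, and changes in methodology".
from collections import defaultdict
from datetime import date

def llm_summarize(texts):
    # Placeholder reduction; a real system would prompt an LLM here.
    return " | ".join(texts)

chunks = [  # (timestamp, text) pairs from the episodic layer (made up)
    (date(2021, 3, 1), "CNNs for molecular property prediction"),
    (date(2021, 9, 1), "first experiments with graph neural networks"),
    (date(2022, 4, 1), "GNNs for molecular modeling at scale"),
]

by_year = defaultdict(list)          # map: group chunks by year
for ts, text in chunks:
    by_year[ts.year].append(text)
yearly = {y: llm_summarize(t) for y, t in sorted(by_year.items())}

# reduce: fold the yearly summaries into one cognitive trajectory
trajectory = llm_summarize([f"{y}: {s}" for y, s in yearly.items()])
print(trajectory)
```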
[834.7s - 838.9s] Now, this is interesting. You see, we have now a five-year timeline of my scientific work.
[838.9s - 844.2s] We have now, in the semantic layer, a complete time series. And guess what we do next?
[844.4s - 851.2s] Yeah, if you want a very simple explanation, think of the semantic memory as a biographer
[852.0s - 856.6s] AI system. It looks at everything that I have on my computer and says, okay,
[856.6s - 862.3s] there's this fellow, oh, and this is the way he's doing science now. So it turns isolated time-
[862.3s - 870.5s] stamps into a cohesive intellectual history. And if we have this, the next step is, of course,
[870.5s - 876.1s] and you already guessed it, we have now a mathematical transformation. We have now the next step
[876.1s - 883.6s] and we go to the persona layer. Now I am modeled in my, what do I call this, scientific intellectual
[885.0s - 891.6s] development. We are now transforming this here from a temporal flow, from the time series,
[891.6s - 896.4s] into a topological structure. And the simplest topological structure that we know is here a
[896.4s - 902.9s] knowledge graph with specific weights here. So we have here a particular focus on some topics,
[902.9s - 908.6s] and I'm going to explain what I mean in a second. The simplest way to explain this is with an
[908.6s - 915.4s] example. Let's say the input signal now entering the persona layer is: let's say in 2023,
[915.4s - 921.1s] the author moved away from his CNNs, convolutional neural networks, and started focusing heavily on
[921.2s - 926.4s] graph neural networks. Now, you know, this is not true, because we did this in 2021 together on this
[926.4s - 931.8s] channel, but just to be here on the safe side, it's just an example. And we did this for
[931.8s - 937.4s] molecular modeling, see my videos from 2021. Okay, great. So what do we do now with this?
[940.2s - 944.5s] The system now understands, looking here at the sentences that come up from the semantic layer,
[944.5s - 948.2s] and says, okay, we have to create some nodes. Now we have to build a topological structure. Let's
[948.3s - 955.2s] have here a knowledge graph. So what is new? We have here CNNs, we have here the GNNs, and we have
[955.2s - 961.8s] molecular and we have modeling. So let's build this. Now, of particular interest is of course the
[961.8s - 968.6s] quality of the nodes. GNNs are not just a subtopic, but a main and major topic, no? Graph
[968.6s - 974.1s] neural networks. So it becomes a concept node. Molecules: there are thousands and millions of
[974.1s - 979.4s] different molecules. So it becomes a concept node again. So you see, we already introduced here
[979.4s - 988.2s] kind of a hierarchical structure in our knowledge graph. And now we have here a certain weighting
[988.2s - 994.2s] that we're going to do, because it might decay or lower now the centrality. This is a graph-
[994.2s - 1000.1s] theoretical feature of the particular nodes here that I explained in one of my videos. And because
[1000.1s - 1007.4s] it is stated, falsely, that it was in 2023, and it was 2021, that I moved away from CNNs, currently
[1008.0s - 1016.8s] the centrality, the importance here on all the subnets of my graph: CNNs are somewhere
[1016.8s - 1024.3s] lower in the importance, no? They're not as important right now. They calculate this with the
[1024.9s - 1030.4s] centrality measures. And if we have this, and here you see it, the persona layer,
[1030.4s - 1035.7s] this is now my profile. I have a profile in machine learning. These are my subtopics. I studied,
[1035.7s - 1041.1s] I learned, I published, I wrote code, or I did not publish and just have it on my computer, whatever.
[1041.1s - 1046.1s] And then we have some work in bioinformatics, I've done something there, whatever
[1046.1s - 1051.2s] other topic you have. How strong are the interlinks? How strong are the edges between these
[1051.3s - 1057.8s] topics? So we build a knowledge graph of my temporal scientific evolution as a scientist.
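
(A toy illustration of such a persona graph. Plain degree centrality stands in for the node "mass" here, since the talk does not specify which centrality measure MirrorMind actually uses; the edge weights are invented.)

```python
# Toy persona graph (sketch): concept nodes weighted by centrality.
# Degree centrality is a stand-in for the unspecified measure; edge
# weights (co-occurrence counts in my files) are invented.
import networkx as nx

G = nx.Graph()
G.add_edge("GNN", "molecular", weight=8)
G.add_edge("GNN", "modeling", weight=5)
G.add_edge("molecular", "modeling", weight=4)
G.add_edge("CNN", "modeling", weight=1)   # CNN work faded after 2021

mass = nx.degree_centrality(G)            # node "mass" for the gravity well
for node, m in sorted(mass.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} mass={m:.2f}")
# GNN, molecular and modeling dominate; CNN's centrality has decayed.
```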
[1059.5s - 1065.5s] But we are not happy with this, because we are going to map this further. So in this step,
[1065.5s - 1071.3s] we mapped it from the temporal flow of the semantic layer, of the time series, into a topological structure.
[1071.3s - 1077.8s] But this topological structure is not really the world where we can have smooth transitions and integrals.
[1078.2s - 1083.4s] This is a graph. Come on, this is bulky. This is not elegant. So what we're going to build is a
[1083.4s - 1088.5s] gravity well. We're going to build a field representation. This is here the blue heat map that
[1088.5s - 1095.8s] you see on top. And this shifts now the center. Let's say somewhere there was GNN; now it shifts
[1095.8s - 1103.4s] here the center here to GNN. So you see, we have a lot of mapping here to have here the
[1103.4s - 1109.6s] internal individual, my personal evolution. But this is not all done by the AI.
[1111.0s - 1116.6s] So now the AI says, okay, let's do some inference. Now it looks at the new topology of the graph
[1116.6s - 1124.0s] and asks, given this new shape, what kind of scientist is this person now? If, I don't know,
[1124.0s - 1129.3s] some AI says, okay, who is this person that does here all these beautiful YouTube videos?
[1130.1s - 1137.0s] What are now his actual current characteristics? And now the system might update here, if it's working
[1137.0s - 1143.0s] now for me, the system prompt in a way that it says, okay, listen, if you work with this guy
[1143.5s - 1149.9s] as an AI, your style has to be highly theoretical, based on first-principles reasoning.
[1150.6s - 1157.2s] So you see, all of this just to arrive at this simple sentence, so that the AI has now a perfect
[1157.2s - 1163.7s] characteristic of my actual learning experience, understanding what I know, what I do not know,
[1163.7s - 1169.9s] and now the AI is the perfect intellectual sparring partner for me. Now the AI system is the perfect
[1169.9s - 1176.9s] professional AI companion for theoretical physics, for bioinformatics, or whatever. So what we have
[1176.9s - 1184.9s] achieved is not only to build me as a perfect mirror mind for the AI to understand, but the AI
[1184.9s - 1193.2s] can now decide to find the perfect complement to my intellectual morphism. So it is the perfect
[1193.2s - 1199.4s] partner for me to have here an augmentation or an acceleration of the research.
[1200.7s - 1204.2s] Now you can look at this of course from a mathematical point of view and say, why was this
[1204.2s - 1210.4s] necessary? I mean, look at this, we went through four different mappings. Why? Well,
[1210.5s - 1217.2s] LLMs cannot calculate a similarity against a story, against my learning. They can calculate it
[1217.2s - 1221.9s] against a vector or a graph state. It is a simple mathematical operation. And now, by converting
[1221.9s - 1227.8s] the trajectory into a weighted graph, the system can now mathematically compute: hey, if I get a new
[1227.8s - 1235.4s] idea, how close is this to the current network, to the current, if you want, gravity well here,
[1235.4s - 1240.1s] or what we call the scientific intellectual capacity of this person.
[1242.5s - 1249.0s] Now we can calculate it. And if we can calculate it, we can code it, in Python, C++, whatever you
[1249.0s - 1255.4s] like. Now, I have already been talking here about this gravity well. And I just call it a gravity
[1255.4s - 1259.5s] well, call it whatever you like. It's just important that you understand the idea.
[1260.1s - 1264.6s] What is it? Now, if we change the framing and look at it from a little bit more of a mathematical
[1264.6s - 1270.6s] perspective, you immediately see it's a probability density field that we derive from the topology
[1270.6s - 1276.6s] of the persona graph. The persona graph allows us this mapping here into an n-dimensional gravity well.
[1278.2s - 1285.3s] So how do we do this? I mean, how can you have just a stupid graph, a flat planar graph,
[1286.1s - 1289.5s] and suddenly you have a three-dimensional beauty of a manifold?
[1290.5s - 1296.2s] The authors tell us the way they decided to go. So here they say, okay, first the system calculates
[1296.2s - 1303.4s] the mass of every existing node in our network. And MirrorMind determines the mass using here
[1303.4s - 1310.7s] a particular graph-specific centrality measure. This is the way they determine now the mass of
[1310.7s - 1316.6s] every node, or if you would say, the importance, I mean, the current temporal
[1316.6s - 1321.9s] evolution of my scientific knowledge. And then they define also the distance.
[1322.7s - 1328.1s] The distance, you notice, is of course given in the space by one minus cosine similarity. Beautiful,
[1328.1s - 1334.1s] if we go here for a simple Euclidean distance; later we are going to discuss some other,
[1334.1s - 1342.2s] hyperbolic spaces, then it becomes a little bit more difficult. Now this blue gravity well is,
[1342.2s - 1349.0s] let's go to the next step of abstraction, a kernel density estimation over the embedding space
[1349.0s - 1355.0s] of the persona graph. Now, I have multiple videos here on this kernel density estimation,
[1355.0s - 1362.0s] but in summary, you can say that the gravity intensity G at a point Q here in my blue gravity field,
[1362.6s - 1368.2s] and let's say Q is now a new idea, is the sum of the influences of all the nodes in the graph,
[1369.2s - 1373.8s] exponentially decaying with distance. I mean, this is the simplest thing you can think of,
[1373.8s - 1378.6s] right? Everything has to contribute to this, but we have an exponential decay function so that
[1378.6s - 1383.6s] not everything is contributing here in equal measure, so that the points
[1383.6s - 1388.6s] that are the closest are the most influential. I mean, it couldn't be easier, you know? And here we have
[1388.6s - 1394.9s] this simple formula that the experts here from Tsinghua University show us.
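
(The formula itself is not reproduced in the transcript, so the sketch below assumes the simplest form consistent with the description: G(q) = sum_i m_i * exp(-d(q, v_i)/sigma), with d = 1 - cosine similarity and m_i the node's centrality mass; sigma and all vectors are made up.)

```python
# Gravity-well intensity as a kernel density estimate (sketch).
# Assumed form, matching "sum over all nodes, exponentially decaying
# with distance": G(q) = sum_i m_i * exp(-d(q, v_i) / sigma).
import numpy as np

def gravity(q, node_vecs, node_mass, sigma=0.25):
    q = q / np.linalg.norm(q)
    v = node_vecs / np.linalg.norm(node_vecs, axis=1, keepdims=True)
    d = 1.0 - v @ q                    # cosine distance to every node
    return float(np.sum(node_mass * np.exp(-d / sigma)))

rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 8))         # 4 persona nodes, toy 8-d embeddings
mass = np.array([0.9, 0.7, 0.4, 0.2])  # centrality masses (made up)
idea = rng.normal(size=8)              # a new idea q
print(gravity(idea, vecs, mass))       # high value = "more of the same"
```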
[1395.0s - 1402.1s] Great. So what did they do? This deep blue visualizes now a specific region of a, let's call it a
[1402.1s - 1408.8s] latent space, where the author feels, or I feel, most comfortable. You see here, in this dark area,
[1408.8s - 1415.0s] I called it "more of the same". This is my expertise. This is what I know exceptionally well how to do.
[1415.0s - 1421.4s] I've worked the last two years only on this dark area here in this gravity well.
[1421.4s - 1429.3s] Those are my topics. These I know well. But of course, if I want to have a brand new discovery,
[1429.3s - 1435.5s] now they argue, hmm, maybe it is not exactly in the same old thing that you do for two years,
[1435.5s - 1439.3s] because otherwise you would have discovered it. So maybe it's somewhere else.
[1441.0s - 1446.3s] And they say now, okay, so what we have to do now is find a mathematical algorithm,
[1446.3s - 1453.4s] a repulsive force that acts on this, if you want, gravity well structure, to bring me out of my
[1453.4s - 1461.5s] minimum, over the mountains, and somewhere beautifully new. So what I need is a novelty repulsor.
[1462.2s - 1468.6s] I have to have a force acting on me, sitting here, bored and doing the same thing over and over again,
[1468.6s - 1475.5s] and not discovering anything new. So push me out here of this, and let's go somewhere we have
[1475.5s - 1483.6s] never been before. So you see, it wants here to simulate the discovery, not the repetition.
[1483.6s - 1489.4s] Repetition is done in the blue. And therefore the algorithm treats here my whole persona graph
[1489.4s - 1496.6s] not as a target to hit, but exactly the negative, as a penalty zone to avoid. Now the
[1496.6s - 1500.8s] thing becomes interesting, because yeah, you can push me out with any force out of here, my stable
[1500.8s - 1506.2s] position at a minimum, but in what direction do you push me? Where should I go and continue my
[1506.2s - 1513.3s] research? And now the authors here say, well, what we have as the second
[1513.3s - 1520.6s] manifold is an external manifold. And this external manifold is here, let's say, OpenAlex.
[1520.6s - 1525.8s] So this is the knowledge of all, I don't know, one million published papers in the topics that I
[1525.8s - 1531.8s] research on. It's a free and open-source database of scholarly research papers, authors, institutions;
[1531.8s - 1536.6s] everything is there. And let's say, okay, this is now the outside world. This is now a second
[1536.6s - 1543.6s] manifold. This here is my personal manifold, and this here is the community manifold in total,
[1543.6s - 1549.4s] the global science community: where they are, what they have done, what they examine, what they
[1550.4s - 1556.8s] feel. And they say, let's do this. And they build now, simple idea, a wireframe grid. So you don't
[1556.8s - 1562.4s] have to build a real smooth manifold, a wireframe grid is enough. You just have some estimation points
[1562.4s - 1568.7s] and you can connect this net in between, isn't it? So what do we add here to my stupidity here
[1568.7s - 1574.2s] on the left side, in the blue valley here? We add, if you want, a social connection to my social
[1574.2s - 1580.2s] community. This is here the research community from astrophysics, and some new ideas might come from
[1580.2s - 1586.9s] astronomy, some new ideas might come from medicine, whatever. So we go now from a simple
[1586.9s - 1594.2s] approach here to an interdisciplinary approach. So we have here now one manifold, the second manifold,
[1594.2s - 1599.4s] and the second manifold is also constructed so that we can clearly detect hallucinations. Because if
[1599.5s - 1606.8s] the LLM suddenly does some hallucination, we can bucket it here into this rabbit hole and say,
[1606.8s - 1612.7s] okay, let's forget about this hole. What we are interested in here is the maximum of the community
[1612.7s - 1618.7s] knowledge. Can I contribute with my knowledge here to the open problems situated here at the top
[1618.7s - 1624.8s] of the mountain here, this particular sweet spot? And you see, I told you a force has to push me out,
[1624.8s - 1631.0s] and this is now the path to the optimal research idea, P-star.
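
(A minimal sketch of that dual-constraint search. The two fields are modeled as toy exponential bumps, and the combination rule score = community - lambda * persona is one reading of the idea, not a formula from the paper.)

```python
# Dual-constraint optimization (sketch): P* should sit high on the
# community manifold but outside my own gravity well. Fields, lambda,
# and the subtraction rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
persona_center = rng.normal(size=8)    # center of my blue gravity well
frontier_center = rng.normal(size=8)   # dense region of community knowledge

def field(center, q, sigma=2.0):
    return float(np.exp(-np.linalg.norm(q - center) / sigma))

def novelty_objective(q, lam=1.0):
    # High where the community is dense, penalized inside my own well.
    return field(frontier_center, q) - lam * field(persona_center, q)

candidates = rng.normal(size=(200, 8))       # candidate research ideas
p_star = max(candidates, key=novelty_objective)
print(novelty_objective(p_star))
```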
[1632.2s - 1639.4s] As easy as can be. And again, thank you to my Nano Banana Pro, because it took me about 20 minutes:
[1639.4s - 1644.3s] I put all the data in, I said, hey, display the summary, I want this and this position
[1644.3s - 1650.3s] over there. And it just did it. There was not one mistake here. Okay.
[1650.5s - 1658.9s] Now, this is the story, this is my story, no, as a scientist. But now, of course, we have to
[1658.9s - 1664.0s] code this. So if you want to code this, we have to work with agents, we have to work with LLMs,
[1664.0s - 1668.2s] we have to work with networks, we have to work with different mathematical operations,
[1668.2s - 1674.6s] like mapping functions. So let's do this now. Okay. So what we have is, the authors say,
[1674.7s - 1681.2s] we need to have a supervisor: we have an interdisciplinary level where the super
[1681.2s - 1688.4s] coordinator agent is supervising everything; notice this is the mastermind. And this coordinator agent
[1688.4s - 1695.9s] decomposes now an incoming query and routes it to particular domain agents that are navigating
[1695.9s - 1702.7s] here the OpenAlex concept graphs, or building the graphs, or the author agents that understand
[1702.7s - 1709.0s] now my scientific personality, no? So the system solves now the proposing of complementary
[1709.0s - 1715.7s] ideas as a dual-constraint optimization. I have both manifolds, and in both manifolds
[1715.7s - 1720.8s] I have constraints. And now I have to do a dual-constraint optimization process in mathematics.
[1721.3s - 1729.1s] Couldn't be easier, no? It is just the perfect path. Let's do this. So the ideas, or if you want, the
[1729.9s - 1737.2s] optimal idea that I as a researcher am looking for, P-star, is forced to exist in the Goldilocks
[1737.2s - 1742.6s] zone right on the rim. It has to be valid science that is accepted by the scientific community,
[1743.3s - 1748.7s] but also really close to my particular areas of expertise, so what I as an author
[1749.5s - 1755.8s] almost developed, almost thought of, but I just didn't do this little tiny baby step.
[1755.8s - 1763.8s] So what we are going for is the easy wins. The AI would analyze: hmm, this particular guy here
[1763.8s - 1769.4s] with his YouTube channel, he did some research here and he was almost there to discover something
[1769.4s - 1776.3s] that the community also indicated; there might be some new element. So let's tell him, hey, go in this
[1776.3s - 1782.4s] direction, learn this and this and this, and then you will make a significant step in your
[1782.4s - 1790.1s] knowledge and discover a new element. So this is now... and now I need a little bit of feedback from
[1790.1s - 1796.6s] my viewers, because I'm now trying to accelerate my learning, but at the same time I'm trying to
[1796.6s - 1803.4s] accelerate my understanding of a visualization, so I can communicate better with you, my viewers,
[1803.4s - 1808.4s] my subscribers, and you, the members of my channel. And this is the first time I really
[1808.4s - 1815.5s] invested heavily into the visuals here with Nano Banana Pro, for example, to build a visualization
[1815.5s - 1824.0s] of a complex theme that is more than 40, 50, 100 papers, and I try to bring it here just onto one
[1824.8s - 1831.6s] simple image. It is not easy, but I will try this if you as my viewers like it and you want
[1831.7s - 1841.7s] this additional visualization. So MirrorMind here, and the next paper, what they call PersonaAgent,
[1841.7s - 1846.8s] demonstrate now that the vector databases here are simply insufficient for complex reasoning.
[1847.4s - 1853.4s] What we need are more complex graph structures and mappings from graph to graph
[1853.4s - 1859.3s] to represent new and established relations between the different memories. And in MirrorMind,
[1859.3s - 1862.3s] I showed you the temporal evolution of my scientific mind.
[1865.3s - 1872.0s] Now, if you have a closer look at this, especially the semantic memory now, it explicitly models how
[1872.0s - 1879.5s] a scientist's mind changes. But you know, do you understand what is happening now? We break with one of the most
[1879.5s - 1885.1s] important theorems that we had in artificial intelligence. And this was that everything is a
[1885.1s - 1891.8s] Markovian system. And suddenly it is not, that I can just look at the system and say, this is the
[1891.8s - 1899.4s] current state of the system, and it is not depending on the history. Because now that you mirror a
[1899.4s - 1906.4s] human brain, a human mind, it is very well depending on my personal history: where I started to learn
[1906.4s - 1912.0s] mathematics, then physics, then whatever. And then, you know, bit by bit, I'm a little bit better here.
[1912.6s - 1918.6s] You have to understand here the time evolution. So suddenly we break with a Markovian state.
[1920.2s - 1926.6s] This means that all algorithms that assume this in LLMs also break and become invalid, inoperable.
[1927.7s - 1930.6s] So now these things become really interesting.
[1933.4s - 1939.0s] And now you might ask, hey, I'm just here to learn how to code an agent. Do agents do any of those
[1939.0s - 1944.4s] operations you are asking for? And I say, I'm so glad that you asked this question.
[1944.4s - 1949.8s] Because now I can tell you about the multi-agent interaction pattern here in the work done,
[1949.8s - 1956.3s] with the coding, here by Tsinghua University. And I want to focus here on the multi-agent cognitive
[1956.3s - 1963.6s] engine. As I told you, we have here an interdisciplinary coordinator, our super AI that understands
[1963.6s - 1967.3s] everything, can sort everything, can plan everything, can execute everything. Great.
[1968.2s - 1975.6s] So what it does, it gets in here my human query: hey, I don't know, find me the next research topic,
[1975.6s - 1979.4s] because I as a human am too stupid to know what I want to go for.
[1979.9s - 1985.4s] Okay, so the AI here says, okay, I send out two query vectors. I send a query vector now to,
[1986.1s - 1991.7s] you know, now I exchanged here the manifolds: this is here my human learning manifold on the right side.
[1992.3s - 1998.2s] And on the left side, they send here the same query vector, in an embedding, in a mathematical
[1998.2s - 2005.0s] tensor structure, now to the other side. And this is here the objective side, so all the
[2005.0s - 2010.3s] hundreds of thousands of research papers that are now suddenly in the brain of the AI system. Of course,
[2010.3s - 2015.0s] this is the collective domain of theoretical physics, of medicine. You get the idea.
[2015.6s - 2020.5s] But let's say we have here built a holographic wireframe wall. So this is my idea. Please
[2021.3s - 2026.6s] go with whatever you like. This is just an illustration; I try to find a way to explain this idea to you.
[2026.6s - 2032.0s] And let's say we have here a domain agent. And the domain agent is just reading every day here
[2032.0s - 2037.2s] the latest AI research publications that have anything to do with theoretical physics. And then we
[2037.2s - 2042.6s] have here an agent that is reading here every single scientific paper that has to do with biology.
[2043.5s - 2049.4s] And they build here their internal representation and their network here, their wireframe here,
[2049.8s - 2055.4s] of the complexity of the topics, of the dependencies here in science. Great. So if you want,
[2055.4s - 2059.7s] we have here the domain knowledge graph of physics combined with biology.
[2061.4s - 2065.6s] And now the query vector comes in. This is a very specific query vector with a brand new idea.
[2066.2s - 2073.8s] And this is now: hey, has the general global research community ever heard of this idea of how I
[2074.6s - 2079.7s] should develop as a human? Is there anything related to it? Is there any publication that
[2079.7s - 2085.7s] gives me help? Is there any publication that guides me in my personal development? Has anybody
[2085.7s - 2091.4s] tried something crazy enough or similar enough? And now we are again working with a cosine
[2091.4s - 2097.7s] similarity in a normal vector space. You see, it explores the space and says, yeah, we found some
[2097.7s - 2102.7s] paths of augmentation; your idea is not as stupid as you think, maybe it's a valid idea.
[2102.7s - 2108.4s] And we provide now, from the complete, if you want, knowledge graph of the world,
[2109.2s - 2115.8s] we provide now the particular output here. This is the green beam. We provide it now as an output.
[2115.8s - 2121.2s] But at the same time, of course, this query vector was sent here to my personal learning manifold.
[2122.6s - 2128.7s] Now, I told you I have a repellent force field here. This is in orange here.
[2128.8s - 2134.7s] But I do not want that, if this query vector comes in, it is already the same as what I'm already
[2134.7s - 2139.9s] doing. So, more of the same, I don't want this. I want to go here for a scientific discovery,
[2139.9s - 2145.4s] go where no one has ever gone before, and you know the story. Now, if this vector here
[2145.4s - 2150.4s] crashes through my force field, it has to have a certain, let's call it, impulse, impetus.
[2151.0s - 2156.3s] And then I will analyze this. Now, I just explained to you here all the different layers here
[2156.3s - 2164.2s] of the individual personality of my mirror mind. And now I discover: is this something,
[2164.2s - 2170.6s] is this an idea that would push me out of my deep blue gravity well into a new direction?
[2171.9s - 2175.8s] And I send out: hey, yeah, this sounds absolutely interesting. This is absolutely novel.
[2175.8s - 2183.4s] I have my experience in the topics A, B, and C. And now I say, hey, this is my specialization.
[2183.4s - 2189.7s] I send out the orange beam, the novelty. So now we have here the knowledge integrator,
[2189.7s - 2196.1s] which is something beautiful. This is now where the braiding is going to happen. We combine now the
[2196.1s - 2202.2s] green beam and the orange beam into something completely new, and the output of this will be my new
[2202.2s - 2207.7s] research direction, my new research title, where I should move to have a scientific discovery, as
[2207.8s - 2215.1s] decided by the AI system. Oh, wow. Okay, let's go with this.
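
(A very small Python sketch of that coordinator / domain-agent / author-agent / integrator flow. All class names, strings, and routing logic are illustrative, not the paper's actual API.)

```python
# Multi-agent flow (sketch): a coordinator routes one query to a domain
# agent (community manifold, green beam) and an author agent (persona
# manifold, orange beam); an integrator braids both answers.

class DomainAgent:                       # navigates the community graph
    def search(self, query):
        return f"community links for '{query}'"

class AuthorAgent:                       # guards my persona manifold
    def assess(self, query):
        return f"novel for me, builds on my topics A, B, C: '{query}'"

class Integrator:                        # braids the two beams
    def braid(self, green, orange):
        return f"proposal = [{green}] x [{orange}]"

class Coordinator:
    def __init__(self):
        self.domain, self.author, self.integrator = (
            DomainAgent(), AuthorAgent(), Integrator())

    def handle(self, query):
        green = self.domain.search(query)    # objective, collective side
        orange = self.author.assess(query)   # subjective, personal side
        return self.integrator.braid(green, orange)

print(Coordinator().handle("find my next research topic"))
```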
[2216.2s - 2222.0s] I hope this is more or less clear right now. If not, I just want to give you an example. How does it work? Let's say we have
[2222.0s - 2227.7s] the idea: hey, let's build a neuromorphic battery. You know, batteries are always our topic use case. So
[2228.2s - 2234.4s] how is now the flow diagram? Now, we have a coordinator agent that takes in here my crazy idea of
[2234.4s - 2240.2s] building here a neuromorphic battery. So the coordinator AI says, okay, I activate now an
[2240.2s - 2245.8s] author agent, or, if I'm already mapped in the system... if not, you can build here
[2245.8s - 2252.5s] your author agent, if you say, hey, build me, yeah, you get the idea. And a domain agent for biology.
[2252.5s - 2259.3s] Great. So if you want, this is me, and then here we have an agent for biology. Great.
[2259.8s - 2265.4s] It activates and creates here the agents. Then your agent, the individual, if you want, the persona,
[2265.4s - 2271.4s] has now access to your persona graph, to the history, whatever I've
[2271.4s - 2277.4s] already researched on cathodes and electrolytes and voltage fade, all the constraints here, and
[2277.4s - 2283.4s] whatever I do every Tuesday, that I build better cathodes. Okay. So it says, don't go there, because
[2283.4s - 2288.2s] this is what he is already doing, and he is not having any discovery at all. So it pushes me away
[2288.3s - 2295.2s] from those areas that I already cover. Then the domain agent, if you want, the collective agent here
[2295.2s - 2301.0s] regarding biology, looks now at all the publications, the biology concepts related to energy.
[2302.2s - 2307.8s] It finds here neural glia cells, the concept of ion regulation, and returns now: yeah, there's
[2307.8s - 2313.4s] something like ion regulation in biology analogous to electrolyte transport in batteries. Maybe there are
[2313.4s - 2318.8s] some hidden patterns here in the understanding and the reasoning in the, I don't know, molecular
[2318.8s - 2325.5s] transport architecture, that we can use now from biology in battery technology. And then comes
[2325.5s - 2330.2s] here the cooperation phase, the optimization, as I showed you in the blue well. The coordinator asks,
[2330.2s - 2335.1s] hey, is this a valid path? The domain agent says yes; I mean, actually, I showed you, I'm reading
[2335.1s - 2341.2s] here 50,000 publications that we have here. The author agent says, I've never mentioned glia cells
[2341.3s - 2346.7s] in my last 50 papers. So this is now for me a completely new topic, but I know everything about the
[2346.7s - 2353.0s] science, no? I just never focused on this particular point of research. So let me do this.
[2353.4s - 2359.4s] And then it scores here a novelty score, and they try to maximize the novelty score. So the
[2359.4s - 2367.2s] AI is now going to give me a brand new topic. And the integrator now generates a final output.
[2367.5s - 2372.5s] And the integrator says, hmm, after having looked at all the AI research papers and what you have
[2372.5s - 2379.2s] learned in your last 18 years, I give you now a proposal: design a self-regulating electrolyte
[2379.2s - 2385.0s] gel that mimics the ion-buffering capacity of a neural glia cell to prevent voltage spikes.
[2386.0s - 2393.6s] This is your topic. This is your PhD. Do it. If you solve it, you're gonna earn millions of
[2393.6s - 2398.5s] dollars. Right. Yeah, you're gonna spend millions of dollars too for the compute, but never
[2398.5s - 2405.0s] mind about this. But this was the first paper. And I know, I told you, I want to accelerate my learning.
[2405.0s - 2409.4s] I want to accelerate my explanation, and we can go to higher complexity, because now, with Nano Banana
[2409.4s - 2416.3s] Pro, hopefully I have a tool to show you my ideas, how I see things, and maybe it becomes
[2416.3s - 2421.2s] clear to you, or you say, hey buddy, no way, what are you thinking? So let's increase here the speed,
[2421.2s - 2427.0s] let's increase here the acceleration. And let's go to another paper. And you see, I placed it here,
[2427.0s - 2432.2s] and this is also a paper from November 21st. This is here from Purdue University, [unclear] State
[2432.2s - 2438.5s] University, and Columbia University. And they have the topic: persona agents with GraphRAG,
[2438.5s - 2443.4s] our good old friend GraphRAG. So what they build is a community-aware knowledge graph for
[2443.4s - 2450.7s] personalized LLMs. And you might think this sounds really similar to what we just did. Ah, of course,
[2450.7s - 2455.4s] what a coincidence that I selected this paper, but both were published on the very same date.
[2456.7s - 2462.2s] Okay, they tell us, and this is just the raw reading, they say: hey, our method improves, for the data
[2462.2s - 2468.1s] organization here, the F1 score by 11%, and for the movie tagging it is now improved by 56%.
[2468.1s - 2474.5s] And I say, okay, if this is the step in the improvement if we use this, let's have a look at this paper.
[2475.0s - 2484.1s] So, persona agents. Let's say you want to build here the little Einstein. No problem.
[2484.1s - 2490.7s] So the authors tell us: okay, our framework generates personalized prompts now for any
[2490.7s - 2497.0s] AI system by combining here a summary of the user's historical behavior. Let's take again
[2497.0s - 2502.1s] me as the user, so my historical behavior, and the preferences extracted from the knowledge graph. So,
[2502.1s - 2507.6s] what am I doing: if I have multiple AI systems from, I don't know, Anthropic, OpenAI, and Google,
[2507.6s - 2512.9s] and Meta and Microsoft on my computer, and all of those AIs have access to my complete computer
[2512.9s - 2518.6s] and to my complete documentation. Everybody has my data. Great. So what do you do with it? And then we
[2518.6s - 2524.5s] have a mixture, and then we have also the global interaction patterns that we see, let's say on social
[2524.5s - 2531.5s] media, all the scientific publications, and who is referencing what other paper. So we have the
[2531.5s - 2537.4s] complete social interaction. Let's go only on the science level. And this can be identified
[2537.4s - 2543.4s] through a graph-based community detection. So, social media: we bring it all together. We have
[2543.4s - 2549.2s] the compute power. No problem. No problem at all. Let's go with the complete science community.
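
(A tiny illustration of graph-based community detection. Greedy modularity via networkx stands in for whatever detection method the paper actually uses, and the citation graph is invented.)

```python
# Graph-based community detection (sketch) on a tiny invented
# citation/interaction graph; the paper's actual algorithm is not named
# in the talk, this just shows the idea.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("paper_A", "paper_B"), ("paper_B", "paper_C"),  # physics cluster
    ("paper_X", "paper_Y"), ("paper_Y", "paper_Z"),  # biology cluster
    ("paper_C", "paper_X"),                          # one cross-field link
])
for community in greedy_modularity_communities(G):
    print(sorted(community))
```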
[2549.2s - 2555.3s] And let's build here, with this user history, someone who is definitely not an Einstein. How can he become
[2556.2s - 2563.1s] a little Einstein now? So they tell us here, and this is not my Nano Banana, this is done here
[2563.1s - 2569.0s] by the authors here, you see that it's not as beautiful: they say we have a user profile
[2569.0s - 2573.5s] construction. And I will explain everything to you. You know, I have personal preferences,
[2573.5s - 2578.4s] the relevant concepts, the interaction statistics of me, all the emails, whom I talked to,
[2578.4s - 2583.0s] whom I cooperate with, who published what paper; and then they have the external knowledge graph
[2583.8s - 2587.2s] construction. So, what is happening currently in quantum field theory and theoretical physics,
[2587.2s - 2592.2s] in computational science: all the interaction nodes, the concept nodes, concepts we have all
[2592.2s - 2597.7s] encountered, no? Then they have categories: theoretical physics, mathematics, biology, whatever.
[2597.7s - 2602.2s] You know, and then all the semantic relations; remember the cosine similarity in a normalized
[2602.2s - 2606.9s] vector space. So we have the user data and the community data, and then we bring them all together
[2606.9s - 2614.2s] in a mixer, and then we have a personalized agent that is now almost a substitute for this human.
[2614.2s - 2618.8s] But the personalized agent we can develop much faster, no? This will become a machine that is
[2618.8s - 2623.5s] much more intelligent than the human user. This is me, by the way. So what would this be? We build a
[2623.5s - 2628.2s] semantic memory, and they say, hey, I noticed you just talked about this, and I say, yeah, of course.
[2628.2s - 2632.4s] And then we need an episodic memory, and they say, hey, this was the first layer, yes, of course.
[2632.4s - 2635.8s] And then we have a community context, and I say, where is the surprise? So you see,
[2636.7s - 2642.4s] a completely different place, at the very same day, published something that is almost identical.
[2643.0s - 2650.6s] And they now generate here a personalized prompt that they then feed to the LLM to get a really
[2650.6s - 2656.8s] highly specialized, personalized response.
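
(A minimal sketch of how such a personalized prompt could be assembled; every field name and string here is illustrative, not the paper's actual template.)

```python
# Personalized prompt assembly (sketch): combine a user-history summary,
# preferences from the knowledge graph, and the community context into
# one system prompt. All fields and wording are illustrative.
def build_prompt(history_summary, preferences, community_context, query):
    return (
        "You are a personalized assistant.\n"
        f"User history: {history_summary}\n"
        f"Preferences: {', '.join(preferences)}\n"
        f"Community context: {community_context}\n"
        f"Task: {query}"
    )

print(build_prompt(
    "five years of physics notes, shifted from CNNs to GNNs in 2021",
    ["graph neural networks", "molecular modeling"],
    "the GNN community is moving toward geometric deep learning",
    "suggest my next research topic",
))
```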
[2656.8s - 2663.8s] Now, the beauty of what they do is they work only with GraphRAG. So they are not going here with BM25 or with some dense algorithm. They are here
[2663.8s - 2669.3s] on the graph level. They operate only on the graph level. Really nice. So let's go there.
[2670.0s - 2676.1s] So what we have now, from a graph topology, what we want is the output in a linearized context here for
[2676.1s - 2681.9s] a stupid LLM. If you want, this is here the braiding mechanism that I was already talking about.
[2681.9s - 2688.5s] And here again, what a coincidence, I asked Nano Banana Pro to generate here an almost identical
[2688.5s - 2695.3s] image here for our braiding process, for our machine that brings here everything together.
[2696.6s - 2701.7s] Okay, let's start. So what we have again, as I told you: we start now not with the
[2701.7s - 2707.3s] three levels of memory, but we are now operating here in a GraphRAG system. So we have here a graph,
[2707.3s - 2714.2s] and in this graph I have now the interaction nodes of my history. So that's me, the user, right here; now we
[2714.2s - 2720.3s] are somehow in a movie domain. So Ghost in the Shell, and then I watched The Matrix, I watched The Matrix again, and
[2720.3s - 2726.2s] then I read here a particular book about this, and you see, okay, so these are my interaction nodes.
[2726.2s - 2732.3s] These are here the things. Then they build here what they call, where is it, the concept nodes.
[2732.3s - 2738.4s] These are the triangles. So this goes to cyberpunk, this goes here to dystopia, this goes here to
[2738.4s - 2743.9s] virtual reality, and you see we already have kind of a hierarchical structure here of our node layers.
[2744.7s - 2749.7s] And then we have pure community nodes. These are the global interaction nodes:
[2750.6s - 2754.6s] in general, all the people on this planet who like Ghost in the Shell or whatever,
[2754.6s - 2760.4s] The Matrix, whatever you like to use here. So you build here a network.
[2761.5s - 2764.9s] Now this network has, of course, if you want, two components.
[2765.5s - 2771.8s] The first component is here my personal stream. Then we have here how the community,
[2771.8s - 2776.7s] let's go again with the last five years, so how I developed in the last five years and how
[2776.7s - 2782.7s] the research community developed in the last five years. And then we have to bring it together
[2782.7s - 2790.0s] in this braiding process, or bipartite fusion operator, whatever you like to call it. We'll have a look
[2790.2s - 2796.1s] in detail at what this is doing and how it is doing it. But just the idea. And then, after we
[2796.1s - 2802.8s] now linearize this complexity, we have now, for the LLM context window, we can create a system prompt:
[2802.8s - 2811.4s] we can have a stream A of my personal history and the stream B where I tell the AI, look, in these
[2811.4s - 2817.8s] five years, my subcommunity in theoretical physics developed this and this and this.
[2818.3s - 2824.2s] And now this is the information for you as an LLM. This is my input to you as an LLM, and now
[2824.2s - 2831.6s] you, LLM, do the job. So you see, we are here in the pre-processing of the data for an LLM.
[2833.4s - 2841.1s] So you see again, looking here at the graph distribution, we have here the user manifold
[2841.1s - 2847.4s] and we have, if you want, the community manifold. And now these two streams here are brought
[2847.8s - 2855.7s] together. So I'm not again squeezing everything into a flat one-manifold structure, even if it is
[2855.7s - 2862.1s] high dimensional, but I separate here a very specific persona. This is the blue stream. This is
[2862.1s - 2867.7s] me, for example, or you too. Hey, what is happening in the world? What is happening in the community?
[2867.7s - 2873.0s] If you are an artist, if you are creative, if you dance, if you make music, whatever: what is
[2873.0s - 2877.4s] happening in your world? And what have you been doing the last five years? And we bring it together
[2877.4s - 2885.9s] and we see what emerges. So this persona agent, and this is the complete framework here,
[2885.9s - 2890.8s] overcomes now the cognitive flatness that I told you about here at the very beginning of this video.
[2891.8s - 2897.3s] How do we do this? Through a recursive GraphRAG that we build. So we use something that we know;
[2897.3s - 2902.6s] there's nothing new, well, there's a little bit new, but everything else is clear. Let's have a look.
[2903.8s - 2909.1s] So what I found especially interesting: how would you code a braiding processor? No, in code,
[2909.9s - 2916.6s] because what it's doing, it's just a linearization, so it must be really simple. And in standard RAG,
[2916.6s - 2920.3s] our retrieval-augmented generation, the system retrieves a list of documents here from
[2920.3s - 2927.8s] external data sources and just pastes them one after another into the LLM, but this is stacking,
[2928.3s - 2935.2s] this is not braiding. So the LLM often gets confused by contradictory or irrelevant data,
[2935.2s - 2940.8s] because maybe in the data we brought back from RAG is "the earth is flat" and then "the earth is
[2940.8s - 2948.4s] not flat". So what to believe? So let's solve this. Braiding is now a much smarter structural
[2948.4s - 2953.7s] merge operation. It doesn't just pile up the data, so "the earth is flat", "the earth is not flat",
[2953.7s - 2961.4s] "the earth is whatever". It weaves now two distinct strands of information together to create a stronger
[2961.4s - 2968.8s] rope. I hope with this image, I can communicate what I want to tell you. So the strand A is of course
|
|
[2968.8s - 2975.5s] the self. So this is my knowledge and a strand B is the community, the world. So strand A more or
|
|
[2975.5s - 2980.6s] less is, hey, what have I done the last five years in theoretical physics? This is my personal history.
|
|
[2981.5s - 2985.8s] It's not a vector, but yeah, it's a high dimensional vector, a tensile structure, okay.
|
|
[2986.7s - 2992.8s] And strand B simply, hey, what has everyone else on this planet done and published here on archive?
|
|
[2992.8s - 2997.7s] So this is the complete knowledge graph and we have here traversal vector that we can explore
|
|
[2997.7s - 3003.0s] in the simplest case. So what is this braiding process? It is of course a mathematical function,
|
|
[3003.0s - 3009.9s] or if you want an algorithm here, that compares these two strands and finds now an interference
|
|
[3009.9s - 3016.7s] pattern. You see what? We don't just here add it up. We have a concatenation. No. We have a look now
|
|
[3016.7s - 3023.1s] at the interference. So specific points where your unique quirks, my ideas overlap with the
|
|
[3023.1s - 3030.5s] collective trend here of the research community. Very simple example, but it's the simplest example
|
|
[3030.5s - 3034.3s] I can think of. Hey, I say at the individual stream is, hey, you like dark chocolate and the
|
|
[3034.3s - 3038.6s] collective stream is people who buy red wine also buy dark chocolate and guess what they
|
|
[3038.6s - 3043.9s] separated out, but it's yes, you can imagine this. Now, of course, it is a little bit more complicated
|
|
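A minimal sketch of this interference idea, using the chocolate example as toy data (all items and numbers are invented for illustration):

# Individual stream: my own affinities (toy numbers).
individual = {"dark chocolate": 0.9, "green tea": 0.4}

# Collective stream: co-purchase strength mined from everyone else.
collective = {("dark chocolate", "red wine"): 0.8,
              ("green tea", "honey"): 0.3}

# Interference: a candidate only scores where my personal signal and the
# collective association overlap -- a product, not a concatenation.
scores = {}
for (item, candidate), strength in collective.items():
    scores[candidate] = max(scores.get(candidate, 0.0),
                            individual.get(item, 0.0) * strength)

print(max(scores, key=scores.get))  # -> red wine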
[3043.9s - 3050.3s] Now, of course, it is a little bit more complicated, and it again took me about 20 minutes until nano banana pro generated this image. I
[3050.3s - 3055.1s] wanted to have it like a Stargate; I don't know if you know this TV series, but exactly. So here we
[3055.1s - 3061.0s] have stream A, and here we have stream B: the personal vector, episodic, with all our little boxes
[3061.0s - 3066.2s] of knowledge, and then here the collective vector, all the publications that have references to all the
[3066.2s - 3070.7s] other publications, and those reference other publications, and those reference here a persona,
[3070.7s - 3077.8s] this references here some tweets, or, you get the idea. What is happening here? At first I thought
[3077.8s - 3083.8s] that I would build it like a DNA strand, a molecular strand, but no, because what I want is this
[3083.8s - 3091.0s] input, and you can still see the DNA-strand idea here; it was not what I needed, as rendered here by nano banana pro, okay?
[3091.0s - 3097.6s] Because this is not the input to our LLM. This is just data pre-processing for our LLM
[3097.6s - 3104.7s] machine. So I have to bring this to a linearized context tensor that has its own particular optimization
[3104.7s - 3113.5s] routine to produce the perfect input to the LLM. So what is this? Now, if you are a subscriber
[3113.5s - 3118.6s] of my channel, you understand immediately when I tell you: you know, this is nothing else than a
[3118.6s - 3127.6s] graph-neural-network attention mechanism that we apply at inference time. Okay. So what is happening
[3127.6s - 3134.1s] here? This is the most important area now: this braiding processor with our logic gate. And how
[3134.1s - 3141.0s] I drew the braid here is not that important, it is just pushed back in space; we just need here
[3141.0s - 3148.2s] the perfectly braided knowledge stream that enters here the LLM as a linearized tensor structure.
[3148.6s - 3156.6s] Let's do this. Now, if you look at it from the mathematical perspective that I introduced at the
[3156.6s - 3160.9s] beginning of this video, you immediately see that this is a dual-source manifold alignment.
[3160.9s - 3167.7s] The first source is here the episodic stream, and the second here is the collective knowledge stream.
[3168.4s - 3175.9s] A dual-source manifold alignment, followed by a gated linearization. Of course we
[3175.9s - 3181.0s] only have a linear prompt here to our LLM, but of course it is not a single equation. That would be
[3181.0s - 3186.2s] too easy, no, come on; that would not be a topic for one of my videos. It is a computational
[3186.2s - 3192.9s] pipeline to project a query into two orthogonal vector spaces, again individual
[3192.9s - 3199.0s] and collective, I hope this visualization helps, and it computes now their intersection to filter
[3199.0s - 3205.6s] out the noise and rank relevance. So let our domain be defined by a heterogeneous knowledge
[3205.6s - 3211.0s] graph of all of theoretical physics. Then we define two distinct submanifolds within this
[3211.0s - 3216.6s] graph structure. Now you know what it is: it is the individual manifold, a local subgraph
[3216.6s - 3221.4s] defined here by my little brain, and a collective manifold, the beauty of what everybody else on this
[3221.4s - 3227.1s] planet did in the last five years of research, a subgraph reachable through a community traversal.
[3227.7s - 3236.7s] And now the task: stream A yields an individual resonance score that we can calculate, and we
[3236.7s - 3242.2s] call this parameter alpha. So this measures how well a candidate node aligns with the user's
[3242.2s - 3247.9s] established history. It combines the semantic similarity with the historical weights.
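A minimal sketch of how alpha could be computed, assuming we already have embeddings for the candidate node and for my past work (the exact weighting is my assumption, not the paper's formula):

import numpy as np

def alpha_score(node_vec, history):
    """Individual resonance: cosine similarity of a candidate node to my
    past work, weighted by how important each past item is to me.
    `history` is a list of (embedding, historical_weight) pairs."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    weights = np.array([w for _, w in history])
    sims = np.array([cos(node_vec, v) for v, _ in history])
    return float(np.dot(weights, sims) / weights.sum())

rng = np.random.default_rng(0)
past = [(rng.normal(size=16), 1.0), (rng.normal(size=16), 0.5)]
print(alpha_score(rng.normal(size=16), past))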
[3248.6s - 3253.8s] Stream B gives of course the collective feasibility score from the whole community; we call
[3253.8s - 3260.1s] this parameter beta, and this measures now how strongly the node is supported by the topology
[3260.1s - 3267.0s] of the domain graph itself. So, more or less: is this a valid node? Is what I am allowed to think in my
[3267.0s - 3272.0s] individual vector stream really something that the community recognizes as, yes, this is
[3272.0s - 3278.5s] something, an object that is worth investigating? Beta computes here the random-walk
[3278.5s - 3283.1s] probability of landing on the node when starting from the query concepts within the domain graph G.
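Beta can be sketched as a short random walk over a toy adjacency-dict graph (the paper's traversal is certainly richer; this only illustrates the landing-probability idea, and the graph and names are invented):

import random

def beta_score(graph, query_concepts, target, steps=3, walks=2000, seed=0):
    """Probability that a random walk of a few steps, started from the
    query concepts, lands on `target` -- a proxy for how strongly the
    domain graph's topology supports that node."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = rng.choice(query_concepts)
        for _ in range(steps):
            neighbors = graph.get(node, [])
            if not neighbors:
                break
            node = rng.choice(neighbors)
        hits += (node == target)
    return hits / walks

G = {"gauge theory": ["duality", "tensor networks", "lattice QCD"],
     "duality": ["holography"],
     "tensor networks": ["holography"],
     "lattice QCD": []}
print(beta_score(G, ["gauge theory"], "holography"))  # roughly 2/3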
[3284.1s - 3291.4s] But we only have two parameters, alpha and beta. It is a simplification, I know, please don't write to me;
[3291.4s - 3296.8s] yes, there is another parameter, I know, I just want to stay with the main idea here. So how is this fusion,
[3296.8s - 3302.2s] this braiding kernel, now operationalized? You understand that this is the core processing logic
[3302.2s - 3308.4s] that we are talking about. It is not the sum of alpha and beta. We have to perform here a gated
[3308.4s - 3313.0s] fusion operation to reject the hallucinations and the irrelevant noise.
[3314.3s - 3318.5s] You remember, in the first part of the video I showed you that the hallucination is
[3318.5s - 3325.8s] this big minus here in the grid. There we have a high individual score and zero collective
[3325.8s - 3331.4s] support: the hallucination is not supported by the research community or published upon; it is
[3331.4s - 3338.2s] only apparent in my individual score. And the irrelevant noise has high collective
[3338.2s - 3343.9s] scores but zero individual relevance for me; I don't care for something that is so far away that
[3343.9s - 3351.3s] I don't even understand it. And now we calculate here the braided score, S_braid.
[3352.2s - 3358.2s] And this is now defined, since you know the title of this video, by a geometric interaction
[3358.2s - 3364.4s] term of two manifolds. So I told you we are going to look here, and it is not a coincidence that I
[3364.5s - 3369.4s] tried to draw this not as a vector but more like a wave function. We are looking here at the
[3369.4s - 3376.3s] interference pattern. I am just going to give you the result: the braided score is calculated
[3376.9s - 3382.8s] with an alpha and a beta, in a structure where we have a linear mixture of alpha and beta,
[3382.8s - 3387.2s] so what do I know and what does the community know, and a structural gate.
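A minimal sketch of such a gated fusion; the hard threshold gate and the mixing weight lam are my reading of the idea, not the paper's exact formula:

def braided_score(alpha, beta, lam=0.5, tau=0.2):
    """Gated fusion: a linear mixture of the two streams, multiplied by a
    structural gate that collapses to zero when either stream is missing.
    - high alpha, beta near 0 -> hallucination, gated out
    - high beta, alpha near 0 -> irrelevant noise, gated out"""
    gate = 1.0 if min(alpha, beta) > tau else 0.0
    return gate * (lam * alpha + (1 - lam) * beta)

print(braided_score(0.9, 0.0))  # hallucination      -> 0.0
print(braided_score(0.0, 0.9))  # irrelevant noise   -> 0.0
print(braided_score(0.7, 0.6))  # supported overlap  -> 0.65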
[3388.3s - 3393.4s] And this structural gate is now really important. But you know, if you look at this and you think
[3393.4s - 3399.7s] about the very first arXiv PDF that we just talked about, the MirrorMind, you understand: wait a
[3399.7s - 3407.0s] minute. If this is the interpretation of the mixture process, I can take this formulation,
[3407.8s - 3415.6s] come back to the first PDF, and also build the identical formula there. And now I write down the
[3415.6s - 3423.0s] braided S for the MirrorMind as well. Have a look at this. So you see, those papers
[3423.0s - 3429.5s] not only have a very similar topic, but, given the mathematical formula of the first paper
[3429.5s - 3438.8s] and of the second paper, I can now induce an equivalence, an almost identical idea, where I can come
[3438.8s - 3445.4s] up with the braided score for the MirrorMind, and you see they are operating differently.
[3445.8s - 3452.9s] Why? Because the first one has a repulsive effect and this one has a structural gate.
[3453.6s - 3460.6s] So there is a difference, but they are otherwise really similar. So what is the critical nuance
[3460.6s - 3465.1s] that distinguishes them? I told you, MirrorMind is for the scientific discovery process here,
[3465.9s - 3472.7s] and the persona agent here is of course about recommendation. While both systems use
[3472.7s - 3478.4s] the braiding mechanism, they use the individual stream alpha for opposite purposes.
[3479.3s - 3484.8s] One is repulsion, and this is the MirrorMind: the individual stream acts as a negative constraint.
[3484.8s - 3489.2s] Remember, this was the deep blue gravity valley where I told you: this is what I know best,
[3489.2s - 3496.5s] this is where I am sitting; I am lazy, I don't move at all out of my comfort zone, and I need now some
[3496.6s - 3503.4s] force, an impetus, to move me out of here onto the optimal path to P-star. So this is now, in
[3503.4s - 3512.1s] MirrorMind, a repulsor: my alpha. And again, this is here the term, our
[3512.1s - 3517.1s] novelty repulsor, if you want to be specific. So you do have an intersection of high domain
[3517.1s - 3524.1s] visibility and high persona surprise, and the optimization objective is to find the node N
[3524.2s - 3530.5s] that maximizes this S_braid value, or, in this formulation here, for the MirrorMind.
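Under the same notation, the MirrorMind variant could be sketched like this, with alpha flipped into a penalty (the exact repulsor term and the weight mu are my simplification, not the paper's formula):

def mirror_mind_score(alpha, beta, mu=0.8):
    """MirrorMind-style objective: beta (collective validity) must stay
    high, while alpha now *repels* -- ideas too close to my own history
    are penalized, pushing the search out of the comfort-zone valley."""
    return beta - mu * alpha

def best_novel_node(candidates):
    """Pick the node N with high domain visibility and high persona surprise."""
    return max(candidates, key=lambda n: mirror_mind_score(n["alpha"], n["beta"]))

nodes = [
    {"name": "my old topic",    "alpha": 0.9, "beta": 0.9},
    {"name": "novel but valid", "alpha": 0.2, "beta": 0.8},
    {"name": "fringe idea",     "alpha": 0.1, "beta": 0.1},
]
print(best_novel_node(nodes)["name"])  # -> novel but valid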
[3531.8s - 3537.4s] Again, alpha, the individual measure, captures how similar the idea is to what the scientist, what I,
[3537.4s - 3542.3s] have already written in the last five years, and beta is the collective validity: all the global
[3542.3s - 3547.4s] publications, that is, what is mathematically possible, what has been peer-reviewed, what has
[3547.4s - 3552.5s] been agreed upon as, yeah, this is a really interesting research topic. This is the wireframe grid that
[3552.5s - 3558.6s] I showed you in the first visualization of this video, and we want this to be high, because
[3559.8s - 3566.9s] this is now exactly the intersection that we are going to optimize. Now of course, as I told you,
[3566.9s - 3572.6s] I chose the title in a particular way: if you read these two preprints in this sequence,
[3573.5s - 3577.7s] and I am just sorting this out for you here so that you have an easier learning process,
[3578.4s - 3584.5s] you can come up with this idea. So, for those persons who are really checking whether whatever I tell you
[3584.5s - 3590.9s] is really written down in the PDFs: no, I am not going beyond both PDF publications, I now combine
[3590.9s - 3595.9s] them. They were published on the same day, the authors had no idea of each other, but,
[3595.9s - 3603.0s] now reading those, I see they have common ground. So let's do this. So my idea, careful, buckle up,
[3603.0s - 3610.1s] is that we can combine PDF 1, MirrorMind, with the persona agent to get a unified contextualization and
[3610.1s - 3618.8s] output. So image 1 is clear: we have P-star, the proposed great new idea where I have to go, and now all
[3618.8s - 3625.7s] I say is: listen, if I now have this idea, I can bring it over into the persona agent, where, I told
[3625.7s - 3631.0s] you, we are working purely in a graph structure, the graph extractor of the persona agent, and I
[3631.0s - 3637.9s] just bring this over as one node of the network. This is it. I mean, simple, come on, this is all
[3637.9s - 3646.1s] you have to do to get some new insights. And I am trying to combine both in code; I
[3646.1s - 3653.1s] mean, Gemini 3 pro will do the coding for me, and maybe I can build this system operationally. Let's
[3653.6s - 3661.7s] see. But of course I can insert any node I want, so why not insert the perfect-research-idea
[3661.7s - 3668.3s] node here into the interaction nodes of my personal history? Because this would be my personal
[3668.3s - 3673.3s] future, the very near future, where this system tells me: integrate this into your
[3673.9s - 3678.7s] knowledge graph, because this is the future that you should research. And then
[3679.4s - 3684.5s] I just combine this with the persona agent as already published, with the concept nodes, with
[3684.5s - 3689.8s] the community nodes. Here we have the braiding machine that does our braiding processing as
[3689.8s - 3695.3s] I already described to you, and then the output is a linearization, a linearized
[3695.3s - 3700.3s] context window, where, as I showed you, we have the perfect system prompt for me as a persona, for the AI to
[3700.3s - 3705.8s] be an intellectual sparring partner: I have my personal history that I present to the AI,
[3705.8s - 3711.4s] the collective signal, what our community has done in the last five years, relevant to my particular
[3711.4s - 3719.0s] brand-new idea, and then, again, I refine the contextualized linear idea. This is here the P-star
[3719.0s - 3726.2s] and the collective insight, also from a purely graph structure. So you see, we just
[3726.2s - 3733.9s] braided everything together. And isn't this looking gorgeous? Now, if you want to
[3733.9s - 3740.9s] go a little bit deeper, I further annotated this graph that was built with nano banana pro, so here
[3740.9s - 3747.6s] you find some additional thoughts from my side, but yeah, I am sure you get the idea.
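A toy sketch of this combination; everything here, the node format, the function name, the braiding rule, is my own illustration of how the two papers could be wired together, not code from either one:

def combine_and_linearize(persona_graph, community_graph, p_star):
    """Step 1 (MirrorMind, assumed done upstream): p_star is the novel,
    community-valid research idea found on the manifold.
    Step 2 (persona agent): insert p_star as one more node of my personal
    interaction graph, then braid it against the community graph.
    Step 3: linearize the braided result into a context window."""
    persona_graph["p_star"] = p_star                      # one new node, that is all
    braided = {k: v for k, v in persona_graph.items()     # toy braiding rule: keep
               if k in community_graph or k == "p_star"}  # community-backed nodes
    lines = [f"- {k}: {v}" for k, v in braided.items()]
    return "Refined persona context:\n" + "\n".join(lines)

persona = {"tensor networks": "my 2021 paper", "gauge duality": "my 2023 preprint"}
community = {"tensor networks": "active topic", "geometric deep learning": "rising"}
print(combine_and_linearize(persona, community, "braided dual-manifold persona agent"))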
[3750.5s - 3755.8s] So this image now illustrates a new solution to the cognitive flatness. We want to solve it
[3755.8s - 3762.6s] now, and we sequentially apply two simple structural operations. We have an optimization, as I
[3762.6s - 3767.9s] showed you in MirrorMind: we find a local maximum for novelty within the validity constraints;
[3767.9s - 3774.2s] this is the blue graph. And the contextualization, the second structural operation, as shown by the
[3774.2s - 3780.4s] authors of the persona agent. So what is it? We anchor that maximum in the heterogeneous
[3780.4s - 3786.5s] knowledge graph to ensure it aligns with both the personal history and the social reality of the
[3786.5s - 3795.4s] research community. Take a step back and think about what we have just achieved by just reading two
[3795.4s - 3804.9s] papers. You have read now only two papers: structure is the new prompt. The intelligence itself is not
[3804.9s - 3811.9s] here, because this is just the input to the LLM. No, the intelligence is encoded in the manifold
[3812.6s - 3821.7s] and in the graph, while the LLM serves merely as a traversal engine that is now computing this.
[3823.4s - 3830.6s] And it is not even computing this, because the manifold and the graph are constructing constraints
[3831.3s - 3837.5s] on the operational space of the LLM itself. So what I want to propose to you
[3838.0s - 3847.1s] is that this shift here defines the next generation of neuro-symbolic AI. Why? Because the locus, the
[3847.1s - 3853.5s] place of intelligence, is shifting now from the parametric knowledge of the LLM, the model weights,
[3853.5s - 3861.0s] the tensor weights themselves of the vision language model, to the non-parametric structure, to the external
[3861.0s - 3869.3s] architecture. So in my case this would be my intellectual landscape together with the community landscape:
[3869.3s - 3876.0s] we compute the path, my personal path, to my personal optimal idea, then I bring it into a
[3876.0s - 3882.2s] pure graph representation, I have the braiding process computing this, and then I have, more or
[3882.2s - 3890.6s] less, all my history and all the intelligence and the development of my scientific ideas
[3891.0s - 3898.3s] all represented here. So I think we are shifting more and more away from "the LLM is the only
[3898.3s - 3906.2s] source of intelligence", and we have a lot more non-parametric structure that will do, in front
[3906.2s - 3914.9s] of the LLM, the real intelligence work, if you want to call it that. Now, maybe you have seen that
[3914.9s - 3921.0s] some days ago I posted here on my channel the latest research about
[3921.0s - 3930.6s] manifold learning for medical EEG, and I showed you a publication where they discovered it really
[3930.6s - 3936.9s] depends on the mathematical space that we construct, and they found that Euclidean
[3936.9s - 3943.8s] latent spaces distorted the true structure of the electroencephalogram. They said: with this, you
[3943.8s - 3951.0s] know, this unconstrained vector space, this is not optimal, we cannot use AI for medicine here, because
[3951.0s - 3956.6s] nearby neural states may be mapped far apart in this unconstrained vector space, irrelevant states
[3956.6s - 3963.1s] may become artificially close, which we do not want, the attention operates with the wrong metric operator,
[3963.1s - 3967.7s] and the dynamics prediction must learn the geometry from scratch, which is unstable in itself.
[3968.6s - 3972.6s] And the authors found a solution, and they said: we have to build a Riemannian variational
[3972.8s - 3979.7s] autoencoder that will fix this by forcing the complete latent space to have the correct curvature.
[3980.3s - 3986.7s] It is just about the geometry of the space, and they say: once we have fixed the geometry and put
[3986.7s - 3994.4s] constraints on this space, the geometry becomes correct, the geodesic distance becomes meaningful, the
[3994.4s - 3999.4s] geometric attention works properly, and the neural-ordinary-differential-equation trajectory
[3999.4s - 4006.1s] becomes smooth, consistent and stable. And it is also this paper that I will show you here.
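To make the "correct geometry" point concrete, here is a tiny illustrative stand-in, not the paper's architecture: geometric attention on a unit hypersphere, where the distance is the geodesic arc length arccos(u . v) instead of the Euclidean distance:

import numpy as np

def geodesic_attention(query, keys, temperature=0.5):
    """Attention weights from geodesic (arc-length) distances on the unit
    sphere: nearby states along the manifold get high weight, even when a
    flat Euclidean embedding would distort those distances."""
    q = query / np.linalg.norm(query)
    K = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    d = np.arccos(np.clip(K @ q, -1.0, 1.0))  # geodesic distance on the sphere
    logits = -d**2 / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
print(geodesic_attention(rng.normal(size=8), keys))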
[4006.8s - 4011.5s] And I have given you a very short introduction: what is a Riemannian variational autoencoder, what are
[4011.5s - 4016.3s] geometric transformers, in particular how the geometric attention weight is calculated, and why do we
[4016.3s - 4023.0s] need manifold-constrained neural ODEs. But have a look at this paper. This is from Yale University,
[4023.8s - 4031.4s] Lehigh University, and a school of medicine at Yale University, and this is
[4031.4s - 4039.7s] from just a day before, November 20th, 2025, and they did something similar. Not the identical idea,
[4039.7s - 4044.3s] but they also said: hey, listen, our solution space is too huge, it is too unconstrained, it does not make
[4044.3s - 4049.9s] sense, it wastes energy and everything, and it is not stable; it is not what we need.
[4050.0s - 4056.0s] And they built a Riemannian variational autoencoder, then they built a geometric transformer,
[4056.6s - 4062.2s] and you see, here too we operate on a very particular manifold with a very particular
[4062.2s - 4068.6s] optimization and a very particular positional encoding, if you want, for a path-optimization
[4068.6s - 4074.8s] problem; and then we bring this path-optimization problem from a manifold into a pure graph structure,
[4074.8s - 4079.4s] we do the braiding, and then we get a result. And this is more or less exactly,
[4080.1s - 4085.4s] at a different complexity level, what they did with their architecture in this particular
[4085.4s - 4092.3s] paper, and they called it a ManifoldFormer: geometric deep learning for neural dynamics on
[4092.3s - 4099.1s] Riemannian manifolds. And this is now the third paper that I just want to show you, because I have a
[4099.1s - 4104.9s] feeling this is the way we are going with complete AI systems. It is not that we are going to have
[4104.9s - 4112.0s] the next extremely huge LLM and put all of the intelligence only into this LLM. I think this
[4112.0s - 4120.5s] would be the wrong way; I don't feel this is the right way to go. But of course you could say:
[4120.5s - 4126.3s] okay, this is now your idea, but let's increase the complexity. Because now that we are playing around,
[4126.3s - 4132.3s] now that we have help with visualization and I don't have to do these visualizations by hand, I can think
[4132.3s - 4137.0s] a little bit longer, like an AI that thinks a little bit longer about a problem. So let's increase
[4137.0s - 4144.0s] the complexity further. So I found not only this third paper, but another paper,
[4144.0s - 4151.2s] a really high-level paper, that brings this to a completely new level while staying coherent with
[4151.2s - 4157.2s] this development. But I think this is the end of part one; the video is already long enough,
[4157.2s - 4162.5s] and I just wanted to present you some brand-new ideas in AI that, I have a feeling, will be the
[4162.5s - 4169.1s] future of AI. And I have to tell you, the next part will be a little bit more challenging, so I decided
[4169.1s - 4176.7s] to do a part two of this video, and it will be only an expert outlook, and I will do it for members only,
[4176.7s - 4182.2s] because I want to give back to the people who support me with their membership of my channel. So I
[4182.2s - 4188.2s] want to give back to them, and I want to present them my ideas on the way I see the future of AI.
[4189.7s - 4197.2s] So I think part one already provides so many new ideas for the AI community in general, but if you
[4197.2s - 4203.4s] decided to support me personally, I want to give back to you, and therefore part two will show
[4203.5s - 4209.8s] you my personal thoughts, and we will increase the complexity and go a step further,
[4209.8s - 4214.3s] and I will give you an outlook on AI, just the way I feel we are going to move
[4214.3s - 4220.5s] together as an AI community. Anyway, I hope you enjoyed it. The video was a little bit longer, but I
[4220.5s - 4227.0s] wanted to show you how amazing it can be if you just read two, three, four, five, maybe a hundred new
[4227.0s - 4233.6s] PDF papers and you see common patterns, you develop common ground, you see that everybody is
[4233.6s - 4240.2s] moving in the same direction, and I just wanted to make it crystal clear to you where this is
[4240.2s - 4246.3s] going. But of course it could be that we have a brand-new development tomorrow. At least
[4246.3s - 4252.0s] let's have fun with AI, let's play with it; it is so beautiful to discover completely new ideas
[4252.0s - 4256.4s] in artificial intelligence. So I hope you enjoyed it. Maybe you want to subscribe, maybe you
[4256.4s - 4261.0s] even become a member of the channel. Anyway, I hope I see you in one of my next videos.