🎤 Voice Memos

54 recordings

Thursday, February 26, 2026
3:23 PM · 3:05
Essence

This memo outlines revisions to a technical specification, focusing on simplifying data ingestion for a 'timing lane' by directly copying its database structure, clarifying quality checks for summaries, and detailing a phased backfill strategy with increasing test complexity.

Summary

The speaker is revising a technical specification, specifically for a 'timing lane' where they've decided to remove the requirement for it to be queryable. Instead, they'll directly copy its existing database structure to get the raw data, processing it later, and only ingesting collected data, not app-specific settings. They also want to add a note that the quality check step verifies summaries for coherence. For the backfill strategy, they're adding more detail about using automated testing, natural testing, validation drivers, and smoke tests. The plan is to start with simple, individual tests for each modality (e.g., one photo description, one voice memo transcription) and then gradually build up to full-length tests. If these initial tests are successful, they'll proceed with the full backfill process using yesterday's data, which will serve as an additional validation of the system's functionality.

Full transcript
Reading further through the specification, under schema requirements by lane, for the timing lane we wrote that it must be queryable in certain ways. Let's just remove that. For that one specifically, it's already a database. We essentially just have to copy the database structure it already uses, because then we get the most raw view of the data and we can figure out how to process it ourselves later, in a future iteration. We don't need to ingest absolutely everything, because some of the contents are like, here are my settings and preferences in the app, and I never even open the app anymore. It's just software running in the background on my computer. So we want to take the data that it collects, but not the data about the app itself, if that makes sense. No processing needed for that. For the step where it does the quality check itself, just mention also that it checks the summaries and that they make sense. Maybe it would be inferred already, but just add that. It's like one or two words to add. Specify that. Under the backfill strategy, specify slightly more, with like Cody automation, with natural testing, validation drivers and smoke tests. And then, for each of the modalities we're processing, we'll do individual tests. So like try one photo description, one video transcript and description, one voice memo transcription, a single YouTube video ingestion. You know, simple tests. And then we're slowly gonna build up to full-length tests. So after that, if everything still looks good, that's when we try doing all the stuff from yesterday. By then I'd already expect it to be pretty much finished and ready for tonight. But then we kind of start the backfilling process, and that works as an extra validation that it's actually working. So then you run essentially a real end-to-end test on the stuff from yesterday, where you actually do it.
I think that works and it continues. This is already specified.
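The phased rollout this memo describes (one small smoke test per modality, then full-length tests, then the real backfill on yesterday's data as an end-to-end validation) could be sketched roughly like this. Every function name, modality name, and sample path below is hypothetical, invented for illustration, not taken from the actual spec or pipeline:

```python
# Hypothetical sketch of the phased backfill validation: single-item smoke
# tests per modality first, then full-length tests, then the real backfill.
# All names and paths are illustrative.

SMOKE_TESTS = [
    ("photo_description", "samples/one_photo.jpg"),
    ("video_transcript_and_description", "samples/one_video.mp4"),
    ("voice_memo_transcription", "samples/one_memo.m4a"),
    ("youtube_ingestion", "samples/one_video_url.txt"),
]

def run_stage(name, items, process):
    """Run one validation stage; stop the rollout on the first failure."""
    for modality, sample in items:
        ok = process(modality, sample)
        print(f"[{name}] {modality}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False
    return True

def rollout(process):
    # Stage 1: single-item smoke tests per modality.
    if not run_stage("smoke", SMOKE_TESTS, process):
        return "fix smoke failures before continuing"
    # Stage 2: full-length inputs (placeholder paths derived from the samples).
    full = [(m, s.replace("one_", "full_")) for m, s in SMOKE_TESTS]
    if not run_stage("full-length", full, process):
        return "fix full-length failures before continuing"
    # Stage 3: the real end-to-end test -- backfill yesterday's data.
    return "proceed with yesterday's backfill"
```

Passing a `process` callback that stands in for the real pipeline keeps the staging order testable on its own, which matches the memo's point that the backfill itself doubles as the final validation.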
40dac9a9f8841f1c1c99070a824ad5a2ab065259d2e14726eb9703087695af05_461078e1c60e.m4a
Wednesday, February 25, 2026
11:18 PM · 18:09
Essence

The speaker is grappling with feelings of stress, inadequacy, and an existential crisis about his life's direction, particularly concerning his career, finances, and personal growth, while also battling self-imposed pressures and anxieties about productivity and social connection.

Summary

The speaker reflects on a day spent with a friend, training and discussing AI, which leads to a "dreaming session" about his current anxieties. He feels stressed and like he's falling behind in his AI work, struggling to make progress with a specific cloud project. He questions the value of being in "cool communities" for information, believing official releases and YouTube are sufficient, and acknowledges the main benefit is meeting like-minded people. He's experiencing a loop of unproductive thoughts, feeling lost and unsure of his life's purpose, especially regarding his career and financial future. He struggles with the pressure to make money, feeling a stigma about relying on family, and questions his confidence without traditional metrics of success. He also describes a psychological trap during workouts with others, feeling anxious about not strictly following his routine and fearing wasted time, even when the session is largely productive and enjoyable. He contemplates Naval's definition of intelligence as getting good things out of life, questioning whether to pursue intelligence or simply a good life. He describes an "existential crisis" while walking alone, feeling a core of anxiety about his life's direction. Despite these struggles, he tries to zoom out and be positive, noting he's broken old bad habits and that his current "vices" โ€“ training with a friend and socializing โ€“ are actually beneficial. He expresses frustration with the cloud project, finding it difficult to achieve the results he sees online despite his technical skills, leading to wasted time and a struggle to disengage from the computer. He recognizes a lack in his social life, which feels fine in the moment but becomes a source of inadequacy when comparing himself to others. He concludes by acknowledging the high expectations set by online showcases that don't always deliver, and plans to be more present in the real world and avoid getting sucked into unproductive pursuits.

Full transcript
Yo, I have a little bit of a dreaming session now, as you can see. So we're home from the city. Was uh hanging out with Flynn today, so we uh went for training before the session, we just some face place, got some coffee, we just like film the conversation. Go by me after the training. Albert randomly texted me during the session about this cloud called me. I wanted to go there with him. He like it was coming most of the time to the house, like the stuff that we're presenting, I'm already seeing on YouTube, or at least I don't know. I think for most of the people there, they would like being ahead to watch it, but for me, I actually felt like being behind, because I do think the approach I have is the best, it's actually like you've seen the best news information just watching the official releases from the big companies, of course, like OpenAI and stuff, and then just watching stuff on YouTube or on X, and I don't think you need to be in any of these cool communities and stuff, because it's just gonna come on YouTube. There's not like secret information there for the most part. I think the best value is just that you uh you meet other people that are interested in the same thing. Anyways, I'm not gonna spend so much effort documenting stuff like that. I'm just like gonna talk about my like thoughts, which, you know, it's what I have documented at least, because I have data on all this stuff. It's a weird dilemma. And now, all the time to panic. I'm feeling stressed, like I'm missing out, I'm too slow. I'm trying to work on this AI stuff, and I'm working on the cloud code. I feel like it's going too slow and I'm not making any progress, not getting anything made. I was like taking a walk after the AI meetup before I met with Flynn, I just had some time to think, but I feel like you're not like that. I'm not able to read what you're sending me, like, thoughts. I feel like I'm just gonna stop. It's like the same thoughts, it's just loop. Maybe. 
Or the thoughts like drift, but they're not concrete. I don't know, like, I didn't find I was able to think productively about anything, to be honest. That's kind of why I'm also recording right now, I think it's gonna help me think with more direction. Yeah, I feel stressed out, but I mean my family's still at home and it kind of seems like I have no plan. Like, what the fuck am I doing? I'm trying to like pursue some interest, do some come out, but what the fuck is it like completely? Nobody knows, not even I know. I got a lot tomorrow, I don't know. I wanna be able to just lay that question to the side, like how are you gonna make money? I wanna just not worry about that. I wanna just worry about focusing on what you're interested in, like keeping up with that, trying to build something, trying to learn and grow, help other people, and that should be it. And I can see reasons why people do that, because they have the financial freedom to just pursue whatever they want, and they do that and then they end up making money off that thing as well, even though it's not the goal. So I'm thinking I could do the same, but I mean, I am in a different situation because I don't already have my finances covered, and I have a higher pressure to make money. I can kind of rely on my family, but it's like, it technically works, but it's socially a stigma and like personally for kind of my soul and my mentality. I don't know if it's good, to be honest. Like, how can I have confidence to speak about anything when I have like none of the metrics of success? That's a stupid statement, I do have some success in terms of appearance, attractiveness, like physique, health. I'm doing well, presence and conversations, but there are many other places where I have no success, especially like career wise or money wise, you know I don't do it. Job wise, no fucking job. Fucking job. Something else.
During the workout today with Flynn, something I felt many times during the workouts, but again now, when I train with someone else, I also get this anxiety because I'm not following my training plan. It's not exactly my usual session. I'm afraid I'm wasting my time if the training is not productive and that I'm not following the exact routine. And so it's not as loggable data wise. And it's kind of like I feel like I wasted this session even though I fucking went to the gym and trained. And there's this part of my mind that tells me like it wasn't real training because I didn't 100% follow. It's like normal alone session that I do. Which is weird. So I did fucking train. And for the most part, it's a very similar session, it's slight differences. I may have pushed slightly less to failure than, or done slightly fewer sets. But it depends on how I feel that session when I'm training alone as well. Like sometimes I will also not go as hard. It was really a very normal session, to be honest. It's just, I just, I just feel like almost because I had more fun that it doesn't count. It's almost like, I, does that, I feel like I didn't do properly, but I think when I think about it objectively, I realize that no, I did actually do it almost exactly the way I usually do it, to be honest. Yeah, so that's a weird psychological trap I'm in, I guess. I've been thinking about this quote from Naval, you know, very wise dude that a lot of people listen to online and I'm inspired by. Like his wisdom. He says the best definition of intelligence is that you get good things out of life. So it's not that, you know, you do life successfully, or it's that you get what you want because you can maybe infer then from that definition that some people have been super rich but not happy, then they're not intelligent from that definition. Or it's not a good life. I don't know if I should be pursuing intelligence or just like a good life. I should probably just be pursuing a good life, right? 
I don't know, intelligence feels important, but is it really? It does feel important to me. I really on a small scale felt today like this existential crisis, just what is life, what am I doing? As I was just walking around feeling kind of lost after the event before I met with Flynn. I was like back and just walking endlessly around the city and not having anyone to talk to or anything to do with my phone or something like that. I feel like at that moment it like surfaces what is never usually there, but like this just core of massive anxiety in the core of my being for some reason. I don't know why and I'm exaggerating a little bit. Mostly I was just normal, fresh in person, but I just have all these thoughts like, fuck, what am I even doing with my life? I don't know, man. Anyways, I can also zoom out, be objective, you know, what's the truth? Or to be positive. Well, I'm very locked in by the way I usually measure it. And with the thing of like new bad habits that I keep saying, I stop with that whole bad habits with the Netflix addiction or whatever. And if you're gonna have bad habits, like my vice is at least fix a new one. And I do have that now. I don't indulge in any of the shit I did before. And my bad habit is that I trained the session that wasn't as per plan because I did it with a friend, so I had more fun. And that I spent this afternoon not working on building something, but instead just being social. That's actually great. That's actually what I need, you know. Yeah. Most of the time I'm just frustrated because I feel very stuck with this open claw thing where it's supposed to do all this amazing shit and I just cannot get it to do it. I feel like it's hard to even articulate what that problem is. It's that I've seen showcases online of doing amazing stuff and people having set up search cool systems with it. And it seems to be decently easy to do. 
And then when I try it myself, despite the fact that I am quite technical and know how to program, still when I try to do it myself, I'm just not able to achieve the same results. We just run into bugs and I just end up spending a lot of time and not getting as much done as I thought. And I get frustrated. And I struggle to put the computer away even when I pass like the time. I know I don't wanna spend that much time on the computer. I wanna live a balanced life. I just struggle with that. I always in the moment wanna just do more on the computer, more on the computer, more on the computer. Then I know my social life is lacking, which feels fine in the moment. So okay, I don't worry about it. But when I see every mind of traces of how other people live much more rich social lives, then I feel like I have such a huge lack. Suddenly I feel like I have nothing. I don't know, it's interesting how that's relative because I just gotta get used to it and not do it, I guess. I just don't worry about it. And in fact, there's a nice peace to it, right? And I can really focus on what I wanna do. I mean, I know I've heard people say like, it's hard to do it in the moment, but really like, even if I'm only chapter or living with your family or whatever, you should appreciate that because some other people that have the different social life, which I'm wishing for, a part of them is also gonna wish for what I have right now, which is just any chill and quiet to

Let's see. Anyway, I didn't find it. It's annoying, man. It's just only, I get set up in a way and talked about in a way where you just put high expectations and then it doesn't quite deliver in the end. But it's all fixable. I would just, I'm kind of tricked myself. So am I done? I don't know. I'm gonna think, I'm gonna think that I anymore now. I'll just wanna, I'm home soon. I'm gonna just vibe with some music, go home, go to bed. And then I got tomorrow what I'm gonna do, but I'm gonna need to make extra, be extra worried I'm not getting sucked into stuff. Like stay very present in the real world. It was annoying because I feel like I don't want this shit done. I don't know what I'm doing.
258ace3fa363951165521ca1fd52e31db037bcf2d6c50985f79a8b8a6032beb6_7ee494d0b915.m4a
Tuesday, February 24, 2026
11:34 PM · 12:03
Essence

Despite a day of personal wins, the speaker is frustrated by an obsessive, manual, and seemingly inefficient struggle with OpenClaw, questioning its current capabilities and their own approach, while acknowledging the need to prioritize rest.

Summary

The speaker is frustrated by their inability to disengage from OpenClaw work, feeling stuck in a cycle of manual debugging and questioning if they are the bottleneck or if the technology itself isn't ready for the advanced automation they're attempting. They admit to an unhealthy obsession with the project, working on it all day despite a plan for limited engagement, driven by the perceived immense potential just beyond their current reach. They wonder if their struggles are due to a psychological illusion, their own lack of skill, or if the online claims of advanced AI automation are exaggerated myths. Despite this tech-related frustration, the rest of their day has been positive, marked by successful adherence to personal plans, productive work, effective training, and a growing sense of belonging at their gym, which they find incredibly rewarding after a period of loneliness. They plan to meet a friend for training tomorrow, but for tonight, they recognize the urgent need to stop micromanaging the OpenClaw project, go to bed, and approach it with more balance and a focus on real-life priorities tomorrow.

Full transcript
Yo, quick day update. I really need to go to bed now, but I'm frustrated because I don't feel finished with the OpenClaw work for today and I like, I just keep going, keep going. It's just like error debugging, whatever. I feel like a lot of what I'm doing is like unnecessary manual work. Like I'm the bottleneck, but like, I need to stay with it all the time. Uh, sorry, Rick, I'm just gonna read some response there and then, I'm still working with it. So I'm in dialogue right now, which is while I was working, I'm recording this voice memo. Yeah, I think it's still going. Um, so yeah, so I'm, I'm frustrated, but I just really need to put away the computer right now and go to bed. I'm starting to feel a little bit tired, which is good. It's way past my usual bedtime last days, but then also I worked later today. So kind of makes sense. But yeah, with OpenClaw today, you know, I've so failed my plan of like only working like one 90 minute block at the start of the day. I've been using OpenClaw all day, all day. If you check my screen time from today, I have a shit ton of screen time on both my computer and my phone, but it's almost all like Telegram or the OpenClaw dashboard. Like there's no content consumption. There's no like degenerate stuff. It's all kind of work. It's all OpenClaw, but it's just way too much. I'm like obsessed. I'm not able to put it away. So it's a, uh, kind of like an addiction in a bad way. And like, you know, bad habits considered, it's like you'd want to have it like work, right? It's kind of productive. It's cool. I'm creating something, but still I want to stay in control, have a good balance, stay in real life. I just get so excited by this because I feel like the potential is so huge and I just need to get like a little bit further, a little bit further for it to become so powerful.
Right now it's just kind of done, but if I just get it, I feel like there's some point I need to get it past, which is not too far away where it's going to become like super powerful and I can use it more hands-off and while like living in real life. I feel like I'm just trying to push for that point, but maybe it's not even real. Maybe it's just a psychological illusion. I don't know. I'm really trying to get this like voice mode to work, but yeah, I don't know. Um, since we have so many areas now, I'm wondering if maybe I should have, I just ditched it from the first part. Maybe that technology is just not ready yet. And I should just wait for them to release the mobile app or something. But I mean, I feel like it should work. So I don't know. There's definitely some workflow issues here. Anyways, otherwise the day has been good, man. Like I've been, you know, following my plan, doing everything properly. I've been doing real work. I did my training properly and the warm up. I also did snow shuffling at home. I had a new outfit today. Like I'm looking attractive. My hair is good. My skin is, I still have some dryness issues, but it's good if I use products. Um, eating clean, everything's good. I'm avoiding the oats for breakfast this week. So today for breakfast, I had like, what's it called in English? In Norwegian, we say rundstykker. So it's kind of like bread, but kind of like buns. I don't know, but it's not like, it's like bread buns that you usually have for breakfast. They're like small round breads, essentially, that you usually cook and then you cut one in half. And then, yeah. And then you eat a couple of those. I guess it's, it's similar to a bagel maybe. Uh, I had that and then with butter and scrambled eggs. And then I ate some more like bread and butter and cheese throughout the day, protein shake and then workout and then mom's dinner later was like fish and salad. It's like fish cakes, probably not so much carbs. They're more of like keto dinner. 
Um, yeah, workout, especially now, you know, I've been going to this multiple times. I'm really, you know, now know some of the people there, see some familiar faces. I'm really starting to feel more at home there and part of the group and like, I know people and that's really fucking cool. And it's something I was thinking about how like, oh shit, I remember when I was so lonely, like a month ago. And I like heard this advice, like, okay, just go to pick a training thing, go there consistently and then you don't even need to do anything for the first time. Just over time, you will become a familiar face and get to know people. And I was like, yeah, sure, buddy. Like I need to do something more extreme, but actually now I didn't even do something that extreme. I just stopped worrying about it and now it's like naturally worked out. And now I like know more and more people there every day and now I kind of feel at home there, which is super fucking cool. So, um... Yeah, that was cool. I would maybe wanna yap more about that, but I'm just tired right now. I don't wanna talk about it. Tomorrow, change of plans. You know, I'm doing the training still, but now I decided to meet Flynn as well. He like asked me if I wanted to train together, so I get some social input there. That's cool. I really wanna make progress on this OpenClaw stuff. I really wanna get it to a point where you can like do long stuff overnight, but I guess just tonight isn't the night. And I think I could try hacking away at it more, but I think at this point I should just go to bed, man. And just deal with it tomorrow. And try to stay more balanced and in real life and do more real life thinking and less like micromanaging. It's just, it's really bad. I should be fucking hyper allergic to this micromanaging. Like whenever I detect it happening a little bit, just zoom out, say like, no, I don't wanna do this shit. How can we make sure I don't have to do this shit? 
I tried doing some browser automation with it, but it seems to be failing. And I tried making a Google account for it and got like blocked as a bot account. There's these things which I heard here online that you can do and then I tried and practiced and I just don't get it to work. I feel like the technology is not ready for it. The websites don't want you to automate them and use scripts. Like that part of the web is not made for like agentic automation yet. But then people tell me that their agents are doing it. They tell me online, but I don't know. I feel like I'm doing something wrong, but I don't know what it is. I need a better browser automation setup maybe and then a better like OpenClaw setup where I like, you can do more stuff autonomously so I can stay less in the loop. I don't know. Like people say on podcasts, like, oh, you just set up this Mac mini, tell it to like make your business and you just give it some credentials and it just does it. That's not my experience. They tell me like, oh, it has to call the restaurants so it figures out, you know, how to call it, gets like a number for itself and gets the like text to speech to like fake that it's a human calling. Like, is this real? Has it really been able to automate all that? Because a lot of those steps require going through like websites that don't have APIs. Like you have to do browser automation. It might require some like real identity identification for the number. I don't know. Like payment card to add for services. Like, is it real that they're actually doing this? I mean, I'm told it. I guess I haven't inspected it too closely. Like, is it actually real or is this just kind of like a myth that's kind of appeared and then just kind of like spreads online with like these extreme use cases. 
And actually like, it's kind of true, but it's actually, you know, not really explained how that person that said it was quite technical and they've given like their OpenClaw a very, a lot of like certain setup with credentials or browser automation to really enable that. And for like most, most, most people, it's really not gonna work like that. Maybe that's the case. I don't know. Or I'm just literally using it wrong or like not smart enough and I just don't know. So I would need someone to be like aware of my current workflow and then see how the best people, their workflow is and point out flaws in my workflow. And I mean, I wish OpenClaw could do that itself, but it's not set up yet. But I mean, if you OpenClaw processes voice memo in the future when we get this set up, then that's one of the things you can detect. Like if I ask you to help me just figure out new ways to like help me or the first to improve together and then this is clearly a frustration I'm having right now, right? Then you could do some research around that. Like, use the context from the frustration I'm having right now and then also what you can tell about our workflow, which you could infer from what I'm saying, but also you could check our chat history, you know, from that today or different days or the memories or whatever and what we've been working on. Especially the chat history might actually be relevant. The like actual chat history, session transcripts
a323137f219be4c713e2b0285cded2763c84856e69ddbc5bc01dbd6b1202e270_b26f8e6c8fd7.m4a
Tuesday, February 24, 2026
5:27 PM · 7:03
Essence

I want to implement a voice call feature with my AI assistant, OpenFlow, ensuring the main agent remains unblocked and responsive, and I need to determine the best way to achieve this without compromising future updates.

Summary

I'm looking to set up a voice call capability with OpenFlow, similar to what we discussed earlier today. The key is that the main agent should never be blocked; it should only delegate tasks to subagents and synthesize reports, always remaining present and responsive. I've seen others online implement voice call support for OpenFlow, but I'm unsure if these are official features or individual hacks. I suspect it should be a core system feature given its utility, like the existing Telegram channel integration. My concern is whether to build a custom solution now, risking future update compatibility, or wait for an official release. I need to research the current state of voice call support in OpenFlow's documentation, installed system, and online communities to understand if it's widely available or desired, and how to integrate it without creating future conflicts. It's also crucial that this voice mode works seamlessly from my phone, similar to how I currently use Telegram for text and voice messages, allowing for delegation to subagents while keeping the main agent unblocked. Ideally, the voice mode would be always-on, intelligently discerning when I'm addressing it versus speaking to others, and offer natural human-like communication with appropriate handling of pauses. This initial step involves research and planning to determine the best approach, or if waiting is the recommended course of action.

Full transcript
Yeah, I want to set up the ability to call with you, like, in voice mode or a voice call. We discussed it earlier today. I don't know if you have any memory of it, of like how I want the experience to be. Very briefly, the most important thing is that in a voice mode, it's very important that the main agent I'm talking to is never blocked, so essentially the only thing it does is delegation and like reading reports and synthesizing that into its uh presence, so that it's like always present, never busy, but still can run any work that OpenFlow can, just through issuing subagents either to do work or even just to aggregate information. And it can tell me, even if something takes like 10 seconds, it can tell me, yeah, I'm working on that thing, and the work is really running in a subagent or a subprocess or something, so I can keep talking with it, and it'll just let me know when that result is ready and we can like naturally integrate it into the conversation. Now, I've seen some people online showcase how they've gotten their OpenFlow to build like voice call support, but I don't know if this is a very official thing or if everyone has done it very individually in their own way, like just hacked it onto OpenFlow, because of course it's so powerful now that you can kind of have OpenFlow upgrade its own system like that. But it's also a feature that almost everybody would want, which is why I really think either it's part of the core system already and I'm just not aware, and it's like maybe in an update I don't have or it's in like an official skill I haven't installed or a plugin or something, or if not, I would expect it to come just in an update at some point, to be honest, because they already have like the Telegram channel as part of the core system, and it works so well.
I think like everybody would want a voice call like this, so I would assume they're gonna add it through these common channels also in the future, which is why, if that's the case, I'm not sure if I want to hack on my own solution, because then I don't know if I can like update it properly in the future because it's gonna kind of clash. But at the same time, I do want voice call capability right now, like I don't wanna wait if it's potentially waiting for more than a week. So uh we discussed this a little bit earlier today and you told me what the options were, but I wanna have this checked one more time, so I want it checked in the docs and whatever is installed in like the current OpenFlow system and also with online information, like online research, like what is the state of this? Is this something a lot of people have or not? Is this something a lot of people want? And how am I supposed to put this into my own system, given this concern of like I don't wanna hack in or engineer something that I could just get for free with no work from the official system, and I don't want it to clash with my stuff, but at the same time, if I'm likely to have to wait for a long time, then I'll just do it myself now. Um, and then it's important that it works, you know, kind of like from my phone as well, so right now I'm using Telegram obviously to send text messages or send voice messages and it's working great and I can even still delegate tasks to subagents that are running in the background so that I keep the main agent unblocked, so it's already working great. We even changed the behavior slightly today, so it's gonna more automatically assume to delegate something if it's a larger task without me even specifying it, which is nice, so it stays unblocked, so I don't block it by accident. Um, yeah, so I wanna know how should we, how should I think about this or how should we proceed with getting a voice mode?
Yes, I want it to also be accessible, you know, even if I'm not at the same place as the OpenFlow instance. If I'm just out and about with my phone, as long as the OpenFlow is online and connected and my phone is online and connected, I should be able to call it or to activate voice mode somehow. Also, ideally, not required, but ideally it could be kind of, uh, just smart. Like, it depends on how rigid you need to be in the dialogue or whether it could be very loose, because the coolest would be if I could have it kind of always on, where a lot of the day it's on, like via my phone or something, but it's kind of in the background, like it understands I'm not talking to it right now, so it's just listening. But then I could like talk to it at any point, and since it's listening all the time, it could understand kind of intuitively, oh, now I'm referring to it, as opposed to just talking out loud or talking to my friend or something. And ideally it has a very good, like, natural human talking slash communication understanding in terms of like pauses; maybe I pause in the middle of a sentence to think and I don't want it to suddenly jump in and interrupt me. But yeah, essentially the more human or natural it can feel, the better the UX. There might be some limitation in here, just like literally in the capabilities of AI models or AI voice models or the system, I don't know. It's also something to investigate. And then specifically of course in the context of OpenFlow and how people are using it or upgrading it. So we're probably gonna implement this, and this is again now another subagent, which is like probably a session we're gonna iterate on over time and get results; it just reports the results back in the main thread, so I can think about it and then, uh, send that same session off for the next iteration.
So this first step is just like a research-and-inform-me slash planning step about how it would suggest to proceed with this. Or the suggestion could literally just be to wait, but I mean, I doubt that.
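As a rough illustration of the delegation pattern described in this memo (the main agent never blocks; it only hands work to subagents and drains finished results into the conversation), here's a minimal sketch in Python asyncio. All names (`MainAgent`, `subagent`, `delegate`) are illustrative assumptions, not OpenFlow's actual API:

```python
import asyncio

# Sketch of the "never blocked" pattern: the main agent only
# delegates and collects results, so the conversation loop
# stays responsive. All names are illustrative.

async def subagent(task: str) -> str:
    # Stand-in for real delegated work (research, tool calls, ...).
    await asyncio.sleep(0.01)
    return f"done: {task}"

class MainAgent:
    def __init__(self):
        self.pending: dict[str, asyncio.Task] = {}

    def delegate(self, task: str) -> str:
        # Fire-and-forget: the main loop returns immediately.
        self.pending[task] = asyncio.create_task(subagent(task))
        return f"Working on '{task}' in the background."

    def ready_results(self) -> list[str]:
        # Drain whatever has finished; never await here.
        done = [k for k, t in self.pending.items() if t.done()]
        return [self.pending.pop(k).result() for k in done]

async def demo():
    agent = MainAgent()
    ack = agent.delegate("summarize inbox")
    assert "background" in ack          # caller is acknowledged instantly
    assert agent.ready_results() == []  # nothing finished yet
    await asyncio.sleep(0.05)
    return agent.ready_results()

results = asyncio.run(demo())
```

The key property is that `delegate` and `ready_results` never await anything, so the speaking loop can weave finished results into the dialogue whenever they appear.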
703452603dd6d1aa302fb405f810ce731c466ce062dc19d368892b8eb2aa572e_f84800b71113.m4a
Tuesday, February 24, 2026
5:09 PM ยท 16:07
Essence

The speaker is delegating tasks to set up automated, semi-live ingestion of personal data (voice memos, photos/videos, activity tracking) from their Mac's cloud storage bucket into a Supabase database for processing and analysis.

Summary

The speaker wants to delegate several parallel tasks, emphasizing that all current tasks should be delegated to keep them available for main chat. They anticipate multiple iterations for these tasks, so any sub-agent or session should be persistent. The immediate goal is to grant access to a cloud storage bucket, whose credentials are found in the `Jarvis` project's `.env` file on their Mac. This `Jarvis` project currently syncs iCloud data like voice memos and photos/videos to the cloud bucket semi-live, and also syncs activity data from the "Timing" app. The main job is to ensure their OpenClosest mirror has access to this cloud bucket and then set up nightly cron jobs to ingest this data into a Supabase database. There will be separate jobs for voice memos, photos/videos, and Timing data. The speaker notes that while some tables for this data exist in Supabase from past attempts, they are outdated, and it's fine to ditch existing data and rewrite the schema. For voice memos, the nightly job should transcribe them using the Whisper API (chunking long memos to avoid errors), store the transcript, and also generate a high-level summary using the OpenAI API. For photos and videos, the ingestion will involve creating textual descriptions using multimodal AI (like GPT-4 or Gemini for video) and storing these descriptions in the database. For videos, a transcript (ideally with speaker separation) and a visual description combining audio and video elements are desired. The speaker suggests starting with one sub-agent to handle the common initial exploration and understanding of the cloud bucket and Supabase structure, and once that's complete and approved, then instantiate three parallel sub-jobs for each of the data sources.

View full transcript
Now I want to kick off a couple of jobs in parallel. Essentially, every task you do now should get delegated so that you always stay available here in the main chat with me. Secondly, some of these things we're doing, I assume, are going to take multiple iterations to work on. So whatever you do for like a different session or a sub-agent or whatever it is, if there's a setting for it, it should be like a persistent session and not assumed to be like a one-off thing. It should be assumed that it finishes and then it reports back to you, you ask me for further feedback, which you then pass to it to continue or iterate. So, first of all, there's a cloud storage bucket I have set up for some files, which you haven't yet gotten access to. We have discussed it somewhat in our previous conversations. I don't know if you've saved any memories of it, but you've not gotten access yet. But now I want you to have access to it. The way you're going to get that is that there's a project on my Mac right now, locally, in the developer directory on my Mac. The project is called Jarvis, and inside there, in the .env file or .env.local file, you'll find all the credentials you need to connect to the cloud bucket. There are also scripts there that sync data to the bucket and potentially also read from it. You can look at those if you want for understanding, but they're not really relevant to the task. Then the system that is there right now, what it does from that Jarvis project, well, it does multiple things. But one of the things it does is that it uses my Mac as a gateway to sync some iCloud data to the cloud bucket, to make it available for any other service that I want to connect, because usually iCloud is a little bit locked down. So it sends my voice memos there, I think live, as they're created and stored in iCloud, or with a quite high frequency. So much more frequent than like a nightly sync.
It's more like, I usually record them on my phone, they sync with iCloud pretty much instantly, and therefore also download to my Mac. If my Mac is like awake and online, it pretty quickly just downloads them as well in the background, and then the script syncs them to the cloud bucket. So it's not instant, but it's kind of like live, it's like semi-live. Secondly, it does the same thing with my iCloud photos, or like photos or videos, everything from my camera roll, which is usually captured from my iPhone. It syncs to iCloud, and then there's a script on my Mac which checks iCloud. When there's anything new, it downloads it locally to the Mac first, then syncs it to the cloud bucket. It's like a temporary download to the Mac, then it sends it to the cloud bucket. And it also does that like semi-live, I think, with like an additional check at nighttime, where it just double checks that it's got everything in there. So that's just for context, and you can look at the scripts if you want to, but it's not that important. What's important is knowing what's in the cloud bucket and what's like synced there automatically. So that's why I'm giving the context. The third thing that's getting synced to the cloud bucket automatically is something called Timing, and it's a database. It's an application on my Mac that tracks my activity on my Mac, like which apps I'm using, or browsers, or even within the browser, like which windows I'm using or like tabs I'm on. It also integrates with iOS Screen Time, so it'll actually also pull in the screen time data from my iPhone into that one, which is cool. And that's also being synced to the cloud bucket. So in the cloud bucket, there's like two top-level folders. One is for these automatic syncs, which follow this structure. And there's another one, which is more just like random stuff that's dumped there. You can ignore that for now. We're focusing on the automatic syncs that we have set up.
Now the actual job that we're going to do is to make sure our OpenClosest mirror has access to the cloud bucket, and then we're going to run automations for ingesting this stuff, similar to how we already have kind of a script for ingesting some data from my YouTube watch history that runs like every night. Now, since the voice memos and photos do sync like semi-live, we could ingest them on a higher frequency, but for now, we're going to leave it as a nightly job, I think. Just for simplicity. So it's going to be similar to what we set up with some other things. It's going to be like a cron job that runs at night and checks the cloud bucket for new items. And there's going to be a separate job for each of these types: one for the voice memos, one for the photos library. It's weird when I say photos, it's really photos and videos, but like Photos is the brand name of the iCloud photos, right? And then another job for the Timing data that we're going to ingest into our Supabase database. For each of these, I have kind of done this in the past and I have somewhat had automations for it in the past, but then I ended up stopping using it. Therefore, our Supabase database does already have like tables for some of this stuff, but I think it's kind of outdated, not really updated. And for our purposes, we should not worry about legacy compatibility. It's completely fine to just ditch everything that's currently in the database, rewrite the entire schema and just have it work from now on into the future. Although if the data that's already there is like high value and we're just going to reprocess some of the stuff into the exact same thing, then we might as well keep the stuff we already have there. Now... for the voice memos, I want the processing that should be done at night, kind of that cron job that runs for those.
It should find them in the cloud inbox, whichever ones are not processed yet, which are gonna be new from that day. It should transcribe them using the Whisper API. Watch out for an error, which I feel like the AIs keep making: if they're too long, longer than the API allows, then it's gonna error. And you just work around that by like chunking, like splitting it into chunks, and then you just merge the transcripts after. And then that should be stored in our database. And in general, for anything we ingest like this, we're gonna keep a note of what the original data was. And then, since what we're storing is more processed, it's like a transcript, for example, we're gonna store what method was used to create the processed result. So in that case, it would be like, you know, it was originally a voice memo, and it was transcribed using OpenAI Whisper; or if it's processed some other way, perhaps we write the name of that type of processing or the script that was used or whatever. And in addition to storing this transcript, we'll also store with it another field, which is more like a summary, explaining the voice memo at a higher level. It explains kind of what this voice memo is and what's being discussed, or it could be a lot of different things, right? Like a lot of them are me thinking out loud, kind of journaling. Some of them might be, though, just a background recording while I'm like living my life. Could be different things. So that's why we have this like summary thing; it's a little bit easier to know what the content is without looking through the whole transcript. The summary can also be created with that OpenAI API call. Then for the media, like the photos and videos, when we're ingesting those, that's a separate automation and cron job. It finds them in the bucket, the new entries from the day, and ingests them.
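The chunk-and-merge workaround for long recordings mentioned above could look roughly like this. The 10-minute chunk length and the helper names are assumptions; in the real job, `transcribe_chunk` would wrap a call to OpenAI's audio transcription endpoint on an audio slice:

```python
# Hypothetical sketch of the chunk-then-merge workaround for long
# voice memos. The chunk length and helper names are assumptions;
# the real job would call OpenAI's audio transcription API on each
# chunk and concatenate the resulting text.

CHUNK_SECONDS = 10 * 60  # assumed safe chunk length

def chunk_spans(total_seconds: float, chunk: int = CHUNK_SECONDS):
    """Split a recording into (start, end) spans no longer than `chunk`."""
    spans, start = [], 0.0
    while start < total_seconds:
        spans.append((start, min(start + chunk, total_seconds)))
        start += chunk
    return spans

def transcribe_long_memo(total_seconds, transcribe_chunk):
    # `transcribe_chunk(start, end)` stands in for the per-chunk API call.
    return " ".join(transcribe_chunk(s, e) for s, e in chunk_spans(total_seconds))
```

In practice the audio would also need to be physically sliced (e.g. with ffmpeg) before each upload; only the span arithmetic is shown here.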
Make sure we're always tracking, for all the data sources, what media has been ingested or processed yet or not, so that, you know, we don't repeat processing and the actions are idempotent. Yeah. The ingestion we're going to do for the media will keep the originals, you know, just in the cloud bucket. We're not gonna duplicate them or anything, but we do want to save something in our database: we want to create kind of a textual description of what's there. So for every item, like photo or video, we'll do image analysis in an appropriate way. I don't know what the best way is, but I know, for example, if you use like GPT, what is it, 4o or whatever, like a multimodal thing where it can like natively understand images and then create a textual description, it's like very good at understanding what the actual image is. Then we'll use that to create the description. And then we're gonna store that in the database. Now, if it's a video, again, I would like to have the video transcribed, but also described more like visually, or it's like a combination of the audio and the visuals, right? So we'd need a model that can natively understand video. I'm not that well-informed on this, but from what I know, that's a little bit more rare, but I think some of the Gemini models can like actually do that. They like natively understand video. So that's something to, if you don't know, then just look that up in the implementation process, like which models should we use. I think the Gemini models could do that. So I would like to store in the database the transcript of the video, ideally separated by speakers if possible, but also like a description of the video, like what it is or what's happening in it or what the contents are, you know, based on like native video understanding, which should like combine both the audio and the video elements for like
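The "track what's been processed so the jobs are idempotent" idea could be sketched like this, using an in-memory SQLite table as a stand-in for the actual Supabase tables (table and column names are made up for illustration):

```python
import sqlite3

# Sketch of an ingestion ledger that makes nightly jobs idempotent:
# each (source, item_key) pair is processed at most once. The schema
# is illustrative, standing in for the real Supabase tables.

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE ingested (
        source   TEXT NOT NULL,   -- 'voice_memo' | 'photo' | 'timing'
        item_key TEXT NOT NULL,   -- e.g. the object path in the bucket
        method   TEXT NOT NULL,   -- how it was processed (e.g. 'whisper')
        PRIMARY KEY (source, item_key)
    )
""")

def ingest_once(source: str, item_key: str, method: str, process) -> bool:
    """Run `process(item_key)` only if (source, item_key) is new."""
    try:
        db.execute("INSERT INTO ingested VALUES (?, ?, ?)",
                   (source, item_key, method))
    except sqlite3.IntegrityError:
        return False  # already ingested: skip, keeping the job idempotent
    process(item_key)
    db.commit()
    return True
```

Recording the `method` column alongside each row also covers the memo's point about noting how each processed result was produced.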

These are, in a sense, kind of three different jobs. It's like one for each data source, for like setting up the system and the contract and everything. But there are also some common elements in that. First, we need the cloud bucket access and to kind of understand it, and to understand the existing Supabase structure. So I think what you can do is start one sub-agent for whatever is common between the tasks, the initial exploration and understanding things a little bit better. And then eventually, once that is finished, if it looks good and I've reviewed and approved it, then from that one thread, which is already kind of a sub-agent or a separate process, we're again going to instantiate three parallel sub-jobs under that one for the work for the three different data sources.
556eb722c39fa1f002aa905028c33b8f4516843f989f6c1f6c10bd0c0235d3c7_60469362efb5.m4a
Monday, February 23, 2026
8:13 PM ยท 78:27
Essence

The speaker is excited about integrating their OpenCore AI assistant with Strava to automate workout activity management, aiming to streamline their content creation process and daily tasks.

Summary

The speaker is confirming the functionality of their voice memo recording and then delves into their AI assistant's interaction with Strava. They're trying to get the assistant to pull more details about their workouts from Strava, specifically from the 'Heavy' app, and are troubleshooting why it hasn't responded yet. They also briefly mention organizing socks and preparing training gear for the next day. A significant portion of the memo is dedicated to the speaker's efforts to move their OpenCore instance, considering options like a cloud VPS or Mac Mini, and the complexities of reconfiguring OS permissions and API keys. They're unsure about how various web services handle device fingerprints or IP addresses when transferring keys. The speaker then returns to the Strava integration, noting that while the AI's analysis of their workout data is technically correct, it doesn't fully capture the nuances of intensity or actual progress. Despite this, they're enthusiastic about the natural language interface with the Strava API, which allows them to chat with their workout data and change activity types, a feature they previously achieved with other tools but now have more control over within their own ecosystem. This new capability is a source of excitement, especially for creating content, as it provides a concrete, cool thing to showcase, alleviating the daily pressure of finding new material. They successfully test changing a Strava activity type via the AI. The speaker then outlines their vision for the AI to proactively manage their Strava, automatically editing titles and descriptions based on their training plan, which the AI already knows. They discuss the technical challenges of setting this up, particularly regarding webhooks and their local OpenCore instance not always being online. 
They consider using a cron job to periodically check Strava or setting up a separate server to receive webhooks, but acknowledge the complexities of integrating a remote server with their local AI agent. The speaker gets distracted by organizing their room but refocuses on the Strava automation. They decide to try sending a message to their AI, Claude, to automate Strava management for the week's treadmill runs, including setting titles and descriptions based on their planned workouts. They plan to set this up as a cron job to avoid constant notifications and ensure it runs in the background. Before sending the message, they check their Codex subscription usage, noting they're running low on credits and plan to upgrade soon. Finally, they send a detailed message to Claude, instructing it to act as a Strava manager for the week, focusing on treadmill runs and a high-rock session, while ignoring strength workouts. They specify that Claude should check Strava every 30 minutes for new activities, process them in the background without notifications, and log all events. They also provide details on how to identify different workout types based on their Apple Watch and Strava classifications, and how to apply titles to the treadmill runs based on their current manual process.

View full transcript
Okay, so again, since when I started recording the voice message, it stopped the voice memo recording, which is expected to be honest, but it's nice to confirm that. It's asking for some more details on my workouts. It successfully found it on Strava, but it didn't check Heavy, so now I'm asking it to check Heavy as well. It should have like already understood that. Okay, let me organize my socks. I might need to buy new socks, but I have already ordered some and then I pulled out a lot of old ones from storage, so I actually have plenty. I gotta figure out which kind I like the best. I wanna pack, have like my training gear ready for tomorrow, even though I would have plenty of time to do it tomorrow. I think it's nice to do it today. Let's make sure my mic doesn't sleep again. That's a little bit of a mess with having it on my MacBook; like, my personal MacBook is not always gonna be plugged in and on unless I set it up like that, but that's not really how I usually use it. But I might start using it like that or get this set up on a different computer. Then I have to do the setup process again, which is a little bit annoying. If I wanna move my OpenCore instance either to a cloud VPS or a separate like Mac Mini or something, a lot of the configuration can be copied, but some things, especially with like the OS permissions, need to be reconfigured. And then for like external application access, it might all be covered if I just transfer the API keys and tokens and stuff like this, but I'm not sure if some of these API or web services also check like the device fingerprint or IP address or something, and therefore, like, even if I copy the keys to a new instance, I would have to like re-authenticate it. I'm not sure how that works. I think for most you don't, you can just continue with the keys you have, but for some services like Google Cloud Console, I'm guessing maybe you would have to run it again. I'm really not sure.
I got no response yet. It should have responded by now. I'm guessing it has, but that my phone is just not throwing the notification for some reason. Yep, it has. Just didn't make any noise. Okay. Let me take a picture just for fun. See, it compared my PRs this session to my last session actually in Heavy, which is cool. Cool proactive step, but this analysis, it's not actually plausible at all. Look at these percentage improvements, but it's more of a random variation, and it doesn't like cover everything about intensity in the session. And it would be way too big of a leap to be like actual progress. So it's just like saying some numbers, which, like, the analysis is technically correct, but it's just not actually like true that I've, you know, performed better like this. Like there's more factors and you gotta look at a much larger timeframe and average out the values a little bit and stuff. And this is cool as work. Like it's actually really cool. I can now chat with my workout data and it works across platforms, which is cool. Now I could kind of already do this a long time ago with, for example, the Bevel app, but now I have it more in my own ecosystem. It's more flexible, you know, kind of in other ways. So it's cool. I'm actually gonna try one more message, since I'm gonna change the workout type from walk to... I'm just typing a message to OpenCore right now. I'm asking it to change the workout type and I think it should be able to do that. And now it's cool. Now I actually have a natural language interface with the Strava API, which is sick. And I again kind of had this before. I've done it through an agent in make.com, but this here is more reliable and more under my control. So this is actually sick. I might make my short video tomorrow about just this because it's just kind of a nice thing. And it was kind of simple. Like I didn't even think about it that much, but it's actually one of those like nice cool things that improves the UX.
Now I don't actually need to open the Strava app anymore. And I still might do it, but now I can actually just dump all my info into my OpenCore instead and have it change the Strava activity for me if I want to. That's pretty cool. It's nice because I was kind of afraid of tomorrow coming, when I need to make my content again. As every day, it's just so annoying and I don't know what to make. And I feel like I have nothing cool to show off. Now I actually have this one thing, which is kind of cool. It's still a small thing because I could kind of do this a long time ago and whatever, whatever. But yeah, there is no, you know, requirement for the quality on these videos. And this is at least more cool than just showing that I've faced error messages for three hours. Awesome. It did it, it changed the Strava activity for me. Let me just double check in the app. Yep. It did it correctly. Cool. And then what I want to do, which I haven't set up yet, but which would also be feasible to set up right now or very soon. Like it's not that much work. I want it to kind of proactively be managing my Strava, in the sense that I want it to do what I do, which is like when I finish a workout, I then open the Strava app to edit the title, maybe the description, maybe the workout type. Maybe I'll like mute it or something. Usually I just change the title, maybe the description. Now Claude could definitely do that for me. And that was hard with the AI agent before, because it didn't know ahead of time what kind of workout I was doing unless I like explicitly told it. And that would be like extra management. But now this assistant can finally actually know, because I've told it ahead of time what the training plan for the week is. So it can just put that in and just assume that's what I did unless I tell it that I've done something else. So that's pretty cool. I could set that up right now actually with a quick prompt just to test it. I think I should do that.
So how's that gonna work? Because Strava can throw webhooks when something happens, but my client is not always awake, so it can't receive webhooks. And I don't think I have a public endpoint for it right now. I guess it still receives the Telegram messages somehow. I don't know how that works. I guess there's like one public endpoint for the gateway or something. I don't know how it works. So either I have to set it up as like a cron job, or with the heartbeats, so that it checks my Strava every time. That's kind of... I guess on a cron it wouldn't be that bad, it's just a simple API call. Or I would have to set it up to receive webhooks whenever a new activity happens on my Strava. But then I would have to publish it as a service somewhere, probably Railway. I know it's a good platform for that. There are probably other ones. I've used Railway before. I know it's modern. I know it works well. But then would that even help? Because even if I could have a Railway server running, listening for that webhook, it still doesn't have access to my OpenCore since it's local on my Mac, unless I again make sure it's always on so it can receive messages. So I guess it doesn't solve anything. It makes sure I can receive the Strava webhook, but then my OpenCore agent for managing it is still unavailable, so... I would have to put a Strava manager agent on that running server, and it would have to know the relevant context about my training plan or whatever. So I would have to inform my OpenCore that I want to set up this server for listening for new Strava activities. And when they come, run a Strava agent just for, like, social purposes, to potentially add data to the activity: maybe a title, maybe the description, maybe an image, although that's not really possible with the API, so I could do it with a browser automation workaround. That's like a more complex thing. And also I do kind of have it working already though.
I think I even set it up so nicely that it's literally just a webhook that you can send images to, and when they're received, my automation runs, like a batch automation, to put up a new Strava activity. But I haven't checked it in a long time, so I don't know. But if we ignore the image part, that server would have to be able to run an LLM agent itself, like an LLM agent, which it could probably do. I don't know how exactly I would set that up. Like, I'd have OpenCore figure that out for me. I don't know if you run like the Codex CLI or the Claude Code CLI, or if there's something else, like if there's more of an agent SDK integrated in somehow. I really don't know how that works. I just need to have an AI LLM model of some sort that it can use to think and reason, or just in natural language explain what it should put as a title and description for the workout. Then, um... Fucking... my thinking has just stopped. I'm kind of distracted, I guess. I'm trying to organize my room at the same time. It's going so slow because I'm trying to think; like, I'm really not doing any room organizing right now. I'll take a picture again. The socks are in this. I just started a little bit and then forgot about it again, thinking about this. Would this even be useful? No, not really. It's just kind of cool. I guess for

I set like the title and description, that it's like a treadmill, and the pace and incline and distance that I'm doing, even if the watch metrics might say slightly differently; I know exactly what I programmed into the treadmill. And then I can subjectively maybe add how I feel if I want to. And I can add that myself or message it to Claude. Both will work, but it could set the title automatically. Yeah, I'm gonna try to send a message to Claude to automate this for the week. I'm gonna ask it to do this for the treadmill runs that I do this week. It already knows my training plan. We'll just have to note that there's gonna be an extra one, which is tomorrow I'll do like a warm up before the Hyrox thing. That's also gonna be a treadmill run, but it's not gonna be as long as the other ones. I'll just tell it, you know, for the standard ones, once they're posted... I think I'll just set this on a cron job where it checks and runs like a sub-agent for that. Because I don't think the token usage is gonna be too heavy anyway, so I think it's fine. Yep, I'm gonna try that now, actually. Send that message to it. I could actually record it on the Mac so I can keep recording this voice memo on my iPhone, just keeping it continuous. That's cool. First I'm just checking my Codex subscription usage, because it's essentially out, or it's already out and I'm now overflowing into using credits instead. So I should just upgrade my Codex subscription, to be honest. But I told myself I'm first allowed to do it tomorrow. I have 59 credits left. I think it should suffice. And then tomorrow I'm allowed to upgrade the subscription if I want to. Okay. I'll send the message. Yo, Claude, in the future, I imagine one of the many things you're gonna do is just to be my Strava manager agent, just to kind of manage my Strava profile, which is just kind of like a social display of some of my activities. And so this week we're gonna test this a little bit.
You already know my training plan for the week, and all these activities get automatically put on my Strava. But I sometimes go in and edit the title or the description a little bit, and I wanna automate this, because you can do this for me. So just this week we're gonna play with it, and then, you know, maybe we can review how it went. So for now, when the automation is set up, let's make sure it kind of ends when the week ends. And in case I forget to tell you to like stop it or something, we're gonna have that as an actual stopping point. And for the workouts I'm doing this week, there's two kinds, right? There's the strength workouts in the gym, and then there's more like endurance type stuff, where I like run on the treadmill. And then maybe a third kind is the Hyrox session, which is, I guess, kind of strength and endurance. It's like high intensity. For the strength sessions, you don't need to do anything. You can just ignore them because they're already kind of automated. But for the treadmill runs and the Hyrox group session, I want you to do something. And so Strava can send webhook notifications whenever something is posted. However, I think since you're just running on my MacBook right now, which I sometimes put to sleep and stuff, you're not always online and you're not on a server. So I think we cannot actually set up a reliable listener to receive the webhooks. So I think what we're gonna do instead, please correct me if I'm wrong on that, but otherwise, what we're gonna do instead is just have you kind of check on a schedule, because I think the check would be so cheap anyways that it doesn't really matter. So you can just, like every 30 minutes, just check my Strava if there's any new activity posted. And if not, then it's fine. The job just kind of instantly finishes. But if there is, then it can continue.
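The decision logic for that scheduled check could be as small as this; the activity-type strings are assumptions based on the memo (Strava's actual type names should be verified against the account, as the memo itself notes):

```python
# Sketch of the per-activity decision for the scheduled Strava
# check described above. The type names are assumptions: strength
# sessions get skipped, treadmill runs and the generic 'Workout'
# type (how the Hyrox session likely appears) get retitled.

SKIP_TYPES = {"WeightTraining"}                  # strength: already automated
HANDLE_TYPES = {"Run", "VirtualRun", "Workout"}  # treadmill runs / Hyrox

def action_for(activity_type: str) -> str:
    if activity_type in SKIP_TYPES:
        return "ignore"
    if activity_type in HANDLE_TYPES:
        return "retitle"
    return "ignore"  # anything unexpected: do nothing, log only
```

A cron job would fetch recent activities, run each type through `action_for`, and only wake a sub-agent for the "retitle" cases, which keeps the 30-minute check cheap.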
And also I wanna make sure, whenever this happens, I don't wanna receive a notification. Like, my primary communication with you is the Telegram chat. I don't wanna receive a notification every time it checks, but after the fact, if I review this in the weekend, I do wanna have some like log of all the events that have happened and stuff. Well, that might already be automatically logged; just make sure that it's getting logged. But I don't wanna see this every time. And I also don't want it to block you. So if I potentially am having a conversation with you right then, and then suddenly this job kicks off, I don't want that to mess with our dialogue here. So I don't know how that works in OpenCore, but it should be in just like a background process or a separate agent or a subagent or something. Maybe a separate agent. I'm not exactly sure. Yeah, so the first part is essentially just a script that just like calls the Strava API and sees if I have any new activities there. And if I don't, then it's fine. It just stops. If I do, then look at the type. If it's a strength workout, again, just stop, just ignore it. But if it's not, if it's one of these treadmill runs, which I track as a... you can see how it's tracked on my Strava already, because it's gonna follow the same format as what I did last week, in terms of like workout types. The treadmill runs I record on my watch as an indoor run, and I'm not exactly sure which activity type is used on Strava, if they have the same one or a different one, because Apple's system has their own like set of types and Strava has their own set of types. So sometimes they match, but sometimes not. And then for the, um, Hyrox session, I do workout type 'other' on my Apple Watch, because that one tracks like heart rate but doesn't do GPS. I could also do like indoor run or something, but it isn't really a run, so I feel like it would be kind of misleading.
So I do just 'other', and on Strava, I think that corresponds to just like 'workout'. Like there's a workout type that's just called workout, which is just like generic. Usually it doesn't have GPS data, but I'm not sure. So you can check that on my Strava account from last week. Um, so that's how you identify what workout type it was once they're posted this week. And for each of these indoor runs that I'm doing for my plan, for my training plan this week, the easy runs, you'll see how I put the title on it today. And this is something I do manually after it's finished. After I've done the session, it syncs automatically pretty quickly and then I go on my phone and update the title. But now I want you to do this automatically for the other like easy treadmill runs that I'm gonna do this week as per the plan. So it's like the title with the duration and the distance and the speed and the incline, something like that. You'll see how it is, how I put it. Just put it in the same way, and then kind of do this because the watch data can be a little bit misleading, because it's hard for it to track on treadmills, like indoors. So I trust the treadmill numbers more, and that's why I kind of write this, because the watch usually overestimates the distance a little bit. So the actual activity data will show like a little bit further distance, but that's why I write it in the title, what the actual distance should be. There will also be an extra run probably, which will be a little bit of a wildcard. It's before the Hyrox session, so tomorrow I'm gonna do a warmup. We'll see what I get time for, but I'll do maybe like 20 minutes on the treadmill beforehand as a warmup. So that one should not get, you know, the title of a one-hour run or whatever, because it's clearly different. So you should recognize that, compared to the training plan, just based on the day and also the duration that is shorter; that one should just be titled warmup instead. 
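For what it's worth, the polling-and-filtering logic described above could be sketched roughly like this. Everything concrete here is an assumption on top of the memo: the `sport_type` values ("WeightTraining", "Run", "Workout"), the `trainer` flag as the treadmill marker, and the `STRAVA_TOKEN` environment variable are guesses at the mapping the speaker is explicitly unsure about, not a confirmed setup.

```python
import json
import os
import urllib.parse
import urllib.request

STRAVA_API = "https://www.strava.com/api/v3/athlete/activities"

def decide_action(activity: dict) -> str:
    """Map one Strava activity to what the automation should do with it."""
    sport = activity.get("sport_type") or activity.get("type", "")
    if sport == "WeightTraining":
        return "ignore"          # strength sessions are handled elsewhere
    if sport == "Run" and activity.get("trainer"):
        return "retitle_run"     # indoor/treadmill run: rewrite the title
    if sport == "Workout":
        return "retitle_hyrox"   # generic no-GPS workout: the Hyrox session
    return "ignore"              # outdoor runs etc. stay untouched

def fetch_new_activities(after_epoch: int) -> list:
    """One cheap poll: only activities posted since the last check."""
    query = urllib.parse.urlencode({"after": after_epoch, "per_page": 10})
    req = urllib.request.Request(
        f"{STRAVA_API}?{query}",
        headers={"Authorization": f"Bearer {os.environ['STRAVA_TOKEN']}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

A cron job every 30 minutes would call `fetch_new_activities` with the timestamp of the last check and run `decide_action` on each result, so most runs finish instantly with nothing to do.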
But for that one, I am actually going to use the same speed and incline on the treadmill as I'm using for the easy runs. So on that one, just call it warmup. But in the description, you can write the same metrics. Just don't mess with the duration, because the duration is gonna be shorter. Just follow the real workout duration, but you can put in text what the actual speed and incline was, since I told you now. And yeah, that's it. So that's the whole plan. That's this automation that I want to run successfully this week. Please set it up. And I was thinking whether this actually even needs kind of an AI agent, like an LLM agent, or whether it could be like a pure programmatic script. And I'm realizing, I guess it could actually be a pure programmatic script, but it could also have like an AI agent that works with it a little bit. Both are fine. I don't really care. Whatever works the best, but also like we've got to think about reliability. And yeah, it's such a small job anyways. So just whatever is like the easiest and most reliable, or whatever is like how OpenClaw is supposed to work, how it makes sense for you to set up this job and automation. And yeah, I don't want you to notify me every time you check, but you could send me just a quick message on Telegram every time you have actually updated a title or description of a workout. That would be useful for me to know, just like a quick acknowledgement, like, hey, I saw this new thing was posted on Strava and I understood
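The title logic for the easy runs versus the shorter warmup could look something like this sketch. The exact title format is unknown (the speaker only says "you'll see how I put it"), so the string below is a placeholder, and the 60%-of-planned-duration threshold for recognizing the warmup is an invented heuristic, not something from the memo.

```python
def make_title(planned: dict, actual_minutes: float) -> str:
    """Title an indoor run from the plan's treadmill numbers.

    `planned` is a hypothetical dict with the week's plan for that day,
    e.g. {"minutes": 60, "km": 8.0, "speed_kmh": 8.0, "incline_pct": 1.0}.
    """
    # Much shorter than the planned easy run: it's the pre-Hyrox warmup.
    # It keeps the real (short) duration; the metrics go in the description.
    if actual_minutes < 0.6 * planned["minutes"]:
        return "Warmup"
    # Otherwise trust the treadmill numbers over the watch, which tends to
    # overestimate distance indoors, and write them into the title.
    return (f"{planned['minutes']:.0f}min · {planned['km']:.1f}km · "
            f"{planned['speed_kmh']:.1f}km/h @ {planned['incline_pct']:.1f}%")
```

The key design point from the memo is that the title comes from the plan (the treadmill settings), not from the recorded activity data.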

Yeah, or if you drink too much, I heard somebody, people warned you can get this like clicking sensation in your jaw, and then if you continue pushing through that, then it's really bad. Um, yeah, I think it's just overblown, like fear mongering. I'm quickly gonna do some vacuuming also. So that's gonna sound horrible in the voice recording, but again, I think it's fine to just let it run, actually. Why not? Oh, my Claw has responded. Let's see what it wrote. Strava manager update. Wait, it edited activities now? See, that's not what I wanted at all. It was supposed to run for the future activities this week. Damn, okay. That's the first time I really messed up with Claw. Or, or no, maybe it hasn't. It's just making the rule. Oh yeah, that's something I didn't specify. I would like them, I see the muting. Um, damn, I'm a little bit confused right here whether it went ahead and edited existing Strava activities or not. Yeah. Yeah, okay, it warned me, at least that's good. Okay. So it's set up a cron job. Okay, that's kind of what I expected. It only sends me messages when there actually is an activity. Logs state locally. Okay, that's good. Background job, so it actually won't interrupt our chat. Okay, that's good. It logs in these two files. Okay, hard stop after this week. That's good. But then it says, important transparency on first run: it went back through the recent activities and edited 12 older entries to the new naming scheme. And then it says it noticed that and already tightened the scope guard, so from now on it only processes activities from this evening onward. And then it suggests to do a cleanup. Okay, that's actually perfect. And now I'm gonna dictate a message back. Oh yeah, I definitely didn't want it to edit any existing activities, only what happens from now on and in the future for this week. So yeah, you got to undo all those edits. Also, there's one thing I forgot to mention, which I think is not breaking, but it's important to know. 
So you might make a small edit to the automation for this. It's that as you see, a lot of the activities have this kind of mute emoji in the title. And not all of my activities have these, but a lot of them have it. And the activities that have that in the title, they are muted, which is a property on Strava activities, which you can choose to toggle on or off on an activity. And when it's muted, it's not gonna appear in your followers' feeds, but it's still public. It's kind of somewhere in between public and private activities, where it's still public if they go on your profile, they can see it, but it does not appear in their feeds. And I'm using this on a lot of my boring activities on Strava. So all of my gym sessions, which don't even really belong on Strava, I'm undecided, but for now I'm putting them there. And as I mentioned previous, I have a separate kind of automation on them. But also I do it on like indoor runs, on treadmill runs, and I also do it on the group sessions, sometimes, not always, it depends a little bit on how I feel. I think right now I have it automated to do it. But when I run like outside, for example, I'm not gonna mute it. When I do that again, outside activity, you know, it's part of the Strava social network or social media that you post on public and then let them appear in people's feeds. So it's important for the automation to know that, you know, some activities are gonna have this in the title and they're gonna be like muted. But really the work it should do should just be, it should just ignore this. So if it's not in the title already, then our automation should just, you know, edit it, put the new title or whatever if it's editing. But if that is in the title, then it should leave that to still be in the title at the start and then it just kind of edits the title that comes after that mute emoji and the space. I'm not sure if there's a space. Yeah, I think there is. 
So when I said edit the title, it should kind of like leave the emoji if it's there, or if it's not there, then it doesn't matter, but edit like the title that comes just after. In the same way as you'll see that I've done it this week, for example. Because the way they get muted, I'm not even sure if this is available through the API. I think last time I checked, it actually wasn't available through the API. But I'm using a different tool called Strautomator to do this. And I'm not sure how it does it if it's not enabled through the API. I actually don't know how they do it. Maybe they got a unique permission from Strava or they have some cool unique workaround. But I have it set to automatically mute certain types of activities, like strength workouts or just like any session without GPS data. And it's usually very quick because it listens on webhooks. So it will probably always have done this before our automation checks on a workout, although it's not guaranteed. So there could potentially be a race condition issue, but I don't think it's something we need to worry about in practice. But yeah, that's just important context to know. So just make sure the automation is going to work given this fact. Awesome. I just sent that prompt as well. I also see it didn't use that much of my credits, so I'm still probably not gonna use up my usage before the weekly usage limit resets, which is in two days actually. So if I want to chat more with my OpenClaw tomorrow, then I would definitely use it up. So at that point, I might just upgrade the subscription. Tonight will be the last night, so I'll have to like think about it a little bit. Also, I got to finish up my stuff because I need to go to bed soon. Okay. Some more vacuuming. Cool. Finished the vacuuming. Just put away the socks. I'm realizing time is a little bit running away from me now. That's all right, man. Because I have been very proper today. 
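The mute-emoji rule from above is simple to make explicit: keep a leading mute marker (and its space) if present, otherwise just write the new title. A minimal sketch, assuming the marker is the 🔇 character; which emoji the titles actually use isn't stated in the memo.

```python
MUTE = "\U0001F507"  # 🔇, assumed to be the "muted" marker some titles start with

def retitle(old_title: str, new_title: str) -> str:
    """Replace the title text but preserve a leading mute emoji, if any."""
    if old_title.startswith(MUTE):
        return f"{MUTE} {new_title}"
    return new_title
```

This keeps the automation orthogonal to Strautomator's muting: it never adds or removes the marker, it only edits the text after it.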
Stayed off the screen. Or, lost a lot of time in the beginning of the day, but in general, then, you know, not too much on screens. Got real life stuff done. Just doing this final texting with OpenClaw now, but then I'll put the computer away and just do a little bit more real life stuff. Maybe prepare some outfits and gym fits and just the gym bag in general for tomorrow. I'm realizing, as I'm doing this with the Meta glasses, that it's such a nice combo of being able to record audio all the time, while I cannot do video all the time in a smooth format. But since I can do audio essentially all the time and then very easily take a quick video or take like a lot of photos with the POV, we're starting to get closer to this. Like, I'm just thinking about how with AI, we're mostly seeing developments with these LLMs, but there have also been some huge developments, I think, especially from Google, in these like world models, where like kind of an AI model actually kind of understands the 3D world. And so if I now started like capturing this stuff, pictures and videos, POV, then that's like data that, it's much further away, but it's eventually gonna feed into what's gonna be like this kind of detailed world model that I can make from my POV. Or make a world model for like the environment I'm usually in. So it's actually data that's just kind of cluttered right now, but that will eventually be easier to process and actually use. Okay, let's see, Claw responds. Reverted the edits. I should have checked my Strava profile before, actually, to see the incorrect edits that it made, but now it's too late. Now it's fixed already. Yeah, it looks like the same as it was, so. By the way, I don't remember if I mentioned the case where the activity I post is the kind of Hyrox group session. Did I tell you how to edit the activity in that case? 
Like did we set that up in the automation, or as of how it's set up right now, what's it gonna do in that case from the setup right now? I'll just ask one more voice message. I just got uncertain about this one case because I think I forgot to think about it. I think I mentioned the runs and the strength sessions and the warmup for the Hyrox, but not the actual Hyrox session. I'm gonna send one more dictation message. Alright, okay, for that session specifically, I want the title to be set the way I did it this week on my Strava. So go ahead and check that and then program that into the automation. Or sorry, I mean the title should be the way I set it this week. It's Norwegian, like 'SATS gruppetime', and then in the description, it should write what I typed for the session, so it's just gonna be 'performance Hyrox' in the description. And I see the dictation actually spelled Hyrox wrong. So I'm wondering now if it's gonna understand, because my dictation wrote it with an I, but it's actually with a Y. Let's see the response. And yeah, then I need to put the computer away. I keep saying that I'm gonna do it and then I don't do it. I'm gonna ask you one more thing actually. By the way, since I said I was uncertain about whether this is like an AI agent doing the job or just a pure script, could you tell me a little bit more in detail about how you chose to set this automation up and how it's gonna work in practice? Let's see what it writes.

Now I'm thinking in the back of my mind, okay, I need to get off the computer, I need to get off the computer, I need to get off the computer. I'm reading through this once. Yeah, it wasn't completely clear, but I think it's like more of an agent prompt rather than like script instructions, but it's like a structured agent prompt. I have another question now that I just realized, so I'm going to dictate this. Okay, by the way, what happens if I put my Mac to sleep for many hours, so my OpenClaw also goes to sleep; while it's sleeping, I'm doing the workouts, and then let's say like a couple hours later, I wake up my Mac again, and the OpenClaw wakes up. Like what happens when this automation runs? Is it gonna only check the last 30 minutes, or is it gonna understand that it's been longer and therefore it should consider a longer time frame, or how does it work? It fetches the 20 latest activities, that's ridiculous. Yeah, but yeah, it doesn't matter. It's gonna work. That's why it went and edited the previous activities, but then it set in the filter. And I'm gonna say that that parameter of like fetching the 20 latest activities is ridiculous, since I'm only doing how many, maybe like 10 in a week. But I mean, since you put the filter for like this week and you have the logs and the files for checking which ones you've processed so far, I guess it doesn't matter. It's gonna work anyways. So I just wanted to note that it was kind of ridiculous, but I think we can just leave it as this because it's gonna work, right? Awesome. Let me take a picture of this just for fun. I do feel a little bit like Iron Man when I'm like taking these POV pictures. Let me also check the OpenClaw dashboard actually for the cron job, just for fun. Yeah, I keep saying I'm gonna put the computer away and then I'm gonna do one more thing, one more thing, okay. One job, next run in 10 minutes. Here I can make a job actually on an agent message, cool. 
Here's the job manager. Okay, and here I can see the prompt. That's cool. And this one has run once. It was set up 15 minutes ago. And then it did this, but then we didn't want that, so we have undone that later and that's fine. There we go. It stops itself after this week. That's cool. And then it's just a prompt with a number of like rules. And so it's gonna run an agent. Okay. This is like natural language programming. Cool. So I'm not reading the full thing. I'm just assuming that the logic is correct, which I think I can trust the AI for. Let me take a picture of this as well. Just for fun. All right, take a picture with the Meta glasses. Does this appear in my photo library as I'm recording this? I think it's actually going to. Yep, it does. Not the latest one, but some of the ones I took. And these are from an hour ago. These are from before I started this voice recording. So maybe the Bluetooth channel is busy now. That's why it's not happening. Then I think the videos only happen if I connect my phone to the Wi-Fi from the glasses, which is like a manual step, or it's like when I put them to charge, then they connect to the Wi-Fi in my house and then they sync it. Which is also pretty cool. All right, put the computer to sleep then. OpenClaw is getting more powerful, and this crawler thing I can do for the short form video tomorrow. Just something simple. Charger. I'm going to clean it up just for fun again. I'm going to take off the glasses quickly because I want to take off my shirt. I wonder how that's going to work with the microphone selection as I take the glasses off. If they're going to disconnect. Let's see. Yeah, it looks like they're still connected. They're just lying on the table right now. That's cool. Maybe it preserved the connection. Maybe that's more for the other features. And when I take them off, maybe it's always connected to the phone via Bluetooth and the microphone is available even when you take them off. I don't know. 
I heard the activation noise. Now I'll take a picture of how my screen looks with the audio selection. Let's take a picture of how I look as well. Cool. Just gonna take a piss quickly. Looking jacked. I've made super little progress in actually doing stuff in my room because I'm preoccupied in my thoughts with doing this recording and the OpenClaw messages. And so now I'll maybe yap a little bit less and just try to do more stuff. I'm trying to prepare a little bit for tomorrow. I need to brush my teeth and then I'm ready to go to bed. And I'm starting to feel pretty sleepy, which is good. So I think I'm going to bed within the next hour probably. So I can wake up nice and early tomorrow as well. Then tomorrow is a little bit of a mess since the group session is kind of late. Like it literally ends, yeah, almost now. It ended like 30 minutes ago, but tomorrow, yeah. So it kind of messes with my normal sleep schedule that I want to have. But it's a good social opportunity, and now I'm a little bit of a familiar face there. And I see other familiar faces as well. So that's good. Some more thoughts from today, by the way. I woke up, had a few more pimples today. They're like small, not really making me that self-conscious, but it's just like annoying. It just feels so unnecessary. And I have issues with the skin in my face being kind of dry, and my lips, lips more dry, skin in the face a little bit. Especially like when I wake up in the morning and then I like go and eat, I'll like kind of see dryness, dry like flakes, or it almost looks like snow or powder around my mouth or my cheeks and stuff from the dry skin. I hate it. And also I guess after I came home, like after the gym and then the warm shower there and then the cold dry air outside, I saw the same kind of dryness after I came home and ate. I also wanted to note and document what I ate today. 
Uh, it was a little bit of a mess today actually. Um, because I didn't know how fast I was going to get out of the house, but since the first thing was going to be running, I didn't want to eat too close to that. And initially when I woke up, I really wasn't hungry, so I thought I was just going to skip breakfast, but then I ended up sitting at home working on Claw longer than expected, on this bug. Therefore I wanted to eat something, and so I just had fruit. I just ate like, uh, an orange and an apple, I think. That was kind of my breakfast. And then I stayed up even a little bit longer before I finally did the workout. So I really ate very little before that session. Then after the run, I drank the recovery shakes, of course, but I haven't had any like hunger or energy issues, so. I don't know, I think I ate a lot of food in the weekend, especially Sunday. I was kind of like stuffing myself a little bit with a proper dinner from my mom, because I figured I wanted to like fuel the body and whatever, and it was good food and protein rich, so I just like kind of forced down maybe a little bit more than usual. So maybe I just therefore naturally didn't need to eat as much today. Also, my weight was like alarmingly high when I weighed myself yesterday. I think I failed when I weighed myself today; I think there's this like weird bug on the watch when it's in sleep mode where it just kind of cancels right before it's logged. So I don't remember what the weight was today. I think it didn't get logged, but from yesterday, it was kind of like scarily high, I think. Like it's been increasing a little bit faster than I would like, but still, who knows, it might just be the creatine. So I'm not going to worry too much about it, just keep making measurements every day for like a week before I can think more critically about it. Or at least, like, I think a week, maybe a little bit earlier, like half a week from now. 
Maybe like once Wednesday comes around, I can like start thinking about a little bit more, but at least not now. And maybe just wait till next week and it's easier.

Yo. So now it's working, or what? That's completely unreal. Thanks. I do have a magic touch with technology, haven't you noticed? But did you try very hard? I don't understand this concern about not using up more power if it won't even start in the first place. I mean, isn't it already empty now? What's the point of not continuing? Yeah. Maybe that's what happened just now? You just got forwarded.

Huh? Do you have to do these things too? Is the expectation that everyone is out? No. Over a whole day, for example, or right now? What about women? It's probably less, right? The sixties? No, not among young people, that is. Interesting. Interesting, really. I was just walking into the living room and the kitchen to put away some stuff, and then I talked a little bit with my family. And now we're doing this. I don't know if the audio got captured since my phone was in the room, so I don't know how the Bluetooth worked. Actually, I want to go and listen back to that just to see. I'm guessing I actually probably captured everything fine. And also I think it makes sense for me now. It's just, this recording has me distracted a little bit. I think it's because I'm doing it too actively. I could leave it more passive. Like I could just talk less, stay more in reality, but still say things once they come to mind. But I think I'm just talking a little bit too much for now, so I'm gonna stop it. And then I still have my phone if I wanna record some more notes, but I might just put it away for the day and maybe try to go to bed soon and just read a little bit to fall asleep. I packed my training bag, I'm gonna do a little bit more like, yeah, reorganization in the room, or planning of like outfits. I don't know. I'm gonna stop the recording here.
3d30577e80c9a043588ae93ddef5d8439ec1e4ad084b63a7f11414736d876d8c_71c9feaa090a.m4a
Monday, February 23, 2026
8:11 PM · 1:36
Essence

The speaker is troubleshooting an OpenClaw instance that went to sleep, trying to wake it up and test its interaction with a voice memo recording.

Summary

The speaker encountered an issue where starting a voice message automatically stopped their voice memo recording. They also realized their OpenClaw instance was sleeping because their Mac had been closed and wasn't charging. After reopening their Mac, they observed no activity in Telegram from the instance and wondered how to wake it up, noting it usually wakes automatically. They checked the web dashboard and confirmed no Telegram messages had been received. The speaker then took a POV picture using a new workflow, and shortly after, the OpenClaw instance woke up, indicated by typing activity in Telegram, which they clarified meant it was working, not necessarily typing a response yet. They then planned to send a new message and test keyboard dictation while continuing to record the voice memo.

View full transcript
Okay, I couldn't do both at the same time; when I started recording the voice message, it stopped the voice memo automatically. Also realized my OpenClaw instance was sleeping because I closed my Mac and it's not charging, but I just opened it again, so I don't know if it's going to wake up automatically. It doesn't look like there's any activity in the Telegram. Hmm, I wonder what I need to do to like wake it up. Usually it just wakes up automatically, I feel like. I will check the new web dashboard. It doesn't look like it's received my Telegram message. I'm gonna take a picture right now. This is a nice workflow, to be able to take a POV picture instead of taking a picture with my phone. Oh, now it's woken up, I see it's typing in Telegram, which is a little bit misleading because it's not necessarily typing the response yet, it's just doing work, but at least it shows that it's awake. Oh, now it actually was typing. I'm gonna send a new message to it. Now I'm going to try the keyboard dictation and see how that interacts with the voice memo that I'm recording. So I'll start keyboard dictation in 3, 2, 1.
83bff1c5ba3827dd1467e6cd1dfa6894d76c8268d263079717f1cab6af37e2bb_087213b5bc9f.m4a
Monday, February 23, 2026
7:58 PM · 12:14
Essence

The speaker is organizing their room while testing the Meta glasses' recording capabilities, realizing a powerful combination for continuous audio and intermittent video/photo capture, and reflecting on their progress and future plans for their 'OpenClaw' personal AI system.

Summary

The speaker is tidying their room and experimenting with their Meta glasses, discovering that they can use the glasses as an external microphone for continuous iPhone voice memo recordings while simultaneously capturing video and photos with the glasses' built-in features. This realization excites them as it offers a discreet, hands-free method for all-day audio recording, potentially replacing their current logging methods and integrating with their personal AI system. They envision using this setup for activities like gym workouts, allowing for continuous audio capture with easy photo/video additions. They then shift to thoughts on their OpenClaw system, inspired by a YouTube interview with someone who has built a highly functional personal AI named Felix. The speaker is particularly interested in improving their system's memory and knowledge organization, having learned about a framework for memory enhancement from the interview. They've made progress by setting up access to the YouTube Data Portability API to ingest their watch history and video transcripts, aiming to process this data overnight. Their goal is to enable their OpenClaw to answer questions about their consumed content, allowing them to query specific videos or topics they've watched. They also briefly mention wanting to chat with their workout data in natural language, and attempt to send a voice message while recording to test multi-tasking capabilities.

View full transcript
And now I'm gonna do a recording, dump some thoughts as I'm just doing some room organization. First of all, now I did test the Meta glasses. I'm wearing them right now recording with the Meta glasses, or with my iPhone voice memos app, but then it's using an external microphone, which is the glasses. So I guess I'm kind of recording with the phone, but using the glasses as a microphone. Interestingly enough, I tried the video recording feature on the glasses, and it captured also audio in parallel while still leaving the voice memo running also with audio. So it's like none of them blocked each other. So that was really cool. But I'm just gonna, yeah, keep recording this. So, thoughts around the OpenClaw system, I guess. I just gotta figure out how I'm gonna organize the room. Take a picture right now. Things start to get interesting, okay. So, yeah, I have some training clothes from today, which have aired out a lot, so they're done now, I think. And then socks from the laundry yesterday, which now need to be organized. I just took a picture of that. We have the, what's it called, my drawers now. Boxes, shelves, so we've got a bunch of stuff. Take a picture of that. And then we have the back wall here. Just a gym bag on the floor to clean up, and the shirt maybe, and the hangers. Volume is a little bit loud on the glasses, but I think I can live with it for now. It's just when I take the photo, I hear this confirmation sound, but I don't really know how to control it. Let me try a picture now. Still super loud. Whatever, it's fine. There's this like touch surface on the right side, but I haven't understood the gestures fully, or I'm not precise enough with them, I don't know. I'm realizing right now this could be a cool combo. Like the Meta glasses have a limit on video recording, only a couple of minutes at a time. But you can actually run continuous voice memo recording, so audio recording, essentially all the time. I didn't realize that before right now, actually. 
That's one way to have audio recording. Oh, fuck, that's a cool realization. Because the light indicator on the glasses doesn't light up for this audio recording, since it's really my phone doing it. And, well really, this solves kind of my problem, because I was thinking about maybe buying a device that can like record audio all day. This doesn't quite solve it, but it's a good step in that direction. I can really just use this combo, my iPhone with the glasses, or potentially maybe even Apple Watch with the glasses, if it connects via Bluetooth, but I maybe don't think it does. And then you can record for probably like an hour at a time, audio, and you have the mic like right on your face, so it's not obstructed by anything, but still nobody's going to think about it, because it's just glasses. And then you can easily add photos or videos by just using the button to click for photos or hold for videos. And you still have the continuous audio recording, but you can also get a video for a couple of minutes if you want to, or a picture. Then that's actually cool. I should try this like in the gym next or something, maybe. Yep, it could replace my whole like app logging flow. Oh, that would be cool. You can like wear your assistant on your face and then you can do the call also with the assistant from the glasses. Then it starts getting kind of real. And then if I could integrate that with a heads-up display, that would be sick, but I don't have that in these glasses, so. Future concern. That was a cool realization actually, these glasses, specifically since they have a microphone and speaker in them. Especially the microphone, or the camera also. It makes it kind of elite for the system that I'm trying to set up, actually. Okay, that's cool. So I need to actually clean the room though, or just like get my stuff done. So one of the YouTube videos I watched today, or I listened to it as I was not planning. It was kind of like an interview. 
I think it was just like a Zoom call or something. It was kind of an interview. So the interviewer, I don't know anything about him, but there was some like Asian dude who was like playing with OpenClaw and hasn't really gotten it to do anything powerful. Then the interview subject was this dude who's gotten his OpenClaw to do a ton of stuff. His OpenClaw is called Felix. It has like its own Twitter account that has gotten a lot of attention. There's like its own crypto coin for Felix, although I don't think he made it himself. Like the crypto community did. So he makes a little bit of money off of it, although it wasn't really his thing. Which is good, because I think the whole like crypto thing, I'm not paying attention to it and also a lot of it is like scammy in its nature, which is not good. And, um, this dude had his OpenClaw like spinning up new apps all the time, like full mobile apps and web apps, making products. And also he has like a very powerful memory system in there, and he had ported a lot of his knowledge and second brain from Notion and Obsidian into it. And he had improved the memory system a little bit over the default setup. And he's also like selling a service where he kind of sells his OpenClaw setup to people who just want it more powerful quickly. I don't think I need that for myself, but he did mention some things in the video about how he's made the memory better, and it seemed quite valuable. But you know, since I'm just listening to this in the background, I didn't capture all the details. And he mentioned this framework from, some name of some person, which, like, I don't know what this is. I'm guessing it's a framework for either like knowledge organization in general, memory organization, or it's specifically a framework for the age of AI, like a memory framework for an LLM maybe. I don't know. Could be easily inferred by looking at the transcript of the video. 
So that's what I'm hoping I can get my open cloud system to do, but it's not really set up yet. Well, I have now set up the access to the data portability API, so it can get my YouTube activity, which contains my YouTube watch history. But there's a limit on it because it's not really meant for this. I'm not sure exactly what it's meant to be used for and what's like really breaking it. But I think there's a max frequency of around like once per day that you can request it. And since I was messing with it this morning, I can't do it again. But I think I should probably set it up. I mean, that's like the max frequency before it puts you on a cooldown. Maybe it's 24 hours, although I hope it's like 23 hours or something so that I can actually run it at the same time every day. So I'll have to test that tomorrow, although I can't test it tomorrow during the day because then there'll be a new cooldown to the next day. So I think I wanna set it up, like instruct my open cloud tomorrow so that when the night comes and it's like I'm clearly asleep, it's like 2 a.m. or something, or 4 a.m. maybe, then it goes ahead and issues the API request, gets my YouTube history from the previous day, gets the transcript from every video, stores it in the database, also just stores in the database like which videos I watched, of course, so it would never need to request that data from YouTube again. But also does some further like processing or ingestion where it like gets the transcript from each video so you can know more about what I've consumed. And in addition to just this transcript perhaps, there's another processing step of like summarizing it or pulling out key points and stuff and putting that maybe in a separate text field in the database.
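The night job described here could be sketched roughly as follows. This is a minimal Python sketch, not the actual setup: `fetch_watch_history`, `fetch_transcript`, and `summarize` are hypothetical stand-ins for the real data portability client, a transcript fetcher, and an LLM summarization pass, and the 24-hour cooldown guard just reflects the apparent once-per-day limit mentioned above:

```python
from datetime import datetime, timedelta

# The export endpoint seems to allow roughly one request per day,
# so the job only fires if the last request is old enough.
COOLDOWN = timedelta(hours=24)

def should_request(last_request: datetime, now: datetime,
                   cooldown: timedelta = COOLDOWN) -> bool:
    """True if the cooldown since the previous export request has passed."""
    return now - last_request >= cooldown

# --- stand-ins for the real clients (hypothetical, for illustration) ---

def fetch_watch_history():
    # Would call the data portability export and return yesterday's videos.
    return [{"id": "abc123", "title": "Some interview",
             "watched_at": "2026-02-25"}]

def fetch_transcript(video_id: str) -> str:
    return f"transcript of {video_id}"  # would call a transcript fetcher

def summarize(text: str) -> str:
    return text[:40]  # would be an LLM pass producing the summary field

def nightly_ingest(now: datetime, last_request: datetime,
                   state: dict) -> dict:
    """One pass of the night job: fetch history, store it, then enrich."""
    if not should_request(last_request, now):
        return state  # still on cooldown; try again the next night
    for video in fetch_watch_history():
        if video["id"] in state:
            continue  # never request data about a known video again
        transcript = fetch_transcript(video["id"])
        state[video["id"]] = {
            "title": video["title"],
            "watched_at": video["watched_at"],
            "transcript": transcript,
            "summary": summarize(transcript),  # separate text field
        }
    return state
```

Scheduled at, say, 4 a.m., the cooldown check means a missed or early run simply no-ops and the next night picks it up, which matches the "run it at the same time every day" constraint.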
And then, well, the database is cool and then we at least have the data, but really I want it to be just as like integrated in the memory of open cloud as anything else or at least a way for us to search it quickly so that I can ask it a question anytime from, you know, the next day or later, which is like about this one video I watched, describe kind of what it is, what the content is or the title or perhaps just when I watched it, like describe the video in any way where it should be able to determine which video I'm referring to. And then I can ask it about anything I heard in that video to expand on it or to reiterate or to do more research on that thing. I really want this workflow, but I don't have it yet. I need to get that set up. Then I do also just wanna be able to like chat with my workout data in natural language, but I guess I have that already. Let's actually test it right now. Could I send a voice message while recording these memos and stuff? I mean, likely not. Let me just try them. I'm in the Telegram app. I'll start holding the voice message button in three, two, one.
Monday, February 23, 2026
7:52 PM · 1:37
Essence

The user is testing the audio recording capabilities of Meta glasses in conjunction with their phone, specifically how audio is captured when moving away from the phone and when recording video with the glasses.

Summary

The user is conducting an audio test using Meta glasses, with their phone nearby initially. They test moving away from the phone, even into another room, to see if the glasses continue to record audio. They also experiment with the glasses' voice commands, noting that "Hey Meta" seems disabled but the button still works for taking pictures. The user then attempts to record a video with the Meta glasses, wondering if this action would stop the ongoing audio recording on their iPhone. They express uncertainty about whether the video will have audio, or if both devices will capture sound, and plan to check the results.

View full transcript
Okay, this is just a quick audio test, recording audio with the Meta glasses and my phone close by. And now I'm talking really close into the phone, which shouldn't matter because it should be using the glasses. Then I'll move far away from the phone. I'll even go into another room. And as I'm walking into the bathroom, close the door, still I'm just talking. Hey, Meta. As now Meta is disabled, it seems like we can still use the button maybe. Seems like I can, interesting. So you can still take pictures. Probably not videos though, because I don't know how that works with audio. Well, let me try it actually. Let's try a video. So now I'm doing the video, and I'm guessing that would cancel or stop the audio recording that I did on my iPhone, but I'm not sure. Hey Meta, stop recording. Is it even listening to me? Seems like the voice memos are still capturing the audio from looking at it. More about the glasses, are they listening? Hey Meta, hey Meta. Stop recording. Seems like they're not listening to me at all. Let me try the button. Hmm, interesting. So now I'm guessing maybe the iPhone captured the audio and therefore the Meta glasses video will have no audio at all. That will be interesting to check. Or if there's audio on both, I would be kind of impressed actually.
Monday, February 23, 2026
3:22 PM · 11:33
Essence

The speaker reflects on a surprisingly easy training session, despite higher speeds, attributing it partly to a new nasal strip and contemplating the reliability of fitness tracking data and the balance of training intensity.

Summary

The speaker completed a training session that felt surprisingly easy, noting that their heart rate remained low despite increased speed. They questioned the accuracy of their watch's heart rate monitor and the consistency of treadmill readings across different machines, recalling a previous run that felt harder than expected on similar settings. A new nasal strip, which aids in nasal breathing, seemed to contribute to the ease of the run, though they also experienced a runny nose, which was less problematic than before. They also mentioned buying recovery shakes, acknowledging it's a costly habit but convenient. The speaker has returned to using Nike shoes after a bad experience with Hokas, and their previous ankle/Achilles soreness has mostly resolved. They anticipate a snow shoveling session later and are trying to optimize their strength training, aiming for muscle growth without overtraining or wasting effort, acknowledging their subjective feelings about training intensity might not align with objective data.

View full transcript
Training session finished. It went good, felt good for the most part. I wanted to note with the running, I think I surprised myself with how easy it felt and how low my heart rate stayed given that now the speed was even higher. My heart rate stayed at 135, I think on average, 137 maybe, where, what the fuck? There's some stupid drivers on the road right now. I was just confused by their behavior. Yeah, now there are some caveats, of course. I don't know how accurate the heart rate monitoring from the watch is, although I think it's accurate in the sense that, when I use the same watch consistently, I can trust that the numbers are correct, like, in relation to each other. So if it's higher one day or lower one day, I can trust that it represents a like real physical change. But then if I, for example, use a different kind of heart rate monitor one day, like the belt around my chest or waist instead of the watch, then I'm not certain how comparable those numbers would be, whether you can like truly trust them to be comparable or not. But that's not really relevant for this because I am using my watch on all of the easy runs where I'm trying to stay in this like easy intensity zone. So that's not really a concern, just, you know, something to be aware of. But then there is a different caveat, which is that these treadmills, I don't know if I can trust the numbers on the treadmills. Of course, you would assume you can, but there's different brands, different gyms, and even like within the same gym and same brand of treadmill, there could be potentially variations just between the treadmills as well. I mean, I don't know if there is and how large it is on average, but I mean, it's possible for sure.
And that again raises the uncertainty of like, if I run this like same speed and incline and duration on the treadmill, but then I use like a different treadmill from a different brand in a different gym another day, is that probably the same? Is it probably very similar? Can it be like quite far away? How high is the variance? I don't know. I became more aware of that like last week because the last of the easy runs last week on Saturday felt surprisingly hard. Like, of course, there's small differences, like it's still an easy run overall, but you know, slightly tougher than the other ones, which could of course reflect a difference in my biology, that I was more fatigued from the work during the week. It's hard for me to know, right? It's just a feeling, but even though I had the same numbers, it did feel to me like the treadmill was actually going slightly faster or slightly steeper or both, which made me feel like, you know, that there was some variance between the treadmills, but it's hard for me to know for sure. Anyways, yeah, session today, even with a higher speed than last week, my heart rate stayed low, so that was cool and also felt pretty easy. Now there was another difference today though, so another caveat, I guess, which is that I tried running with this nose strip or this nose tape that opens up your nostrils so it's easier to breathe through the nose. I had never used these, but I had them lying around and I've been cleaning my room, so I figured I should use them or throw them away. So I like tested it. I had two, I tried one last week, but it didn't attach properly, so I ended up not really getting to test it. It just fell off after a minute or something. Today, I attached it slightly better and it worked great. So I therefore also kept my mouth closed and was nasal breathing the whole time. And it was quite easy to do that.
The other runs last week, I did try at times to do nasal breathing, even without the strip. And I feel like I could do it, but it was tougher. So it was easier today because the nostrils were more open. But also, last week in general when I tried it, I had more issue with like a little bit of snot starting to run down my nose when I try nasal breathing. And so I need to like wipe it constantly. And that happened today as well, but it was so little that it was manageable. And yeah, and I don't know why there was less today than last week. If that's because of the nasal strip or if it's unrelated, I think it's actually unrelated and kind of random or other factors that I just had less of a runny nose today. But it could be like something with the form of the nasal strip also makes it run less. Perhaps there's more air there, so it dries more. I don't know. Yeah, overall felt good. Then also, on my way to the gym, I bought two of these kind of like recovery shakes again, which I drank after the run while doing the start of the strength workout. So definitely a big money waster buying these over and over. But I like the practicality of it. Ideally, I should just bring something instead. But you know, I haven't prepared it ahead of time. And so once it gets around that time that I'm going to head to the gym, I want to just get out of the house. I don't want to spend more time messing around at home preparing stuff. So that's why I just leave and then I have the convenience of just buying it. So it's fine. Not a big concern, but if I want to be more optimal, I might want to be more structured where I like have this shit prepared. Either like I don't need to bring anything at all and have anything during the workout, or if I do, I bring it prepared ahead of time, or I buy it on my family's budget and I just have them lying at home and I bring them from home. I think that was everything I wanted to note from the workout.
Yep. Now I'm back to using the black Nike shoes that I've been using for almost everything because I tried the other ones, the Hokas, and it was a bad experience as I've documented already. And now no issues with the Nikes. I was wondering about, you know, the stretching in the ankle or Achilles or lower leg. It wasn't a big problem today. I kind of forgot about it, which also kind of shows implicitly that it wasn't a big problem. I think I noticed it a little bit initially maybe, but then kind of forgot about it. So it wasn't that big of a thing. Also, I wanted to document that from that last run, you know, on Saturday, I got it more than any other session, which was maybe because it accumulated over time or maybe because of the different shoes I was using or maybe both. And then even that afternoon, I was still feeling it, like I could feel it had stretched there. So if I kind of bent down in a certain way, I would feel that it was kind of sore in that tendon that had been stretched. And I could feel that during the Sunday as well, that it was kind of sore in that tendon. But by around Sunday night when I went to bed, and today, Monday, it felt kind of like back to normal, mostly. Maybe there's a little bit there still, but maybe not. Maybe I'm just even hallucinating it because I'm looking for it too much. I would say it's essentially gone as of this morning. So back to normal. So yeah, that's good. I might have to do kind of another-ish physical session today of snow shoveling, snow plowing, because there's a lot of snow at home and it just needs to be cleaned up. So I think I'm going to do that as well and just hope that it doesn't really interfere with the training. I think it's fine. I think I'm overthinking it really. When I'm afraid that it's going to interfere, I think it's totally fine.
Also on this session today, I think I went slightly lighter, or not lighter, but slightly less, like less extra exercises and stuff compared to last week. I don't know what's right. I'm trying to figure out what's optimal training. It's kind of hard to define, but you know, I want to stimulate a lot of muscle growth, but I don't want to waste my time and I don't want to train too much where it's like, you're not really getting much more stimulus. You're just getting more fatigued, which I know is possible. Kind of like overtraining, not that it's dangerous, but just that it's a waste of effort and you're actually going to get gains slower because your body needs more time to recover. And I don't know if I am in that range or not. And I feel like it's kind of a bitch excuse to say that you're afraid of like training too much because probably you're just training too little or not hard enough. But I am genuinely concerned that I'm maybe just wasting my efforts, or rather that I could be smarter with my efforts. So I mean, you'll have to look at the logs to get like an objective picture of this because I'm kind of saying my subjective feeling. And even now, you know, it's like 20 minutes after the workout, I've already kind of forgotten about the details, but I believe I made some decisions around some of the exercises to just have slightly less of the extra like chest or triceps stuff that I usually do. And then
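The "look at the logs to get an objective picture" idea from this memo is a tiny computation once the sessions are logged. A sketch with made-up numbers (not real data), comparing the latest easy run's average heart rate against the earlier ones at the logged speeds:

```python
# Hypothetical log rows, one per easy run. Same watch throughout, so the
# heart-rate numbers should be comparable run to run, per the memo.
runs = [
    {"date": "2026-02-16", "speed_kmh": 10.0, "avg_hr": 139},
    {"date": "2026-02-21", "speed_kmh": 10.0, "avg_hr": 144},  # the hard-feeling Saturday run
    {"date": "2026-02-23", "speed_kmh": 10.5, "avg_hr": 136},  # faster, yet lower HR
]

def hr_drift(runs):
    """Latest run's average HR minus the mean of the earlier runs.

    A negative drift despite a higher speed is objective support for
    "it felt easier", independent of the subjective impression.
    """
    earlier = [r["avg_hr"] for r in runs[:-1]]
    return runs[-1]["avg_hr"] - sum(earlier) / len(earlier)
```

With these example numbers the drift is 136 minus the 141.5 average of the earlier runs, i.e. -5.5 bpm, which is the kind of check that separates "felt heavy" from "was heavy" (treadmill variance aside).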
Monday, February 23, 2026
11:51 AM · 2:13
Essence

The speaker wants to document their daily digital workflows and tool usage as a new, metadata-rich data source for their OpenColl system, hoping it will be captured and recalled in the future.

Summary

The speaker is reiterating a previous thought about adding a new data source to their OpenColl system, specifically a "metadata source" that documents their daily workflows. This includes details like which tools, software, devices, and apps they use, and how they use them. Examples given are their voice memo habit, taking photos for various reasons (food, outfits, acne, fun), their use of AI chats (and attempts to centralize them), and their past and present use of Notion. They acknowledge that some of this data could be inferred from screen time or app usage, but much of it needs to be explicitly documented by them. The speaker hopes this voice memo will be ingested into their OpenColl system so that in the future, the system can remind them to implement this data source or improve the system itself, even though the full capture and semantic recall systems aren't yet in place.

View full transcript
I just remembered something that I also thought about previously and maybe written down, but I'm not sure. I just want to reiterate it one more time. It's another data source that I want to have in my data sources overview. But this one, it's kind of a metadata source, because the data that I want should be my workflows, essentially how do I operate throughout the day, which tools or softwares or devices am I using, how am I using them, which apps am I using, how am I using them. So it's going to be something like, you know, it's going to document this typical workflow I have right now of dumping a lot of thoughts into these voice memos and how they might also get automatically synced. How I like sometimes take photos of the food I'm eating or sometimes just of my outfits or my acne if I'm concerned about that, or some photos I, you know, just photos you take with people for fun or because you think it's beautiful. I use them for like different functions. How I'm, you know, using different AI chats, but lately I've tried centralizing it all through OpenColl and building a centralized memory index, but it's still like undecided. How I've used Notion a lot in the past, but not really that much now. And like what kind of data is there for storing Notion, stuff like this. But there's a lot more context to add. This is one data source that can be kind of inferred from other things by like tracking my screen time activity and maybe my position or app usage and stuff, but some of it, large parts of it, are only available from just me kind of dumping it, like saying it myself or writing it down and then having it cleaned up on my desk. So I need to remember to do this. I'm not setting a reminder right now, but I am documenting it on this voice memo.
So if I get this ingested into the OpenColl system, perhaps in the future when I ask it for open items that I've said I want to like implement in terms of data sources or just improving the OpenColl system, then it can remind me of this. Hopefully this does not just disappear into the void, but I mean, I haven't yet created the systems to capture this information in a meaningful way and like semantically bring it back later. But I imagine I document it now and we'll get those systems set up in the future. Again, with this I can now say hello to myself and my OpenColl AI assistant system in the future. Hello guys.
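The "data sources overview" described in this memo could take a simple registry shape, which would also support the "remind me of open items" request. A hypothetical sketch; the field names and entries are made up for illustration, not the actual document:

```python
# Each workflow/tool gets one record: what it captures and how it syncs.
DATA_SOURCES = [
    {
        "name": "voice_memos",
        "kind": "self-reported",
        "captures": "daily thoughts, plans, training notes",
        "sync": "auto",         # synced from the phone automatically
    },
    {
        "name": "workflow_metadata",
        "kind": "self-reported",
        "captures": "which tools/apps/devices are used, and how",
        "sync": "manual dump",  # only available by saying/writing it down
    },
]

def open_items(sources):
    """Sources still needing an explicit dump: the reminder use case."""
    return [s["name"] for s in sources if s["sync"] == "manual dump"]
```

Asking the assistant for open items then reduces to a query over this registry rather than hoping a stray memo resurfaces.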
Monday, February 23, 2026
11:42 AM · 7:24
Essence

The speaker reflects on a productive past week despite not feeling "locked in," while also noting a potential cold, frustration with daily content creation and an AI project, and the need to improve their voice memo ingestion system.

Summary

Driving to the gym, the speaker reflects on their past week, feeling objectively productive with habits, training, and content creation, even if they don't subjectively feel "locked in" or like they're "crushing it." They aim to maintain this consistent, gradual improvement without getting overwhelmed. They also mention feeling slightly sick, like the start of a cold, but plan to ignore it and proceed with a combined run and strength session. The speaker was happy to wake up naturally early, setting them up for an earlier sleep schedule. They express frustration with their OpenClaw work today, having accomplished nothing, and acknowledge the psychological friction they experience daily before creating content, despite it being a quick task once started. They consider ways to reduce this friction, like better planning or a pre-set camera. They also realize they struggle to document their OpenClaw progress effectively, forgetting details between working and recording, and plan to record updates immediately after making progress or schedule it more reliably. The speaker is focused on making their OpenClaw time more efficient, having started a comprehensive but unorganized document on data sources. A key goal is to set up automatic ingestion of these voice memos into OpenClaw, as they contain valuable daily context. This would allow them to "communicate" with the AI by speaking into memos, with the expectation that OpenClaw would transcribe, process, and potentially offer insights or suggestions based on the content. They are annoyed that their previous attempt at automatic ingestion isn't working and needs to be revisited, along with decisions about syncing photos and potentially upgrading a Kodak subscription. The speaker concludes by needing quiet time to think through these issues more effectively.

View full transcript
Driving to the gym right now. I wanna give some updates on how I'm feeling now and what's been happening today, but also how I felt yesterday and what happened yesterday. Because I didn't bother to record a voice memo last night. I kind of wanted to, but also I kind of just wanted to go to bed as well. Let's hope he lets me. Yeah. So, let's start with today. I'm feeling excited about a new week. I'm very happy with how last week went. I wouldn't say I can definitely say I'm properly locked in. I don't really feel like that, to be honest, but I think regardless of the feeling, just objectively, I've been doing very good in terms of my habits and avoiding bad habits. Not perfect, but very good. I've been doing good with training, getting productive work done, also content creation. Like there's a lot of things to be proud of. I think I'm lacking that like just like core feeling or just feeling like I'm crushing it. I don't know why, but I think objectively I've done a lot of things good and kind of improved and stayed consistent and just gradually improved, which is, you know, the long haul. And the goal is to just continue that this week. Make sure that the current load is maintainable and I can just continue with the same and slightly improve, but not degrade, not get overwhelmed. Then I wanted to specifically say today, and I even felt it kind of like yesterday afternoon or last night and today, I am feeling maybe slightly sick. I don't know what it is and I'm gonna ignore it and just train anyways. It's a very slight thing, but it's just like slightly in the nose. Sorry, I just gotta concentrate a little bit on the driving right now. Yeah, it's a very slight thing. It's only like in the nose that just feels like something, you know, kind of far in, in the back of the nose, is just a little bit weird, like a little bit extra sensitive or just a weird sensation there, which I usually associate with like the initiation of a cold.
Like it's the start of a cold, but it's so mild right now that I'm not worrying about it. So mild that I could just be that I'm just like feeling weird and not necessarily sick. So I'm just gonna ignore it. Today I'm doing both a run and strength session combined. So extra long session. I think it should be fine. And then just continue with the week. Woke up earlier naturally, which I'm happy with because then it sets me up for earlier sleep schedule. And I think I've still gotten good sleep and stuff. So it was kind of a nice bonus. I wasn't expecting that, but I actually woke up kind of early. Then the work with OpenClaw today, very frustrating. Didn't get anything done. So I gotta be more mindful of that. I gotta think about, well, a couple of things. First for the content I'm making every day, you know, it's such a small effort, but psychologically before I start doing it, it's such a huge thing. It's like always every day this thing is like, Oh, I really don't want to do it. And then I do it anyways. And as soon as I'm into it, it's not that bad. It's pretty quick. But right before I start, I feel this huge friction. Now that might go away if I just continue doing this, but maybe I should try also making it easier for myself. Have a better plan beforehand of what video I'm going to make or suggestions for the video I'm going to make, what the content is going to be. I don't know, like have a camera set up that's already there, like reduce the friction somehow. Something I can think about. I kind of wish, since I'm trying to document what I'm doing with OpenClaw, but I find from the time I've worked until the time I make the video, I kind of forget like what was done in that session versus previously. I forget like what is new. So I should either like record very freshly after getting something done with OpenClaw. I think that's probably the ideal to just do that. Whenever I actually get something done, then record the video. 
Or just make it more like scheduled or reliable somehow. Because I find it a little bit hard to keep track of in my head. Especially since I'm not only working with OpenClaw in the mornings, but sometimes in the midday or afternoon as well. Then I got to think about how I can actually be efficient with my time and get power out of this thing. Because I just end up throwing time at it and not actually getting any value out. So I started, you know, making a document about all the data sources and the overview. But it turns out to be a very comprehensive document. And right now I've just added stuff without cleaning it up or organizing it. So it's not useful yet. It helped me think a little bit. Then there is, well, one of the main things I wanted to be able to do is to ingest these voice memos that I'm making. Because as soon as I get that set up, it has quite a lot of context of what I'm thinking every day. And I can start, in a sense, communicating with it without talking with it. Because I can just say stuff in the voice memos and I know it'll transcribe it. And perhaps also do like some thinking about it. Perhaps like live, or at night time, I ask it to look through the recent voice memos from the day. And like synthesize it in a way, organize it or like do something or come up with some suggestion for my life based on the voice memos. Just like something. The actual process initially can be small, but just like something where I know that all the voice memos are being ingested. And there is a processing layer on it, because that makes it more real and I can improve the processing layer later. But then at least it's real. And it's annoying because I kind of thought I did this already, to set up an automatic ingestion, but then it's just not quite working. So I got to revisit this again. Make sure it's actually working this time. There's some decisions I need to make with the photos, like the iCloud library and the timing, what way I want to sync it.
Should it go through the cloud inbox or just more straight into the thing? And then, yeah, maybe update, upgrade my Kodak subscription. I don't know. There's some things to think about. Yeah, I think I'm going to do some thinking now without talking, too, because I feel like it's hard to think while talking right now, actually. So I feel like I need the peace and quiet to think better about this. So I'm going to stop talking.
Saturday, February 21, 2026
11:40 PM · 15:33
View full transcript
Successfully finished this increased workout program this week, which I'm proud of. I will note from the workout today, I believe I said before that I was feeling quite good, but that definitely changed when the workout started. While running, I don't know what it was, because, you know, this is a different gym and a different treadmill, so there could be differences in the treadmill, but I don't know if that's a relevant difference or not, but it felt heavier than yesterday. I don't know, compared to the other times, I think maybe this one felt the heaviest, although there's slight differences. Like it's not that big. I was also looking at my heart rate the whole time, and it was higher than the previous one, so yesterday, for the running specifically. So potentially, I don't know what's more likely, if there's like a difference in my biology or just a difference in the treadmill or both, but it did feel like, and I do think that it's quite likely, that there's actually mainly a difference in the treadmill, that's the reason. I was also wearing different shoes today. So normally, for like every treadmill run and also every indoor run and also the group sessions, the Hyrox stuff where there's running, ever since I came back to Norway this winter, I've been using the black Nike shoes that I have, but I figured I should use some different shoes, so today I tried the like grey and orange Hoka shoes that I have. And I'm not sure if that felt different during the run. I have had an issue during like all of these easy treadmill runs lately, although more so this week specifically, and even more so yesterday and even more so today. So it feels like it's increasing, but I don't know if it is. But the issue is like I get this discomfort in my feet, like it kind of stretches a lot in the Achilles or lower leg or maybe even in the heel. It's sometimes hard to place. And then after a while, my feet will like fall asleep.
I think one does faster than the other. I think it's the left one goes faster, but I'm not sure. But I think during these last runs, it's been pretty consistently that I feel it more markedly in my left foot, but I do feel it in both. And after running for maybe 30 minutes, the foot is literally asleep, which I then interpret to be that there's some issue with blood flow or something, but I'm not sure how the biology works. I think I haven't mentioned this on any of my voice memos lately, but it's been an issue for multiple of the treadmill runs I've done. But not like all of the ones since I came home, I think, because I had kind of forgotten that this, I've had this problem for a long, long time. I have it documented in a super long ChatGPT conversation. So at some point I'll port my whole ChatGPT chat logs. I'll just port all my ChatGPT data over into the memory system and this open cloud system or whatever the system is. So then that should be accessible, but it's not right now, as of recording. I've documented this in detail for how it feels, but I've never found a solution to it. Like I've documented it over months, I think, and I went to a physiotherapist like once or twice even, but we never really found a conclusion, although they did give me some advice, which I also think I didn't actually apply. I think I just kind of ended up not really doing anything different. So it's been this issue that's been plaguing me for a long, long time, but I've never like actively pursued solving it enough, I guess. And it's just like been there in the background. It's like, it's a hindrance, but it's never stopped me too much. I don't know, but it does prevent me. Yeah. But, you know, I've still been able to do some marathons and stuff, but this has also been, always been something that I need to be wary of. And yeah, anyways, I don't want to get into the details of that now. So sometimes I don't notice it at all. And sometimes it's a huge hindrance. 
And I seem to be noticing it more lately. I felt it the most today, and I felt it so much that even after running, some of the pain kind of persisted. So during the whole strength session after, and actually even now when I'm just at home here at nighttime, I feel that my Achilles or lower leg has like been stretched or that it's being stretched. I don't know, it's like kind of sore in the tendon or something. I'm not exactly sure. It feels like it's been overstretched, I guess. And I would notice it specifically if I like stand up and then kind of lean forwards. I like keep the feet flat on the ground, but lean the whole body forwards, or like at least maybe crouch down and lean my body forwards so that the knee moves forwards, but the foot stays flat on the ground. Then I will notice that it's like stretching in the Achilles or the lower leg. And I'll also notice that it's like, you know, extra sore, or I will feel it extra because of the existing pain there from the run today. And I can't remember having anything lingering like that from any of the other previous runs. I think I just noticed it somewhat during the runs. So it was definitely the worst today of the ones I've done lately. And I think yesterday was the worst one of the ones I'd done up until that point. And then today it was worse, which means I have two possible theories. Either it's like gradually getting worse and worse right now, so there's something that I'm kind of building up over time as I'm doing these runs, like I'm accumulating a certain type of fatigue, or maybe an injury, I'm not sure. Or the second theory is that it was kind of random maybe that I felt it a little bit more yesterday, and that really today the unique thing was that I was using different shoes, and that could have been something where the feet land differently and that could like trigger it more. I'm not sure.
Also, I forgot with these shoes, it's a separate issue, but with the ones I used today, I get chafing sores on the kind of inside arch of the feet, because I think the shoes are slightly too big maybe, or there's just something with the shape. I don't know, but I've completely forgotten about it because I never get it from the black Nikes that I use. But now I got that again, so that's annoying. So I got to just throw these shoes away essentially. If not, like I would always have to use some protection. Yeah, it's just not viable. So I got to ditch these shoes. I totally forgot about that. The first like 30 minutes I don't notice anything. And then I like start noticing it maybe a little bit, but it seems to accumulate kind of exponentially. So the last five minutes it felt really bad, but then I see afterwards it didn't even penetrate the skin, but I did get these, in Norwegian we'd say like a water blister, like it's a little bit of a balloon there now, like a water balloon around the kind of injured skin. Yeah, and then for the strength session, you know, I said I felt great before, but then yeah, during the run I felt heavier than I expected. And also once I started doing pull-ups, I instantly felt that, yeah, this is like fucking heavy. I felt like my back was not fully recovered from doing the heavy pull-ups earlier this week. But otherwise, like my biceps felt good. I didn't really have time to train that much last time though. And I guess my grip also. My grip, I don't know, it was like fine. But yeah, I still trained it. I'm not exactly sure what I want to be doing on these pull sessions, since I'm not so worried with like building muscle on the back. Like I just really want to build like a big chest and big arms and get great abs. And then for the back and the legs, I don't care so much, to be honest. I think it's kind of good as it is already.
That might just be because like they're not the mirror muscles. Like I don't see them all the time, so I'm just not thinking about them as much. Or they might generally be better developed compared to like my chest. I'm not sure. I do feel like my chest is lagging behind, but it could also be just like a subjective thing, because like the chest is just the most visible in the mirror all the time, and the arms. I'm not sure again. Anyways, I came home, I had dinner, and that's kind of like weird because it's really just my second meal of the day. It's kind of like lunch, kind of dinner. And so with my routine, like today, and I think I mentioned maybe somewhere yesterday, it's just like weird with the timing, and then right after the dinner, or the lunch, it's such a big meal that I don't really wanna do anything. I just wanna like chill. And so it's weird that it's kind of like in the middle of the day and I'm supposed to do more chores after. Ended up today just going to my room to watch some Netflix actually. So I watched Stranger Things. I figured I would just watch a little bit, ended up watching a full episode actually. So much more than I expected. But I clicked away after one episode, which is good. I didn't start binge watching, and that's really good. It could have been terrible, but also it kind of is terrible in a sense, because I thought I was just gonna chill a little bit and suddenly I'm watching a whole episode. I'm

I haven't really been reading that much for also many months. And yesterday I didn't get further than reading like one or two pages and then I got kind of sleepy, I put the book away and then I think I actually fell asleep almost instantly, so that was kind of a miracle because that never happens. But yeah, we will see now, tonight.
95e073f25e5114426560bb2f4e485cc0e0a9ad3d83ad05ce31d5711f1e4a55aa_b627c0d7176f.m4a
Saturday, February 21, 2026
12:41 PM ยท 12:20
Essence

The speaker feels time slipping away due to a busy schedule of chores, content creation, and an extensive training block, leading them to question their daily structure and time allocation.

Summary

The speaker feels time is slipping away, despite following a plan that includes daily content creation, which they've recently been more strict about after a lapse. They haven't had time for deep work in the mornings, though they did make some progress on a technical project, "open claw," in the afternoon yesterday, which they now regret as unnecessary computer time that could have been spent on real-life chores. They're waking up later now, around 9 AM, due to prioritizing sleep for recovery from harder training, which makes them feel like they're starting the day late. Today, an extra chore of walking their brother took up morning time, and by 12:30 PM, they're just heading to the gym for a long training session (2.5 hours plus travel and showering). This extensive training block, at a further gym, is a significant time commitment that makes the day feel like it disappears, and they're questioning if it's worth the time spent, especially since they're not combining the travel with a work block today. They plan to focus on real-life chores in the afternoon instead of technical work and anticipate more tasks like snow shoveling, which has become heavier due to melting. Despite the demanding schedule and physical exertion, they feel good and strong during training, though they are uncertain about diet timing around their runs.

View full transcript
Okay, recording a voice memo from the car right now. I just have to double check that it's using the right microphone. Yeah, okay, it looks good, finally. Okay, yeah, so this day I feel like the time is really slipping away from me and that I'm just doing chores, but it's good. I'm following the plan. So I started the day doing content creation. I made a short and a long form. I also did that yesterday, which is good. I think I started the first time like four days ago or something, and then three days ago I didn't do it because I didn't make like a strict plan for it, and so then it kind of ended up not happening. So today and yesterday I've been more strict, like I have to do it again, because I want to have it as a daily thing. But both this morning and last morning, I haven't actually had time to like do the kind of deep work thing that I want in the morning, like focusing on open claw or something. So I've just done the content without actually getting the work block that I wanted to. But last night I did spend some time on the couch messing with the open claw in the afternoon, so I got some progress there. And I had some progress in the morning also. But I do, I'm kind of nitpicking now, but I do slightly regret the messing with the open claw in the afternoon, because that was, I feel, unnecessary computer time. I should have spent that in real life doing like real-life chores instead or other stuff, and been very careful with how much time I spend messing with the like technical setup. I want to be efficient with my time. I don't want to just throw a lot of time at it. Anyways, that's some stuff that I maybe wanted to say previously, but I didn't, because I didn't record a recording like this last night because I just couldn't be bothered. Well, today's a new day and I'm just following the plan, but I again didn't have time to actually do the work thing in the morning, but I did the content. I'm waking up later now.
I don't know how I feel about it. I mean, I wanted to try maybe switching to, you know, even earlier, like five, or just staying at six. But then some of the stuff with Flynn and everything made it impractical, and I don't want to set an alarm early and hinder my recovery since I'm training harder. I just want to let the body sleep. And so then it's dependent on like when I'm able to get to bed. So now my routine is more like being in bed from like 12 to 9 approximately. But whatever, that's also fine. I just kind of do feel like I'm starting the day late. So today, yeah, didn't have time. Then there was like an extra chore today, which was that I had to do a walk with my little brother, Jürgen, this morning, because my mom is like out doing her own thing. So that took some time. So I just woke up, had breakfast, did my content, and then we did the walk. And then as I finished that, came back, I just repacked my bag and I'm now heading to the gym. And already it's like 12:30 and I haven't like done that much. And this workout is gonna take a long time because I'm doing the run and the strength workout. Then I'm gonna come home, eat a big meal. It's like maybe lunch, maybe a dinner. And after that, it kind of feels like the day is over, although it's not, given that I'll probably not go to bed before around like midnight, like 12, given my current circadian rhythm. So I mean, I do have time, but it does feel like the main block of the day just kind of disappears. Anyways, I'm especially interested, like now with this training block that I'm gonna do, how much time just evaporates from my day. And I got to reason in the future about whether I'm actually happy with that time spent or not, or how I could cut it down and if I wanna cut it down. Now I'm doing a gym that's slightly further away as well. I'm going to Storo because I wanted one with an available treadmill, and it seemed like the closest one, Sats Røa, was busy.
So they have like an indoor running group session, and then like all the treadmills are busy. And I was looking at Colosseum, but again, it seemed like they have their group session, so they're all busy. And so Storo seemed like a good option. And I think now that I'm taking the car, it doesn't really take any longer to go there. I'm not sure. It's further away for sure, but I think given that I'm driving, it's approximately the same. Now my training session is gonna take a longer time. It's a one hour run and then the strength bit, you know, I'm flexible, but it's probably gonna be between an hour and 90 minutes for the strength part. So in total, like two and a half hours of training, plus a little bit of time before, after, and in between, like changing and also showering after. And then the driving back and forth. Like this is a big-ass time block that I'm taking out of my day to do training. Is that worth it? I'm not sure, but I'm doing it now because it's the plan for the week. It may be that I'm giving way too much time to this, especially now since, you know, I'm also doing the traveling there and back, which normally I would maybe combine with work, because the traveling is also for my work. So to get out of the house, I'll do like one work block while I'm out of the house, either right before or after the gym session. So the traveling is not just for the training, but also to work somewhere else. But I'm not doing that today. It doesn't actually make a difference whether I, you know, travel just for the workout and otherwise do all work from home, or whether I use the fact that I traveled somewhere to also do one work block somewhere like closer to the gym, because ultimately it's the same time spent traveling and training and working. But it just kind of feels, you know, good that the trip, the excursion out from the house, is getting more done than just training.
So the traveling time feels less wasted since I'm doing both training and work while I'm out. But yeah, I mean, really, it's the same, so it doesn't matter. And then, I mean, in the afternoon today, I will have time to like play with the open claw technical stuff if I want to, but I'm gonna say that I'm not allowed. I have to do more like real-life chores, like keep reorganizing my room a little bit and stuff, cleaning stuff up. And then tomorrow morning I get to do a work block on that, and I still have to continue with the content creation tomorrow. So today is going to feel like it disappears a little bit quick, but that's all right. There's also more tasks adding on, because it's been snowing quite a bit, and that means we should do some snow shoveling at home. And also just now today, it actually got warmer. So today it's like around zero degrees or even maybe like plus two degrees, whereas it's been like, you know, minus seven, minus ten for a long time. So now there's a lot of new snow we need to shovel to be able to maneuver the cars and the trash cans and the mailbox. And also since it got warmer today, it's starting to melt. That means the snow is getting heavier, which I don't know if that makes a difference for the shoveling, but it definitely kind of feels easier when it's not melted, because the snow is lighter, but there's also more volume. And then as it gets warmer, it melts, so it gets more compact. So there's less snow, but it's also heavier. But I think it's actually psychologically kind of tough when the snow is heavy, to be honest. It's like easier to get too much in your shovel in one scoop, and then it feels heavy. But anyway, there's a lot of snow we got to shovel, you know, outside of the house and around the mailbox and the trash cans. And then ideally also outside along the road where people park, where it's not just us but a lot of other people, but also us, park.
And I kind of want to be, you know, a good, hardworking citizen, both for my family and for the people in our neighborhood, and shovel everything. That takes a long time and effort. And especially, like, I don't want it to hinder my training in any way. And also, yeah, it just takes time. So I might do that in the afternoon today, but I'm not sure if it's a good idea or not, given that it is kind of a physical exertion and I'm already like training hard. I've been wondering if I'm like training too much compared to recovery, but I don't know. I feel really good today. And yesterday at my session, I mean, that was interesting, yesterday at the training session. I'm pretty sure I documented it. I felt kind of tired beforehand. Didn't know if it was a good idea, but as I got into it, both during the running and during the strength, man, I felt good. I felt more fit and more strong and did like better lifts and everything. So I don't know what that was, but I felt really good. And today, honestly, as well, I feel really good. I'm a little bit uncertain about what I should do diet-wise, because when I'm doing these sessions where I'm going to do that like one hour run at the start, I don't want to eat too close to the running because it can get like uncomfortable in the stomach. And so now I've just had my breakfast at like nine or
642767ab8ac7c2420380126c1cc41379a15c93468ce52ba158f123c5f249577a_04d1e56c6158.m4a
Friday, February 20, 2026
1:14 PM ยท 2:29
Essence

Despite feeling unexpectedly sore from recent training, I'm heading to the gym to follow my plan, hoping it's the right decision even though I'm uncertain.

Summary

I'm on my way to the gym, planning a one-hour run and a push or mixed session, though I'm unsure if it's a good idea given how sore I feel. I'm just sticking to my training plan because I don't feel capable of making an informed decision right now. The run was originally scheduled for earlier in the week, but I moved it to today because I did a leg session with Flynn, adapting my schedule. I'm feeling sore in multiple places, which suggests a higher training load than expected, particularly from strength training, even though my cardio feels normal. My chest is still sore from Monday's session, and I'm also feeling it in my back and shoulders, making me question if I should do the planned run after heavy legs yesterday, or the chest work with existing soreness. Despite these doubts, I think it will be fine, especially since the run is light intensity and I want to focus on chest, triceps, and shoulders. I just wanted to acknowledge this uncertainty and how my body is feeling.

View full transcript
Headed to the gym right now. I'm going to do both a one hour run and a push session, or a mixed session. I'm not sure. And I really don't know if this is a good idea right now, but I'm just kind of following the training plan that I made at the start of the week. I don't think I'm able to make a great educated decision right now, so I just got to try it and see how it feels. Per the plan, I was going to do the run not today, but like yesterday and the day before the leg session, but since I did the thing with Flynn to be adaptable, we just kind of trained together. So I did the leg session, but not the one hour run beforehand. So I moved it to today and then I'm doing one tomorrow as well. The reason I'm skeptical right now is I'm feeling sore in many places, to be honest. So I feel like it's noticeable that the training load has been higher. I'm not sure how to say it, but I'm more sore than I would have expected from the strength training, which I think is because it's been a higher total load, but I'm just kind of hypothesizing, because cardio-wise I'm not really feeling anything special, just more sore than I expected from the strength stuff. And today, you know, per the plan, I'm doing like another push session. I don't know, I'm sore in many places and still a little bit sore in the chest, which is kind of crazy because I had chest on Monday, right? And maybe a little bit on the HYROX session on Tuesday, but mostly on Monday. But also maybe I'm feeling it wrong, like, maybe the soreness is not really in the chest but more in the back, which is close by, which would make more sense. It's a little bit hard to say. I feel like I'm a little bit sore in the shoulders as well. But honestly, it's probably primarily from the back session.
So now I'm uncertain if I should do the one hour run today, given that I did heavy legs yesterday, but I think it's fine since it's like a light intensity run and my legs work fine, although I can feel they're sore and I'm uncertain if I should do the chest stuff given the soreness, but again, I think it's fine. Get some more focus on chest and triceps and shoulders, which I want and need. So yeah, overall, I think it's fine. I just wanted to note this uncertainty and how the body's feeling.
3f45ea7c7f1a9bf057b4e3393d15c50ec45d1d062622b66dd0daacc799d661ff_a4b21506ac6e.m4a
Wednesday, February 18, 2026
11:15 PM ยท 13:37
Essence

The speaker reflects on a productive day, despite some time-wasting, and contemplates the nuanced feelings surrounding personal habits and the challenges of new tech, while also noting details about diet and a recent workout.

Summary

The speaker is heading to bed, feeling the day is complete despite not being tired. They honestly admit to ending the day with porn and masturbation, feeling conflicted between it being a harmless pleasure, shameful, or an unhealthy dopamine reward, though generally they've come to view it as mostly neutral. They consider the day a success, having made content, gotten a haircut, and completed their gym workout and other tasks. Despite this, they still feel like more could have been done, noting a messy room and a long to-do list, including snow shoveling. They acknowledge this feeling is somewhat irrational given their accomplishments but also identifies concrete time-wasters: getting sidetracked by a new "open claw" tech setup in the morning, delaying content creation, and again in the afternoon, instead of tackling their "get my life in order" list or organizing their room. They estimate spending an hour or two fiddling with the tech, feeling physically and psychologically drained from screen time. They express frustration with the "open claw" setup, finding it difficult to get working despite seeing others achieve great things, leading to doubts about their own capabilities, though they rationally acknowledge the issues are specific to model dilemmas rather than the tech itself. They briefly mention being happy with their haircut and then detail their diet for the day, which was a bit varied. Finally, they reflect on a recent group workout session, noting a minor issue with gas but more significantly, a feeling that their body didn't want to push at maximum intensity, leading them to ease off slightly, a feeling they want to compare with heart rate data from previous sessions.

View full transcript
I'm going to bed now. I'm not feeling super tired though, so I don't know if I'm gonna fall asleep quickly or not, but I feel like the day is finished. I'm gonna be brutally honest, even though this is gonna be weird to say, but also I think the honesty is just kind of valuable for context. So I just ended the day by jerking off and watching porn, because everyone else had gone to bed in the house, and I don't know how to feel about it. I don't feel too much about it. Part of me says it's completely fine, it's just like a nice little enjoyable thing, nothing to worry about. Part of me thinks it's kind of shameful. Part of me thinks maybe it's like, you know, too easy of a reward and not good for like life, dopamine balance. Yeah, I don't know, but overall my feelings toward it as of lately, really in general for like the past few months, are more like, you know, there's not much to it. It's like fine to do. It's fine to not do. I've gone periods in the past without doing it for a long-ass time. And then I kind of reached the conclusion after that that it's fine to do it here and there, as long as you're not, you know, dangerously addicted. Today, I got the shit done that I wanted to do. I made content and I got my hair cut, on top of the normal stuff of doing some normal like open claw work and doing my gym workout. And so it's a fucking success. I did exactly what I wanted. Still, I end the day kind of feeling like I haven't gotten enough done, which is interesting. Like, my room is still kind of messy, needs to be cleaned up. I have so much in my reminders of like getting my shit together and getting shit done that I need to take care of. I want to also do like snow shoveling outside because it snowed today. Didn't get around to it.
Um, yeah, but to be honest, I can sit in this feeling and complain, but honestly, it's kind of irrational, because I did get some good shit done today. But I can also point out a few concrete kind of time wasters today, and then use that as concrete feedback for how to improve in the future, and then just let go of this feeling of whether I got enough done or not, because it doesn't matter. But there are some concrete things I did well and some concrete things I did poorly. First of all, the main thing today was to make content, because I kind of knew I had to do that before I leave the house. Uh, but I ended up sitting in the morning like playing with the new open claw setup and therefore postponing the content, because I was just gonna do like one more thing and one more thing and one more thing. I should have not allowed myself to like play with anything at all, just gone like straight into making the content first. A second mistake was now in the afternoon and evening: came home, ate and packed my bag, had a shower, and then again I went into playing with the open claw setup. And that was again a mistake, because in the afternoon I shouldn't really be allowed to work on that, because I need to like limit the hours that I spend on this tech stuff. It's for like the morning hours, and this morning I already spent time on it. And I'm just like fiddling with the config, not really getting much done. It's like very much a time-wasting debugging step, and it's nice because I'm like more bulletproofing my setup, but it really feels very wasteful as well. And I should have taken this time for what I was supposed to do, which was to go through the get-my-life-in-order list. There's like some things I need to order. I was supposed to order that right now. I didn't get around to it. And then I could organize my room more. I wouldn't have done the snow shoveling, I would have done something inside. Yeah.
Order stuff online and/or like organize some more stuff in my life or like clean some stuff. That's what I should have done instead of staying fiddling with the open claw setup. I just got sucked into it, forgot about time again. Time just flew past. Yeah, it was bad. Not too long overall. Like I really don't know how long I sat with it, maybe an hour, maybe two hours. I don't know. I actually have no idea. I feel like it can't really have been more than one hour, but still, even just from that, I'm gonna just like physically after feel like I shouldn't have spent so much time on the screen there in the afternoon. Like, you know, I think it's psychological and placebo more than actually physical, but I feel like my eyes are getting a little bit tired of looking at the screen. Honestly, I haven't been looking at screens that much today. I think it's more, yeah, it's like psychological, I just feel like I was looking at the screen more than I should in that time. But yeah, anyways. Yeah, today I started with open claw. I feel like I see it online and people are doing these great things. And then I try myself and I just get stuck in all these fucking hurdles. And yeah, so everything that I wanted to do that seemed very achievable right now, it feels very much like, fuck, maybe none of it is achievable at all, because I just cannot get open claw to fucking just work, basically, in a basic manner. Yeah. That's not so rational, because actually it is working well. It's just that concretely it had some issues with the different models. And I just want essentially one that's very smart and powerful, but also super fast. And that's like a dilemma where I can't have both. I can have the powerful ones, and they're going to be smart, but they're going to be less fast. They're going to be slow and they're going to be super fucking expensive.
Or, it's relative, but it's going to be more expensive. Yeah. Don't have much more to say. I got my haircut. I'm happy with it. Well, I shouldn't need to say it, I already said it in a video, I don't want to be repeating myself. Diet is something I wanted to note. I wanted to talk about it yesterday as well. Let's see, today. Breakfast, oats. I'd say it was a little bit all over the place actually. I had some like bread with butter and cheese. Um, an apple, a protein shake. That's gonna mix like breakfast and lunch. Then I went to the gym. After the gym I bought like a recovery drink. It's called Eat. It's Norwegian. It's like carbs and protein essentially. And then for dinner, my mom made, I took a picture of it, so, different things. There were some like leftovers from the thing we had yesterday. And there was, um, she makes this like cool dish, I don't know what the name is, it's like a poke bowl or something. It's like sashimi salmon and mango and other stuff, like fresh and raw, and some sausages. I had like two spicy sausages. Um, yeah. Uh, yesterday I wanted to note, because I did the group session, and there was like the situation with like the gas. I had some gas, like some farts, but it wasn't really a problem. There was never a smell problem that I noticed. I think I kind of let it rip once or twice during the running there. But then I was moving, so I guess I wouldn't smell it. And it was loud music, so you don't hear it. I never had it while I was on a station. So yeah, I don't know, but it was less of a problem than one of the other times. Still, like, it should be less. It should be zero. But on the session though, I did feel like, you know, I was pushing hard from the start, maybe on purpose. Yeah, as soon as we started running, I was pushing hard because I was competing with Albert and stuff.
And as we're getting like into the session, like halfway, I really feel like my body, my heart, doesn't want to be like pushing at max intensity right now. Like you feel your heart is beating hard, but also you can kind of, it's like an extra feeling on top of that, whether it feels kind of right or wrong. I don't know how better to describe it. And it just kind of felt wrong to have such a high heart rate on that session. So I ended up pushing a little bit less. Like it's still high intensity, but a little bit less for most of the rest of the workout, except for like the final sprint, just because I felt like the body didn't want to be pushing at that intensity. These are kind of small differences, but I'm just talking about what I'm feeling. So I don't know what the heart rate actually shows, but I feel like compared to, for example, last week when I did this, or two weeks ago, uh, I think I pushed harder during those ones than I did on this one. But it would be interesting to look at the heart rate data to see. I think maybe last week I forgot to bring the heart rate monitor belt, so it's just from the watch, which might be less precise. I don't remember. I'm wondering if the data captured which device was recording the heart rate. I think it should have. I'm not sure. And so that was interesting. I really don't know why. Could it be because for once I did a longer warmup right before, like 30 or 25 minutes on the treadmill, which usually I don't get time to do? Could be related, but I don't think so.
d4a72f485cab5aa83ab1337f1f8853ce09ab93cd028ae2bb6e2e7d919a572416_b05fe138f59d.m4a
Wednesday, February 18, 2026
12:30 AM ยท 38:24
Essence

The speaker grapples with the tension between documenting personal experiences for a future AI system and the inherent difficulty of capturing nuanced, feeling-based thoughts, while also reflecting on the nature of productive thinking and the evolving perception of AI consciousness.

Summary

The speaker, despite not feeling like it, records a voice memo to document their day, including meeting Albert and Flynn, and an upcoming content creation session with Flynn that makes them nervous but excited. This leads to a broader reflection on the limitations of current data collection for personal AI systems, noting that without access to messages or constant audio, a system couldn't fully infer their experiences. They spend their subway rides deep in thought, questioning the productivity of their "thinking" which often feels more like repetitive thought loops or feeling-based rumination rather than coherent, forward-moving ideas. They connect this to Jordan Peterson's idea that writing is a vessel for productive thinking, contrasting it with their own tendency to get stuck in feelings when just thinking. The speaker expresses a desire to be perceived as intelligent and to produce novel, permanent thoughts, especially in the rapidly changing landscape of AI. They envision an AI system that can not only collect and synthesize their scattered thoughts and data sources but also understand their workflow to suggest improvements and new tools, essentially becoming a self-improving, self-aware entity. This leads them to ponder the personification of AI, acknowledging the human tendency to attribute traits to non-human things, and questioning whether AI's emergent behaviors might be a step towards understanding consciousness itself, even as they recognize the underlying algorithms and data. They also consider the evolving identity of their personal AI assistant, which will become more defined with use, and the importance of documenting their workflow for the AI to optimize their daily processes.

View full transcript
I don't feel like recording a voice memo right now, but at the same time, I have a feeling like I should to document the data. So today it was cool to meet Albert and Flynn. Got to connect better with both of them. Flynn made his dinner. It was cool. Again, I'm thinking, would that be able to be inferred from my data? Actually not. Well, if my system had access to my messages, then it would be able to infer that. Otherwise, just from the position, it wouldn't really be enough. It would need my messages or alternatively, it would need to have me recording audio 24-7 or ideally both for it to really understand automatically what I did today. So after the training session, we went to Flynn's place. He made his dinner. He was living there with his dad. I didn't fully understand the living arrangement. The son. Yeah. Flynn suggested to me I join him in the city one day to record some content. So we agreed to do it Thursday. It's really good for me. I'm nervous about it. It's going to really put me out of my comfort zone, like the stuff he wants to film, but that's also exactly what I need. I don't want to bother with describing it right now, so this is something I wish could be captured more automatically somehow, but it would need to like listen to the convo we had or even just like see my POV, like see the stuff he showed me on his phone and stuff. Hard to capture this, I think. You know, it's fine, but I don't capture it. I don't need to capture everything. I'm thinking a lot about AI and systems. I don't know if my thinking is productive. The whole subway ride down to the gym, like 30 minutes, I wasn't on my phone, not doing anything. I was just thinking. And the same now on the way home again. Yeah, just thinking almost the whole time. That one home actually went super quick, weirdly enough. I say just thinking, but like, I don't know if like, can I call it thinking? I feel like there's an argument against it because the thinking is not necessarily productive. 
Like I'm maybe just running the same thought loops over and over. Like I'm not getting anywhere. But I mean, yeah, that is thinking. I just don't know if it's productive thinking. When I just sit like that and think, I find it really hard to stay focused with my thoughts and follow like a chain and like move on. I usually just get stuck in one place. And it's more like it's very feeling based usually. Like I don't know if I actually think so much, I just like feel. How can I describe this more concretely? Well, when I was taking the subway down, I was thinking about which data sources I have that are collecting data about me automatically and how that might be able to integrate into the system. And then I'm trying to think about this, but I just get filled with this like very intense feeling that, you know, I need to figure this out. I need to collect all the data and I have to get it in the system and it has to be all automatic and I just have to get this done. Then of course, if you just zoom out and think about it, it's like, well, why? Do I need this? Like, is this vitally important? Well, no. Most people don't do this. In fact, almost nobody does this. Nobody that I know about. I don't need this. I just think it would be cool. I'm very obsessed with it now in the moment, but it's not like I need it. And so then I didn't actually do very concrete thinking about what those data sources would be. I'm just kind of like sitting with that feeling. I really just like sit with the feeling and don't really think that much in any direction. My thoughts just keep going back to that feeling, essentially. So I don't know, that's probably a very normal human thing, but I've never really had much discussion about that or read about this thing that I'm trying to describe right now. It's a little bit hard to articulate, I feel like. Kind of the same thing on the subway up. 
As I've said before, I feel like talking like this definitely helps me focus my thoughts and actually make a move forward. So also like writing. I think I should do more writing, actually, just to help me think. That reminds me of a very cool clip I watched from Jordan Peterson from one of his lectures where he was educating the students on this question, like, why do we write? Why do we learn to write? Why do we need to write in university? Why do we need to learn to write in university? And his answer was, well, we learn to write in order to learn to think. Learning to write is a kind of vessel for learning to think. Writing is a vessel for thinking. I think that point is interesting. And then, you know, thinking, then it means, the way I interpret it now, is not just thinking in any type of thinking, because I'm always in thought, someone's kind of thinking, but it's like, you know, you write to, as a tool to enable productive thinking. Thinking that actually moves forward and is coherent and based in reality and not in your feelings. And you're more easily able to move on from a topic to the next one if you've written enough about it. Whereas if you don't write but just think, then it's easier to get stuck in the feeling or in the thought. But I think really in the feeling. One thing I keep thinking about throughout the day is like whether, like I want to be smart, I want to be intelligent, and I want to perceive myself as an intelligent person. And I do, well, I don't know, it fluctuates. Sometimes I see myself as an intelligent person, sometimes I kind of don't. But I don't know in the times that I don't if I see myself directly as unintelligent. I don't think I do, it's more like I see myself as unskilled in that thing. But that's definitely irrational, sometimes feeling based. Like I'll sometimes in social situations feel like I'm just autistic, like I'm just not really getting it. 
Or I'll try like some new skill and I'll be really bad at it and I'll like think, oh why, I suck so much, I'm so bad at this, even though actually it's completely normal when you're new at something to be bad at it. What the fuck is my point? I don't know. Yeah, so I want to be intelligent and I see myself as intelligent, I don't know if it fluctuates. I'm wondering if I'm like, am I really like producing any intelligent thoughts or not? Or like very much like normal and like regular and pattern-based thoughts. I don't know, like you really want to have like novel ideas and novel thoughts and be individual. Like not shape your opinion after the world, but truly like, you know, think about it yourself. Maybe think about it from first principles. I don't know. I feel like it's like a kind of trendy Elon Musk thing, but just like, think about something, you know, yourself. And I don't know if I do that, if I do that well, if I do it enough. I'd like to think I do, but I have like no objective measure of it. With AI I want to be able to, like, things are changing so much and the information I'm consuming is so like temporary. I want to be able to state some things that are like more permanent or they're certain for a long time period or forever. And I want to have these novel thoughts that I come up with with myself, by myself, who are kind of like statements that you can say and they just have to be true. I think if I think too much about that, then I reach the same issue, which I know philosophers reach of like trying to figure out what you can say is actually true and certain. Like how you can know anything at all. I don't want to go that far. But I want to be able to come up with this. These statements that are essentially highly likely truths about the future and that I've come up with. That's cool. And I think I have them here and there. And then other times I feel like I have no idea. I feel like I had some today, but now I don't remember in the moment. 
Some of the things I've said about, you know, the workflow that I'm imagining with AI and that kind of, you know, depreciation of, or that's not the word. Depreciation, the... Well, just the fact that the operating systems we use now will become outdated. They can be updated, but really like it's just going to be a completely different operating system that is used in the future. That statement in itself doesn't mean anything because it's of course true because you just say in the future. But I mean, I mean in a more specific like time frame, like in not too long, you know, like within... How long? Within a decade? I'm not certain, but yeah, I think likely within a decade. Yeah, for sure. Things are changing faster and faster. That's the thing I forget. And it's, I mean, the world is slow to adapt, but honestly within a decade with the current speed of change on things, yeah. It's going to be a completely new paradigm for standard operating systems that most people use for like tech. More like natural human interaction with the tech. Then could I be more precise than like within a decade? It's... It doesn't say much, like it's a very wide range. But I have no idea how to be more precise than that. I don't have enough knowledge. Yeah, I thought I had actually. Yeah, I'll get a couple. I'm probably going to set up OpenClaw just on my Mac. I have security concerns. Maybe they're big. It's so hard for me to evaluate. I realize I'm taking a risk

There's a lot of unanswered questions about humans and the brain and how we work, but at least for the AI, we know more about the parts and how it works is the thing. So, like, we know it's not a person, whether it's a consciousness, I guess we don't really know and it depends on your definition. But when I'm, like, referring to this thing and I, like, give it an identity and a name, there's some weird aspects about that. But I think you can maybe break it all down to the reason it gets weird is because we've taken something that it's not actually, or, like, giving it human traits, even though it's not human. And that's, like, a thing we do with our language all the time, like, give human traits to non-human things. And that just breaks any, like, logic following it, but it's still something we like doing. So that's the thing, like, give it a, we say it has an identity and a memory and stuff. And that's why, well, I haven't even said the thing that's weird yet, but that's why, like, things get weird. But you can also just say, zoom out from all that and say, no, it has no, like, human traits because it's all, it's not a human, obviously, it's a machine and an algorithm and data. And we all, we know how it all works, like, it's all, you can show it with the math and the algorithms and the data. And therefore, it's not like it thinks or it feels or it has an identity, it's just a combination of algorithms and the data and the math. What I was going to say, which is weird, which is like, it's like when I'm building my system, my AI assistant personality or whatever, over time as I use it more and more, I think the identity will become more kind of locked in place, like, more firm. But in the beginning, the identity will be super shallow and fluctuate a lot.
It will change a lot because I can just change it a lot manually and I'll suddenly turn on the system and reset it up on a different device or with a different system or a different data set or different memory or whatever. While I'm yapping right now, I just really got to go to the bathroom and take a dump. All right, that's done. We'll continue the yap, but first, another thought that I had, which I want to say before I forget it, is that, yes, it was amongst the data sources that I want to at least be aware of and think about if I'm going to collect them, whatever. Like, what is the data sources that exist and which of them can provide me value for the system? One, which I've thought about before, but I don't know if I've documented it anywhere yet. I just realized again, so let me jump it now. It's documenting my workflow or dating it or having data on my workflow. What I mean is that I should have data on exactly which tools I'm using throughout my day, like both, especially digitally is what I'm talking about, but it can apply to everything, but especially digitally, like all the tools that I'm using, which like softwares, apps, devices, operating systems, websites, then how I'm using these tools and, like, how much and, like, inside of them what I'm doing. This is kind of collected already with the timing data, like the screen time and activity data. But, I mean, I think that actually captures a lot of it, but not everything. For example, it's not capturing, you know, conversation history within AI apps, but that can now be collected through... I realized this one is kind of, like, partly covered by different things, but if I set up multiple different things, then at some point it becomes kind of fully covered, which is cool. Why is this important? 
It's because if my AI system knows how I'm interacting with all my tools and, like, what my workflow is throughout my life, then it can look for inefficiencies or potential for improvements, and it can also look, pay attention for me to new releases of things that could improve my workflow. So I don't have to do manual research about anything anymore because it already knows exactly what it should notify me about or just, like, change in the workflow under the hood in its own tools. Or you can, like, notify me, hey, here's this new release, which it suggests would fit, like, perfectly into my workflow to replace some part or upgrade some part. So, yeah, again, it becomes this more and more self-improving system, but the system kind of has to be very self-aware. The system needs to know exactly how it works and also how I'm using it. It's very interesting how this starts turning into, like, a kind of conscious system or what we at least feel like seems similar to consciousness. That's interesting. Let me continue the yap. By the way, I'm noting random things like this. Like, I just remembered the new data source because I imagine at some point, maybe even as early as tomorrow morning, actually, I want to write a more... I want to try and synthesize my information and my thoughts throughout the weeks. And so I'm trying to document a lot of them, capture them. And then at some point, I'll try to collect them and organize them and synthesize them. So I'll, for example, want to make an overview of, like, all the data sources that exist, like data that I have that might be useful for each of them. Like, how useful they are, how accessible they are. Can I, like, automate the data ingest or no? And so for that, I'll have to, like, in the moment, think about, like, be able to think about and, like, remember that all of them exist. But it's easy to forget in the moment. 
But then I imagine I could have my system search all the voice memos or all the data that we have in our system for any, like, thoughts I've had around it to help me build that list because I've, you know, mentioned or listed a lot of stuff already. But it's kind of scattered throughout, so the system has really got to be able to go through a lot of transcripts of my yapping in order to find the correct stuff. Now to continue the long yap. What was I even talking about? Yes. Personification of AI systems and whether it's, like, conscious or not. I'd say I don't know. Maybe it is. It's showing behaviors that, you know, seem similar to consciousness. And it feels weird because, like, we know how it works. We know the parts and therefore, we kind of think it's not consciousness because we made it. But also maybe that we're just starting to get closer to what consciousness actually is, what evolution has developed. Maybe. We don't know. But yeah, the interesting part about a system and an AI system like this is that we know how it works and we know the parts and the parts are composable. Because we can take what we usually kind of call the brain for these systems, which is, like, the main AI model driving it. Like, right now it would be, like, Claude Opus 4.6 or OpenAI Codex 5.3. Like, popular choices for the brain, which is, like, the main. Do you still call these just the LLM or does that, is that not correct anymore? I think that's actually not correct anymore. That it's not just the LLM. Like, the interface is similar, but inside it's, there's, like, more steps than just an LLM. I'm not sure. I think especially Codex is specifically not an LLM, but rather more like steps built around it, but I'm not sure. It's an AI model, I guess, is the term that's used.

But yeah, there's a brain. It's weird to use the term brain because in humans, we say like the brain is everything. It's all the thinking and all the memories and all the feelings and everything. In the AI system, we kind of like separate them. So we say the brain is the AI model, but then the memory is completely separate. And then like the skills again, it's kind of like a separate thing and tools, kind of like a separate thing. Anyways, yeah, so in the AI system, the brain, you know, is replaceable at any time with a new and improved brain or a brain AI model from a different company. And then the memory is completely replaceable. And then that begs the question, like, is it still the same kind of person you're talking to? Well, that would be weird. But is it still the same system? Well, yeah, it definitely is still the same system. Well, it depends on how much you change it, but yeah, it's like the system, you improved it or you changed it. So is it the same person? That question doesn't make sense because it was never a person, it's a system. But we like to put these, you know, personality traits on it to kind of feel like it's a person. Uh, even though it's not. But then, like, the way I'm gonna continue referring to it is that, you know, I'm kind of going to give it person traits, but at the same time understand that, you know, we replace parts and that it's gonna change a lot, but I'll still like call it by one name and just say like, and then the name, I don't know what it will be in the future. For right now, it's like OpenClaw or it's Jarvis. Then I'll say like, hey, Jarvis, today we're gonna replace your brain or replace your memories or whatever, which is weird. Also, since these AI models kind of, you know, don't carry states, but they persist context through their memories, that means that right now, even though I'm not talking to the AI, I'm just recording a voice memo all on my own. 
Since this data is being ingested and is gonna become part of the, um, system or my personal AI, that means in a sense I am actually talking to my AI right now, which I don't know the name right now. I guess it's Jarvis. So I can say, hey Jarvis, I'm talking to you from the past right now, which is kind of cool. Yep, I think I've done enough yapping for tonight. I should try and get some sleep. How's the day overall? It was good, you know, I did some computer work. I did also some real life stuff, you know, I did a good training, social with the guys. I did also snow plowing at home. So that's good. I feel like I didn't get enough done, like the day disappeared way quickly. I didn't really get that much real life stuff done or that much computer stuff done. I don't know where the time went. And yeah, I cannot be waking up at 5 tomorrow because I'm going to bed so late now. So I'm just gonna not have an alarm. I think sleep is very important for recovery now that I'm trying to up my training load as well. So I think I wanna set this sleep schedule of waking up at 5, but then if I gotta do that, you know, it's a compromise. I gotta choose. I cannot stay up late. So for now I'll just prioritize the training and the flexibility and I'll let the sleep schedule just be for now. It's fine. I really wanna start making content. I think I should force myself tomorrow. I should do some OpenClaw stuff, set it up for myself. I really wanna research also how the guy that made it, what's his name, like Stein Pete, I think. Research his like just regular day workflow. Although I'm not necessarily gonna do that tomorrow. That's just like an item that I really wanna do. I think it's just probably so many interesting lessons to learn from him. And I know he's written like an article about it, I think, because I've seen people refer to that, and he's probably posting regularly on X as well. I wanna read that main article.
He's discussing like always committing to main and having like a lot of terminal windows running Codex open at the same time, like 12 or like 20 or something, maybe more. Just some of his like philosophies and workflows and principles around development and building. I'll be here tomorrow. I should definitely, I need to fucking make content and get my hair cut. It's like the two big things that I always say I wanna do, but I always procrastinate. I can make like one short and one long form video, and I could set a rule for the foreseeable future that I have to post one short form and one long form every day. And I'm wondering right now, should I set that as kind of like a law or whatever, whatever? Honestly, I think I should just set it and then just fucking do it. It's not that much work and I want to do it. Usually I'm careful about setting rules like this because I know it's important that I actually fulfill them. Otherwise I lose trust in myself. Honestly, this one, I'm not gonna think much about it. Let me just set this as a rule right now because I know I've felt this for a long time. I know it's something I wanna do. I know it's not a lot of work. Every day I should post a short form and a long form video, but the bar can be super fucking low. It can be an edited or unedited single clip for both of them, but like one to short form platforms, Instagram, TikTok, one to YouTube. And I say long form, it can still, you know, be just five minutes. It's okay, or much longer. It can be like one unedited five minute clip, which is like the content kind of, it can be like a very simple daily dev log or something, or just like daily life update. I don't know, it could be anything. I'm gonna set the bar super low, but I should just post something every day. I think that's an important principle to incorporate. And does that mean I need to create it every day? Well, no, technically I could create ahead of time and schedule.
But if I don't do that, then I will, you know, minimally have to make one every single day. Yeah, I think that's a good rule to just decide. And I think I should just decide tomorrow I have to get my haircut. I have to make content. And then everything else is like whatever. You know, I'm gonna follow my training routine. I'm gonna do some deep work, try and focus on OpenClaw developments. But it's like, it's really the two, the content and the haircut because I've been ditching that forever. But then also like adding that in without losing my mojo on my other things. So keep the same routine going with everything else, but add in that. Yeah, I think that's good. But here I'm just overthinking so much like to find the right place and the right time, whatever. Let me just decide to do it tomorrow. Figure it out tomorrow, bro. That's some good rules to decide. Okay, I'm gonna go to bed, or I am in bed. I'm gonna stop talking because I don't think I can really fall asleep while I'm talking like this. And even after I stop talking, I think it's gonna take a while before I fall asleep. But yeah, good night.
Tuesday, February 17, 2026
5:36 PM · 70:45
Essence

The speaker is grappling with an obsession with OpenClaw, struggling with setup decisions and error messages, while also envisioning a future where AI is a constant, seamlessly integrated companion.

Summary

The speaker is currently consumed by OpenClaw, feeling a strong pull towards it that borders on obsession. They acknowledge this tendency to hyperfocus on new interests, particularly within tech and AI, and are trying to balance leveraging the technology with not getting completely sucked in. A major source of stress is the constant need to keep up with OpenClaw's rapid developments and best practices, a recurring problem they face with various topics. They are particularly frustrated by the difficulty in making informed decisions due to a perceived lack of clear information, leading to decision fatigue. A significant decision revolves around where to install OpenClaw: on a cloud VPS, a new dedicated machine like a Mac Mini, or their existing MacBook. Despite online guides universally recommending a VPS or new machine for security, the speaker is strongly drawn to installing it on their MacBook for greater integration with native apps and personal data. They suspect that many users actually do this, but tutorials focus on more complex, secure setups to stand out. Having previously used a VPS with a "one-click" install that proved problematic and difficult to update, they are leaning towards a fresh install on their MacBook to debug issues and enable seamless updates. While considering buying a Mac Mini, they view it as an impulsive investment and believe their MacBook could serve as a stationary PC when needed. The speaker is also experiencing unpredictable error messages with OpenClaw, which they suspect are related to complex tasks or specific inputs like web searches or images, but lack a consistent pattern. This further fuels their desire for a fresh install. Looking ahead, they envision a future where AI acts as a live, present conversational partner, orchestrating background tasks and tool calls without interruption. 
They dream of an AI system that captures every thought and sensory input, though they acknowledge this is far-fetched and potentially unsettling. More realistically, they ponder the feasibility of a 24/7 active AI voice conversation, considering the practicalities of devices, battery life, and form factors like AirPods, Meta Glasses, or a custom wearable with a speaker. They also desire an easy way to feed digital files to their AI, imagining a future where a completely new operating system seamlessly integrates AI into every aspect of their digital life.

View full transcript
All right, OpenClaw is definitely filling my mind these days. I'm becoming a little bit obsessed. I need to get a grip, stay in control, stay in reality, leverage the amazing technology without getting too much sucked into it. Because it is very fascinating. I do notice myself caring only about that and nothing about anything else, which is not good. It's okay. I don't need to care about anything if I don't want to care about it. Actually, that part is fine, I think. The problem is that I feel this huge stress of trying to keep up with everything happening with OpenClaw and the newest, best methodologies and use cases and extensions within it. And this problem is not something that's just, like, appearing now with OpenClaw. Like, I have the same problem all the time throughout time, just with different topics grabbing my interest for that time. So it'll be like, you know, AI coding in general, or it'll suddenly be world news for a period. Well, not really. So it's within like the tech and coding and AI field that's been like capturing my attention the most. It will occasionally be within, like, drama and online personalities, maybe, but not usually. Anyways, with OpenClaw, I've become very obsessed with like how I can get the most out of it and get it to work as well as possible right now. And I'm also frustrated by some of the decisions I need to make where I find it just like hard to find good information and to make a decision. And so it's like an infinite thing where I never find enough information to make a good decision, which means what I actually need to do is just make a decision. So it stops fatiguing me. And there's not really any right or wrong. You just kind of have to weigh your options and then pick something. A big decision that's bothering me is, like, whether I should install and use OpenClaw on a cloud VPS or buy like a new machine, like a Mac Mini or something, to set it up on. Or if I should just install it on my MacBook that I use already.
And really, it's the last option that's the most appealing to me. But I'm frustrated because nobody recommends this. Every single guide I read about OpenClaw anywhere online, they show setting it up on a VPS or a new or a fresh machine or like a blank machine that's not your regular one, and they all recommend this as well. Nobody recommends doing it on your normal machine. I still kind of want to do it, and I'm curious if a lot of other people actually do that as well or not, because I can't really find information about it. Maybe I haven't tried hard enough, but I feel like I've looked a little bit. And nobody's recommending that or showing that. But I think it's because it's genuinely quite easy. Like, you just run the one command in the terminal, follow the setup instructions. And it's just like kind of the people that do that, they don't need to show that they're doing that because that's like the default thing. And that's why all the tutorials, they show like something different, you know, to stand out, they show here's how you can do it and also make sure it's much more secure. So I think actually it's misrepresented where it feels like it seems like online, like everybody's doing this in this extra secure way with a VPS or a custom machine just for this. But actually in practice, a lot of people are just installing it on their machine. I don't know. That's what I think. And I might just end up doing the same for myself, to be honest. I've been running it on the VPS so far and it has been working fine. I just have this one issue, which I don't even know if it's related to the way I've set it up or not. But I just find it hard to debug, so I kind of just want to do a fresh install. And since I tried the thing that was supposed to be the super easy, you know, one click, get a VPS with OpenClaw already installed, it was supposed to work super well. But still, it wasn't really that good.
Like, there were some points about the configuration which should have been presented better. I just feel like it was a shitty delivery. And then I'm afraid that it's not configured correctly and that it would be better off just installing it the normal way with the command. So I kind of just want to do that on my machine. I also kind of just want to buy a Mac Mini because I can afford it. But it is a bigger investment compared to what I usually do, so it feels like kind of impulsive if I just go out and buy it. So right now, I feel like I should take some more time to just weigh the decision or research into the security. But right now, what I'm thinking I'll probably end up doing is actually just ditch the hosting or VPS that I have. I still have like some of the stuff that's set up is unrelated to OpenClaw. It's more about just my data ingest automations. That's set up either way. I have some memories now in OpenClaw, not that much, but I have some. I might want to port them, but honestly, for the potential complexity, I could also just ignore it because it's not that much information there. Yeah, I'm thinking I'm just going to ditch that one, install OpenClaw just on my normal Mac, because then I want it to literally be able to automate things in the native Mac apps that I'm signed into, like my reminders and voice memos and stuff, and also like just navigate my browser as me, where I'm logged in and everything, which is where the security concern comes in, but that's also where another level of automation potential comes in. So I think I'm just going to do that, to be honest. And I think initially it's fine, but then the risk is prompt injection or someone just like hacking your gateway or something. Not that I really know what gateway means, but someone just getting access to send whatever instructions they want. I think I can limit that first. Like I set it up on my machine and then I would set up like the Telegram connection.
And I assume that one is safe. I have no idea how it works, but it's part of the default configuration, so I just assume I don't need to have any security concerns about it. And then anyone can control it if they're signed into my Telegram, which nobody is, it's just me. So that's still within the normal security of everything that I have. But at that point, you can kind of remote control my Mac from my phone via Telegram, which I think is fine. But then I've got to be careful if I set it up to do a lot of proactive automations, especially when I'm not there, like overnight or while I'm away. The potential issue is not necessarily it getting hacked, but it proactively completing a task by performing some sidesteps along the way which I actually don't want to allow, but have just forgotten to think about. That's the issue. So I think I should honestly set it up with full permissions and the max automation potential, but then just be a little bit careful with how I actually automate things. Yeah, I think that's probably the way to do it. Or I could consider having one on a VPS and one on my local machine, because of the drawbacks of having it on my machine. One is the security thing, but even if I set it up on a Mac Mini, I honestly would want to sign it into my Apple and Google accounts so that it would have access to a lot of my stuff, and then the security concern is just the same. The other issue is that my MacBook is not always online. Now, I could just leave it home, permanently plugged in, always on, running as a computer. I can honestly do that. And the more I'm able to outsource stuff to OpenCloud to do automatically, the less I need to be on my MacBook anyway. So it's honestly fine; I can just turn it into a stationary PC, but with the option of anytime unplugging it and taking it with me. 
So I think that's probably what I'm going to do. I think I should maybe research the security a little bit more, or maybe just do it. I don't fucking know, man. At some point I just need to make a decision, I guess. So that's one big decision that's bothering me right now. But I'm also getting these error messages from OpenCloud sometimes when I text it, and it feels unpredictable; I'm not able to tell when it will happen. It feels like it happens when I ask it to do more complex stuff, but then I've also had times where I asked it to do even more complex stuff and it did it just fine. It seems like it's maybe related to when I ask it to do specific web searches or send it an image, but other times I've done that and it's worked fine. So really, I haven't found any predictability with it or any way to reliably recreate the error. And so I'm wondering again if I should just do a fresh install, if that's going to help. And also for the possibility of updating it, because new updates are being released so fast. It seemed like the one-click install option I got wasn't really set up for being updated, because you should just be able to ask OpenCloud, like, hey, update yourself. And that didn't work on mine. It said that the way it was set up, it couldn't; I would have to uninstall OpenCloud and then reinstall it through like a Git approach. And at that point, why even have it on that one-click setup machine when I have to uninstall it and reinstall it? Well, I don't know if there's still a lot

trigger functions, trigger systems, like initiate work, manage work and also, you know, get responses and include that in its context and in its way of acting without actively being blocked by itself, which means that any major work is kind of put on another agent or another thread or another computer or something. But the main one you're talking to is never, even for a millisecond, busy doing something else; it's always live in the conversation, but under the hood it's orchestrating tool calls or different processes or whatever. And that enables such a powerful way to just do everything, because if you have that, then the tool calls it does underneath don't need to be any better than they are right now. They're going to get better, but the speed of things, which for some things is slow, is still tolerable as long as the AI is very live and present in the convo, because as it's working in the background, it can still stay live with you. Perhaps you can talk about a couple of things in parallel, like the way I'm now trying to manage multiple parallel coding agent jobs on my computer. I could do that through voice and just manage all of those, and then while they're all going, we could just talk about politics or something. And of course, we expect the AI to have general knowledge about the whole world, so it could just answer without even looking it up. But then for tasks that need doing, it just does them at the same time. Of course, I would like my AI system, my digital system, to also capture every thought I have and every sensory observation and every input my body gets. But I think that's further away, so I'm not really thinking about that right now. I imagine you could have something connected to your brain somehow that literally can capture every sensory input and every thought. 
And then the system can be aware of that, and you can discuss it with the system at any point, or you can do processing based on it. That seems very far-fetched right now, and scary in a lot of ways, because I don't know what it would enable. That's too far-fetched. But what is real is listening 24/7, for sure, and active AI voice conversation. I don't know about the feasibility right now, but possible, yes, or very soon. But feasible, like how practical it is, I don't know. You know, what you have access to, and the price of that, and the energy usage, and what device you would run it on. Is it on your phone, or would you have a custom device for this? That's the thing of it being able to speak back to you, always being with you. Capturing audio, that's not too bad, that's all right. But if you want it to always be with you and speak with you, then you have some options. Either you've got to always have something in your ear, like AirPods or the MetaGlasses or something, or you've got to have something on you with a speaker so it can speak out loud. And then it starts getting more intrusive, like other people can hear it, but in a way it would be fine. Both are fine and both are possible right now; feasible is the question. I can just wear my AirPods all day, though I have to charge them at some point and switch to a different headset. Or you could have a device on you that's always capturing audio and also has a speaker so it can make audio. I know there are devices being commercially sold that capture audio all day. For the speaker thing, I'm not so sure how that would work, but possible for sure; practical, I don't know. Yeah, let's imagine you could have like a pendant around your neck or on an earlobe or something that does this: captures all the audio and also has a speaker so it can speak to you or play sounds for you, or even play music at that point. 
Although for that, you might want a headset just for higher audio quality. Or of course the MetaGlasses, which, do they have a microphone? I think they do actually. And also the speakers that are pointing to your ears. They don't have all-day battery right now, but technology improves. So those are some interesting alternatives. The more interesting question is not what's possible with the tech right now, but rather, if you have the tech, what is the ideal form factor? Like, what is actually a nice experience? Sometimes you would like it to answer in a headset form where nobody else can hear it, or where you can hear it even if it's loud. But sometimes maybe it's nice if it's not just in your ears, but rather out loud, as if you're talking to a person. But then this could also be dynamic. If you have kind of a smart home setup, then the thing could be with you, let's say around your neck, and it plays on the speaker. Or let's say you're wearing the MetaGlasses; they're not really in your ears, but they have a speaker pointing into your ears, so it could speak to you kind of privately all day, and it's with you. But then as you enter your home, it of course knows that it's your home, and it switches the audio output to a speaker array in the house instead, or in the room you're in or something. That would be cool. It would also be cool if it could see everything that I see. Again, that's kind of possible with the MetaGlasses right now, but they don't record for that long at a time. I would say technically possible right now, but feasible or practical, not really yet. But getting there. And then there are other ways of sending data to it. 
So any manual way, which is currently any file that I get digitally, any PDF or file or something, I want a way to easily send it to my AI. Now, I don't know how that would look in practice. It could be that you have a Telegram bot you send it to, or there's a folder on your Mac where anything you drop gets sent to it. Or from your iPhone, you want it to appear in the share sheet somehow, as a contact or even a separate app. But more generally, at some point this is going to become less and less relevant, because this is based on the operating systems that we're currently using. What I'm imagining is really a completely new operating system, which means what I'm talking about right now is really just an adapter between the old and the new operating system. If we want to strictly stick to the new operating system, then the concept of having a file on my phone and sending it to my AI kind of disappears, because any file I'm accessing is going to be through my AI. My AI is already going to have it; my system is already going to have it. I'm not going to have any files that it doesn't already know about. Either it shows me the file, or if it's sent to me somehow, it's still not really sent to me, it's sent to my system, and then my system shows it to me. I imagine in the future, when you contact someone, we'll probably always have a kind of direct way to contact someone if you want to. But for someone to keep their contacts kind of open, it's more like you say to your AI assistant or system, whatever, that you want to contact some person, and then your AI assistant goes and contacts their AI assistant. And then intent is transferred between the humans to their personal assistants and then between the AI assistants. 
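The drop-folder option described above can be sketched very simply. This is a minimal polling version under assumed paths: `watch_dir` is the folder you drop files into, and `inbox_dir` stands in for wherever the AI system actually ingests from (both hypothetical); a real setup on a Mac might use FSEvents or a file-watcher library instead of polling.

```python
import shutil
from pathlib import Path

def forward_new_files(watch_dir: Path, inbox_dir: Path, seen: set[str]) -> list[str]:
    """Copy any not-yet-seen file from watch_dir into the assistant's inbox.

    `seen` persists between calls so each file is forwarded only once.
    Returns the names of the files forwarded on this pass.
    """
    inbox_dir.mkdir(parents=True, exist_ok=True)
    forwarded = []
    for f in sorted(watch_dir.iterdir()):
        if f.is_file() and f.name not in seen:
            shutil.copy2(f, inbox_dir / f.name)  # copy2 keeps timestamps
            seen.add(f.name)
            forwarded.append(f.name)
    return forwarded
```

You would call this on a short timer (every few seconds); it is deliberately the dumbest thing that works, since the interesting part lives on the ingestion side.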
These terms that I'm using, I don't know which term is good to use, like whether you say AI assistant or your AI agent or just a system or your new autonomous operating system. There's no right word to describe it, and I don't know if it's one thing or a composite thing. It's hard to talk about; I don't know which words to use. Yeah, whatever. Another thing: I want my system to be very aware of exactly what my knowledge is, because then it can tailor information to me perfectly based on what I already know and what I don't know. Which means information can be rewritten to the perfect zone, where it can use terms I know to be more efficient and compact, but terms I don't know or concepts that I'm not familiar with will need to be explained, so that I can consume information at an optimal rate and learn at an optimal rate. I'm just gonna take a quick break from this recording. Alright, I'm back from my break. Now continuing on the point of: I want the AI to, as much as possible, know exactly what my state of mind is and what my knowledge is, so that it can tailor information to me based on what I already know versus not. And also my personality, the way I like to ingest knowledge or be communicated with, and also my intelligence. I don't know if there are other factors. Now, kind of building on that, one thing would be super cool, and I don't know if this is building on that or a precursor to it or a specific use case within it. Specifically, it's kind of an upgrade on something I'm already doing. So I'm already listening to a lot of YouTube videos. I actually use YouTube primarily as my podcast platform because I have YouTube Premium. And I do this a lot while I'm traveling or just walking or something, you know, where I

focused on OpenCloud. I wanted to know the leading use cases that people have documented online, the way they're using it, techniques and updates. And it's kind of random whether my YouTube algorithm is gonna show me that or not. And within a video, it's kind of hard to tell from the title and thumbnail whether it's gonna contain new information or stuff that I already know. So the algorithm does a great job, but ideally, and I really have no idea how hard this would be, and I guess it's a continuum, something you can make a first version of and improve over time, I want my system to have the type of personalization algorithm that we see in YouTube or Instagram or other places. But it should have that across my whole life and my whole online presence. Of course, there are different platforms right now for different things, but I want to get to where I'm not really using a platform for anything. Everything comes to me through my AI system, exactly the way I like it presented. I can pick different personalities or styles if I want to, but it's still within my preferences and contextualized to me. And that means, for example, when I'm walking around, doing chores or running, where I would usually just listen to YouTube videos while doing the things, instead I would want that to be a feature of my AI system that I just kind of initiate. Maybe I have my always-on assistant and I just tell it, okay, now I'm doing this, so stream information about this to me. Or it's a separate app within my system or whatever, I don't know. But then it should just speak information about stuff at a high rate, exactly tailored to what I know and don't know. 
It could be news, or a certain topic, and it should be mostly one-way, or it could require minimal acknowledgements from me, ones I could give while I'm doing my chores, to see if I'm paying attention, because sometimes I get distracted and miss something. And then it's too much work to skip back a few seconds. I do sometimes have to open the YouTube app and tap back, which is very annoying. I wish I could do that from the lock screen, but there's no way to go back just a few seconds; you have to slide the scrubber, and if it's a 30-minute YouTube video, I'm not able to do it precisely enough. I'll go back multiple minutes and it's annoying. So I'd like a way for my AI system to just, I don't know a good term for it, kind of stream information into my brain like that. Just speak the way a podcast would, where it just rambles, but I should be able to pick the topic I'm interested in, or just let it do anything it thinks I'm interested in. And then I should be able to guide it at any time super easily, just by verbally speaking some instructions, telling it to pause, or asking about what it just said, or anything. It should be super fluid. I think this concept, if I formalize it a little bit more, is super fucking cool. I've never seen an example of this, but at the same time, it is kind of possible with the tech right now. I don't know about the quality, though, because you're gonna have to make your own algorithm for finding information which is valuable to you, and these algorithms are super complex. And you want it to work across the internet, and then it needs to be synthesized and tailored to you, which is, I don't know, possible, but it's a prompting skill and using the right tools. 
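The skip-back complaint above becomes easy to solve once the system itself is the speaker: keep a log of what was just said, with timestamps, and replay from an exact offset instead of scrubbing a progress bar. A minimal sketch (the class and method names are mine, not any real API):

```python
class SpokenLog:
    """Remember what the assistant just spoke, with timestamps, so a
    command like 'skip back ten seconds' can replay exactly that span."""

    def __init__(self):
        self.segments = []  # (start_s, end_s, text), in playback order

    def add(self, start_s: float, end_s: float, text: str) -> None:
        self.segments.append((start_s, end_s, text))

    def replay(self, now_s: float, seconds_back: float) -> list[str]:
        """Return the text of every segment overlapping the last window."""
        cutoff = now_s - seconds_back
        return [t for (s, e, t) in self.segments if e > cutoff and s <= now_s]
```

The replayed text could then be re-spoken verbatim, or even re-synthesized more slowly; either way the interaction stays verbal, which is the whole point.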
And then you need it to be able to speak constantly, like a monologue. Or it could potentially create audio files ahead of time, as a little step to get there. For the final solution that's not good enough, but you know how it is when you make technical systems: take it step by step. That's something I could do. I could prepare streams of information like this ahead of time, as audio files that I can ingest in my own time. So the system could, every night, reason about what I seem to be interested in right now and what I've been consuming that day, and prepare ahead of time some audio files, like 30 minutes or an hour each, about different topics, perfectly tailored to me, so they're ready. If I want to play them, they're already prepared. Then it's just a matter of streaming the audio, which is very easy; it doesn't have to do the live synthesizing or the research or the text-to-speech. In the ideal system, it's truly live, and this would be an in-between step, but it honestly sounds very feasible right now. Not too hard to set up and still hugely practical. It's just a question of cost, if you want to do that every night and have a lot of availability and variety, and then how well it will actually be able to personalize it. That depends on how much info it has about me and my skill in setting up the system. But a version of it is very, very feasible right now, actually. What more do I want? Now that we have these AI tools and they're developing, I want so badly to have a lab. You know, I just wanna be Iron Man, essentially. I wanna have a lab where I can also create physical things, not just digital things. And I wanna have a 3D printer and a laser cutter and other tools where I can convert my ideas into physical manifestation. 
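The nightly-preparation idea splits into steps: pick topics from the day's consumption, research and write a briefing for each, run text-to-speech, save the audio files. The research and TTS steps depend on whichever services get wired in, so here is only the topic-selection step, the part that is pure logic, as a sketch (the input is assumed to be a flat list of topic tags extracted from the day's history):

```python
from collections import Counter

def plan_briefings(recent_topics: list[str], max_briefings: int = 3) -> list[str]:
    """Pick tonight's briefing topics: the most-consumed topics of the day.

    Each selected topic would then go through research, script writing,
    and a TTS pass into a ~30-minute audio file; those later stages are
    not shown here.
    """
    counts = Counter(recent_topics)
    return [topic for topic, _ in counts.most_common(max_briefings)]
```

A frequency count is the crudest possible interest model; the point of the step-by-step framing is that it can later be swapped for something smarter without touching the rest of the pipeline.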
And then as the tools get better, I could really just describe something in natural language or with my body language, maybe with a camera watching me, to try and create the thing, and then interact with it in certain ways to build the 3D file or whatever, and then get it actually made or printed or engraved on physical material. That would be super cool. Even just having a 3D printer and the software that exists right now to design 3D models would be something, but there's a learning curve to these softwares. If they can be made more natural and intuitive using AI, so that they're more welcoming to me without having to spend a lot of time learning the software, that would be so cool. I haven't looked into what exists right now. I'm sure there are some things that are made, but I don't think it's that good yet, really. That's my guess based on what I've consumed so far, but again, I haven't looked specifically into it. Then what else do I want? Well, yeah, I do want the system to know exactly what I know. I want it to know more, but also, what I mean by know what I know is not that it has the same knowledge as me, but that it has knowledge of exactly what I know and don't know. It has all the world's knowledge, but then it knows how much of the world's knowledge I know or don't know, and so it can tell it to me. I've already kind of explained it. One very practical step towards that I can do right now is to have the system just parse my YouTube history, because that's my main platform for ingesting information, especially if I combine it with my Google search history or my other AI chats, to be honest, because a lot of what I learn goes through other AI chats. But now I wanna set this up; the tech is already out. You just need to create the system where I don't use other AI chats anymore, I only use my one system chat for everything. 
And if I wanna use a specific AI provider or model, I just set up my system to use the API for that model or provider, but still through my system, so my memories get saved in the same place and my chat history is saved. And I can do that now. So I can talk with Gemini if I want to, or OpenAI, or Anthropic, but through my system, through my chat. I save my own chat history, I build my own memory layers on top of that, and so my AI system always has all my context from everything I've consumed. So then I just stop doing normal Google searches; I always do it through my system. I stop doing things in other AI apps like ChatGPT; I just do it through my system. At some point I'll even stop consuming YouTube like that, because the information would stream in through my system. But I think that's further away, because the way the YouTube algorithm understands me very well is gonna be very hard for me to replicate with my own system. But eventually, replace that too. So then all of the information I take in will at some point come either through my system or from other people in real life, in which case I want my system to capture it through audio recording and potentially visual recording, although that's further away. For it to record my POV 24/7, or some camera around me that films me, that's more complicated. It depends on where you are and stuff, but filming my POV 24/7, or at least while I'm awake, is not feasible right now. Possible but not feasible, as I think I already said earlier in this recording. But at least the audio part. And I think I can get far with that if it can be good at speech detection and identifying different persons. Maybe it will just help that I give it so
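The "one chat, any provider, one memory" idea above boils down to a thin router: each provider is just a callable, and every exchange is appended to a single local history file regardless of which model answered. A minimal sketch, with providers stubbed as plain functions rather than real vendor SDKs (wiring in actual Gemini/OpenAI/Anthropic clients is left out):

```python
import json
import time
from pathlib import Path
from typing import Callable

# A provider is anything that maps a prompt string to a reply string.
Provider = Callable[[str], str]

class ChatRouter:
    """Route a prompt to any registered model provider, but always append
    the exchange to one local JSONL history so memory stays in one place."""

    def __init__(self, history_path: Path):
        self.history_path = history_path
        self.providers: dict[str, Provider] = {}

    def register(self, name: str, provider: Provider) -> None:
        self.providers[name] = provider

    def ask(self, provider_name: str, prompt: str) -> str:
        reply = self.providers[provider_name](prompt)
        record = {"t": time.time(), "provider": provider_name,
                  "prompt": prompt, "reply": reply}
        with self.history_path.open("a") as f:
            f.write(json.dumps(record) + "\n")  # one exchange per line
        return reply
```

Because the history is a plain append-only file owned by the system, any memory layer built on top of it works the same no matter which provider produced a given reply.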

Call it something with AI? That term gets used for so much. I'm just gonna call it my life OS, my life operating system. Henrik OS, life OS, I don't know. It's going to be using leading AI tech, but also just tech, you know? Not everything is necessarily AI. Some of it is just systems, some of it is just databases, algorithms. Yeah, this term AI, the meaning is so... is bloated the right word? It's just thrown around so much. A lot of the developing tech we call AI because it's smarter tech than before. Anyways. While I'm still using these operating systems, let me fucking try to automate them, because the technology is kind of here right now to do it. It's just now, really, these browser-use and computer-use models that can literally, like a human, click buttons on your computer or on the web, navigating these interfaces that are made for humans the way a human would, which means I don't need to do it anymore. And of course, all of these operating systems and interfaces are going away over time, because they're gonna be replaced with autonomous systems where you don't need them anymore, because we all have... that's a shitty word, but we all have our AI systems or AI assistants or AI operating systems. And it's all just connected in a different way, which is more intuitive for humans. But there's a path in between, and I'm still dependent on using these operating systems I have right now. But I can automate them, and I'm gonna do that, because I don't wanna be tied to my computer. I wanna feel very free of technology. I'm, of course, building technology around my whole life, but at the same time, I wanna feel very free from it. 
I wanna just kind of be in reality and then have the technology suddenly there in every aspect, or suddenly, depending on what you're doing. At times suddenly, and at times be all encompassed by technology. And I guess at times no technology at all, although I know as I'm building more and more of this, being occasionally around no technology is gonna feel very primitive, very Stone Age. And I'm gonna get fucking anxious about the data that's not being automatically collected because I don't have my devices on me or whatever, and feel like I'm lacking a lot of data, even though I'm just literally existing in real life the way humans have always done. But that's a worry for the future. Yeah, so I haven't looked into it, but it's pretty recent that we have ways to just automate normal human use of our phones and computers. It has probably been out earlier for Windows and Android, but I think it also exists for iOS and macOS now, where it can navigate the phone and the computer the way you do as a human, for tasks. There are still some things I would just want to use the phone myself for, especially anything with other people, more social things, when you're using digital tools but with people. Like I airdrop photos to someone, or do something on Spotify, or take some photos. This is stuff I'll still be doing kind of manually with my phone for a while. But on my computer, a lot more I can automate, because I don't really wanna be on my computer, man. It's just good because it has power, it has the internet, it has a display, a keyboard, input output. But once I get more of this stuff over to my AI system, which is available multimodally, the Mac will just be a part of the system, and I'm gonna interact with it in so many ways completely separate from the Mac. 
But then the Mac will be this kind of like stationary dock where I can go a little bit more into things. I have like the display, the keyboard is interactive. I don't really know how much or little value and importance the Mac will have. I'll have to see. It will gradually get less and less important. I have this like, I could see a way where it just becomes very little important very fast and I just really stop using it. Like, what do I need my Mac for, to be honest. I use it for coding tasks and work tasks and stuff, but the more and more I get this system automated, like it can really do all this shit for me on the computer. The only thing I need to do is communicate my vision of what I want programmed or made or systemized. And I can just do that mostly through a voice conversation. Then I will want to consume a lot of information, which I can do through listening to it from any device, but I will sometimes want to read and then the Mac, just having the computer display, it's kind of bigger. It is a natural way to like read information. That's something I would probably use it for. And then, you know, sometimes I wanna make something, but I don't know if it's technically possible yet or if there's like public APIs that make it possible or whether, so I need to like do research on these things, but the system can automate that. Then I need to learn the answer for myself, but that can just be spoken to me, to be honest. If I have like AI that really understands what I'm wondering and what I wanna make and can do the research properly for the intent that I have, then I don't need to read much back. It can just tell me, okay, I did the research. Here's like the state of the world. And like, I can choose to accept it or not. Like, here's the state and here are my options. And sometimes it's hard to accept. You want something to be possible, but it just isn't. But still, yeah, I don't really need a computer for that. Like a desktop. 
Like a computer, again, what is a computer? Is your phone a computer? Well, it kind of is, but usually when we say computer, we don't mean the phones; we mean laptops and stationary computers. Hmm, fuck. Right now I'm so excited for this future. And there's a big question of cost, of course. These systems that I'm gonna set up, how much are they gonna cost to run? That's gonna be a big consideration for me. I imagine in the future I'll make a shit ton of money and won't really need to worry that much about it, at least for my personal use case. But as of right now, I don't have that much money to play with. So it's kind of a thing where I wanna just take a chance and invest my money before I have any. And I think this is kind of the perfect opportunity to do it. But you know, it's a risk, so you gotta think about it. Maybe I could get funding, like I've been thinking about before. This would maybe actually be the thing. Actually, yeah, I just realized that now. All the time before, when I wondered whether I should get funding from someone to do things before I make money, I didn't feel like I was disciplined enough or working hard enough or doing something cool enough to be deserving of that. This AI stuff might actually be the time where I could see that as a real option, something to go for. That's an interesting thought; I never realized that before right now. I still wouldn't feel quite ready for it right now. I wouldn't feel deserving of it. But that could definitely change if I go more seriously into this. Then there's a big question with all these things I wanna have: should I build them myself or just wait until they're made by someone else? Because I'm just one dude. I'm technical, but you know, I have my bachelor's. I'm not a master's, PhD, researcher. I haven't worked in the tech industry or the AI industry or anything. 
There are a lot more people in the world who are more knowledgeable about this than me, a lot of people working on this who are smarter than me and who have worked on it longer. And there are also just more people: teams working together, with more funding. So I shouldn't waste my efforts on things that bigger organizations are gonna figure out for us and push the leading research on. But there are certain things that they're not gonna do, or not for a long time, because they're trying to hit a broader, more commercial market and therefore have to be more careful. Whereas I, just moving for myself, can take more risk and move much quicker and do more niche things. And I don't need to worry as much about privacy and security and safety and stuff. But this consideration can be hard to make. There are clearly some things I should do and some things I shouldn't do. Some things I could maybe work on for a long time, but honestly, it's just not worth it; better to wait for a big actor to solve the problem and then use their solution or their service. But I'm not sure, of all the things I've listed and the things I wanna do, what I should focus my efforts on, versus what I should, although I wish it existed now, just literally accept that I need to wait for. So I need to organize this list of things I wanna do: for each of them, how possible they are right now or how far into the future they are, how feasible they are, how complicated or potentially expensive they are. And which I can do right now and should do right now, versus which I can do, but it's gonna take

Like kind of through my life operating system, because it would, you know, automatically edit the video and stuff. But yeah, anyways, that reminds me of another use case I really want, which is also possible now, but I haven't looked into it and I don't know how good it is. It's definitely possible right now, which means I need to do it: automatically editing and posting videos for me. Here's how I want my workflow to look. The only manual thing I wanna do is record the videos, because I need to physically be there speaking to the camera. Everything else should be automated. After I've recorded it, whether it's one take or one clip or multiple takes or multiple clips, it shouldn't matter; I've just recorded it, and everything else should happen automatically. With the way my operating systems are now, I'm usually recording on my phone, which means it's saved to Photos, and overnight it's synced to iCloud. Now it's in the cloud, and another system can pull it out of my iCloud, copy it over to its own storage, look at all the video content, understand what is what, like which clips are multiple takes and where it should be cut, then cut it, edit it in other ways, add text, whatever, and finally publish it to whatever platforms, fitting it to different form factors if it needs to. All of that could be automated. In the future, maybe I don't even record it on my iPhone; I'd just have a camera and a screen connected directly to my system, so whatever I record goes directly in, saved as a file in my system. But for a while, the recording is gonna happen through my iPhone, just because it's practical. And this is why I've started already: I've set up the automatic sync which gets the photos and videos from my iCloud to my own system, my own database or cloud storage bucket. And that's super cool. 
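One concrete piece of the "understand what is what" step is grouping the pulled clips into recording sessions, so that multiple takes of the same video get edited together. A simple heuristic sketch: clips recorded within 30 minutes of the previous one are treated as takes of the same video (the threshold is an arbitrary assumption, not anything from a real editing tool):

```python
def group_takes(timestamps: list[float], gap_minutes: float = 30.0) -> list[list[float]]:
    """Group clip timestamps (seconds) into recording sessions.

    A clip starting within gap_minutes of the previous clip is assumed to
    be another take of the same video; a bigger gap starts a new session.
    """
    sessions: list[list[float]] = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= gap_minutes * 60:
            sessions[-1].append(t)
        else:
            sessions.append([t])
    return sessions
```

Each resulting session would then feed the downstream steps (pick the best take, cut, caption, publish), which are where the actual video-understanding models come in.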
But none of that understanding the video or the editing pipeline or the publishing, I haven't done any of this. Especially the editing, I'm the most curious about. How good is that? But that's gonna be interesting to explore. And then here's one of those questions where like, should I set this up very manually? But then what if there's a service right now where you just dump the raw takes, the one video or the multiple takes or the multiple videos, and it just figures everything out. I think that probably doesn't exist, but then that's gonna be created at some point. So then am I wasting my time by making that myself? Well, there's a lot of context that goes into how do you actually want that video to be edited and published? A lot of like, you know, it depends on your personal goals and your personality and what kind of content you wanna give out, what image you wanna give out. So I guess my system would have all that context, like know what I'm trying to do with my content. Whereas for an external platform, I would need to pass that context somehow. Like what is it actually gonna do with these clips and make it like how much is it gonna edit it? Like which tone is it gonna go for? Like which speed? How many like graphics should be on top? I don't know. And then like, do I even wanna post content or do I wanna think like more in the future that it's not even relevant? Yes, I do because I think content, I don't know if it's gonna be around forever, maybe, but at least a long time. And it doesn't really matter what I think because the world moves slow and struggles to keep up. And so others are still gonna be consuming content for a long time. And therefore I wanna make it for them that are gonna consume it. And for myself as well. I guess I kind of find it kind of cool, like documenting in a more explicit way. I don't know, do I even wanna make content? I could draw an argument for why it's completely pointless, but honestly right now, yeah, I do. 
I feel like I respect myself more if I make some content. I feel like I'm doing something more real, putting something out into the real world that other people can see. So I feel like it's a status thing as well. Like there's proof for other people that I'm actually doing something. And for me right now, that feels like something I want and kind of need. And so I can make like content about this, have like a series or segment where the content is like trying to automate these videos that I'm posting as much as possible. So I'm making videos about trying to automate the videos. That would be super cool. And then the first automated videos could be super shit, but they're still just documenting me making the automated videos, but they are also automated, but they're shit. And then over time they get better as I improve the system. That would be super cool. It's like a recursive series. I could start that right now. Same with just like managing my social media. I just like posting videos there and also potentially photos or posts. But I feel like that value is also just going down and down and down. Videos are a little bit more interesting. I like YouTube videos as well, but like it's all going down. Like I, you know, there's so much content being posted. I don't watch any of it, but even though I watch a lot of content, especially like YouTube videos, you know, I want to find the highest leverage, like the most information dense and the most like intelligent content. Because most of it is just slop. But still it's like economically valuable now and it's gonna continue being so for a while. And also just like being real is always valuable. Yeah. So I need to figure out what I wanna do with OpenClaw or AI automation or like system making like right now. Like there's a lot of ideas. What should I do right now? What can I do right now? How do I make progress and not just stay in this dreamy phase and also how do I make sure I'm not too obsessed with this?
Like it's not the only thing I'm consuming or thinking about. Like I wanna stay in reality as well just to have like a balance just to feel like I'm doing well in life in all areas. I don't wanna just be... I wanna be a fucking like world tier leading in AI automation and like just automating my life and setting up elite personal assistants. But at the same time, I wanna be very much in real life. Like not a techie nerd, just very much in real life. Like very good with people, just like healthy, super fit, strong, fast. I just wanna be able to like dance and laugh and have fun and play video games just for fun and like be really good with people, like party with people, be like a Chad. So I gotta figure out how to balance this. That's where things where I gotta, you know, fix my hair. Maybe I'll upgrade my wardrobe. Not really personally, my wardrobe is fine. Here's a big thing. Fix the skin, which I kind of did now, which is good. Could still be a little bit better. And then just be more aware of it. But now all my acne essentially went away. It's amazing. I'm ready to make sure I don't have like dry flakes and stuff. I need to be more social and around, more around people who I like vibe with really well. So I'm just like vibing, just living, having fun with people because I don't do that that much these days. And with these days, it means now, but also before. And it's been like a lot of my life and it's a big problem because I think life should just be richer. Yeah. And then in general, like with my goals, for example, with the training, make sure I'm collecting data. First of all, that's like the easiest thing I can do is just make sure I'm collecting data because then it can always be processed later. But then also make sure I'm utilizing the tools that exist right now to process it, analyze it, give me advice and then make sure I'm disciplined to actually follow the advice. And so I can actually track that I'm actually getting better. 
And yeah, that's enough for this recording. I got to get ready for my training today. And yeah. I'm not sure if this is going to work, but I just want to test something because currently these recordings are getting synced to the cloud and then the system that I set up with OpenClaw is pulling them in from the cloud buckets, the inbox, as I've called it. It's pulling it into, what's it doing? It's running this ingest script, I think every night where it's transcribing them with OpenAI Whisper or GPT Whisper or whatever, with Whisper, and then storing the transcripts in our database. I'm not sure if it's really seeing the transcript though, like the actual AI agent that's doing the work. But just as a test to my OpenClaw, if you hear this or like read this right now, I just want you to send me a message where you just say like, just confirm essentially that you heard this or that. Say like, hey, I'm just letting you know, I saw from the voice memo you recorded at this time, which was like this length that you at the end mentioned this thing. And so I'm just sending you the message just to confirm that I saw it, which I think it's not gonna happen. I think it's not really gonna see this or even if it saw it, I don't know if it would send a message or if there would be like a prompt injection risk there because I guess there kind of is, but now it's just me recording to myself. And so this is gonna come from my iCloud, you know, to the OpenClaw. So therefore I think it's fine. But yeah, Open
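The nightly ingest flow described in this memo (pull new recordings from the inbox bucket, transcribe them with Whisper, store the transcripts in the database) could be sketched as a loop like the following. The callables are stand-ins: the real bucket client, Whisper call, and database writes aren't specified in the memo, so they are injected here as assumptions.

```python
from typing import Callable, List

def ingest_inbox(
    list_inbox: Callable[[], List[str]],   # returns audio file keys in the bucket inbox
    fetch: Callable[[str], bytes],         # downloads one file's bytes
    transcribe: Callable[[bytes], str],    # e.g. a Whisper call in the real system
    store: Callable[[str, str], None],     # writes (key, transcript) to the database
    already_done: Callable[[str], bool],   # idempotency check against the database
) -> int:
    """Nightly ingest as described: pull each new recording from the inbox,
    transcribe it, and persist the transcript. Returns how many files
    were processed; already-ingested files are skipped so reruns are safe."""
    processed = 0
    for key in list_inbox():
        if already_done(key):
            continue
        transcript = transcribe(fetch(key))
        store(key, transcript)
        processed += 1
    return processed
```

The idempotency check matters for a job that runs every night over the same bucket: a rerun after a crash should not re-transcribe (and re-pay for) recordings that already made it into the database.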
6991eaf1c2bf710dab436bf6de479f20d93fcb6612208aba44a80105e87845c8_f04ac52e2a3d.m4a
Tuesday, February 17, 2026
9:23 AM ยท 12:12
Essence

The speaker is contemplating whether to give up on their current inkjet printers due to recurring issues with dried or clogged ink, despite the existence of maintenance hacks.

Summary

The speaker is frustrated with the high cost of ink cartridges and the problem of ink drying out or clogging if not used regularly. They believe this is the current issue they're facing and are asking for information on how long it takes for ink to dry and if there are solutions. They've already looked into some hacks, like dipping cartridges in water, printing test pages, properly powering down the printer, keeping it plugged in for automatic maintenance, and maintaining humidity. They're hesitant to give up on their current inkjet printers entirely, suggesting that perhaps they just need to use them more frequently. The speaker is considering troubleshooting two black ink cartridges in parallel, performing the same fixes on both, and hoping one survives. They acknowledge that they haven't tried all the suggested methods, like soaking or blotting, especially for the black ink, which is new and shouldn't be dry. They express a desire to approach the problem systematically rather than just waiting for it to resolve itself, and are looking for advice on whether to disassemble the ink cartridges.

View full transcript
Prices with ink cartridges are expensive and they need to be replaced. The liquid ink can dry out or clog if not used regularly. I think that's the problem we have, right? Hey, can you look up how long it takes before the ink, like, dries out and becomes a problem? But even when that happens, if I choose to keep using it normally, there are some simple methods you can use to fix it. You basically just dip them in a bit of water. From what I see here, all of these things count against inkjet for me. So I'm a bit tempted to give up on those two entirely, because maybe it's just that purchase. We'll see; like I said, I've tried a few things, but I'll try a bit more. Now I'm just asking how often you should use it. And the next question is whether there are any hacks once it has happened. I've looked at that already. Okay, there are. Print the test page. Power down properly: always use the printer's physical power button rather than the wall switch. Keep it plugged in: many modern printers perform automatic maintenance cycles in sleep mode. Maintain humidity: extremely dry environments accelerate drying. If dealing with a clog, or are you looking for a new printer? That might be what I had.
There's nothing to go on, but we need to get some spares. But it's not that often you'd need to. I think it's built into the printer, on all the ones we have. Yeah. That's not certain. If you'd used it in the meantime, then. Yeah. Maybe. Just don't give up yet. Right now I'm just waiting. Like, adding a bit of system to it is good. No, I mean, one option is to just wait, like, not dig in and just let nothing happen. But maybe doing something systematic, planned, is better. We don't need to stress. But what about disassembling the ink cartridges? Over to you. I'll just do a breakdown. Not really a breakdown, more a summary. In one sentence: a bit too early to give up, a bit too early to run out and buy a new printer. One idea is to take these two black cartridges and troubleshoot them in parallel: do the same thing to both at the same time and hope that one of them survives. Yeah, we'll go with the black ones. I'll try that. I haven't tried soaking or blotting, but on these black ones it could be something else that's the problem, because we now have new black ink that shouldn't be dry. But then again, it had worked in the meantime. Yeah, maybe. Just don't give up yet.
391b6c6bd83455223f1338ae1ef2a92aae3ab5f3bc79d527647daeb4a1d819e1_5d0622087c9a.m4a
Monday, February 16, 2026
8:46 PM ยท 28:31
Essence

The speaker reflects on a successful but busy day, grappling with balancing ambitious personal goals and social opportunities against a strict new sleep schedule, while also envisioning an advanced AI assistant to optimize his life and interactions.

Summary

The speaker reviews a successful day, having completed planned tasks including a long gym session with running, though he feels time disappeared quickly and wishes he'd accomplished more. He dismisses this feeling as overthinking, acknowledging he did things correctly and shouldn't feel shame. He's trying a new routine of waking at 5 AM to work, but a late group session and a dinner invitation with a new acquaintance, Flynn, threaten to disrupt his sleep schedule. Despite the potential for fatigue and reduced recovery, he plans to push through, prioritizing the valuable social opportunity with Flynn, whom he admires for his confident, extroverted energy, seeing it as a chance for personal growth and socialization. He also details his diet for the day, which included oats, recovery drinks, a protein shake, bread, stew, and some chocolate and Norwegian buns, noting he's not strictly tracking macros but feels his current balanced approach to food, allowing for treats, is working well both physically and psychologically. He's also making progress on organizing his home and tackling a long list of personal tasks. The speaker then shifts to his interest in cutting-edge AI tools, particularly OpenClaw, and how top users are employing it, often by setting up teams of specialized sub-agents. He envisions creating a personal AI operating system, which he's considering naming "Henrik OS," that would function like a Jarvis-esque assistant. This AI would be always-on, available in every modality, and capable of natural conversation while simultaneously performing complex tasks in the background by delegating to sub-agents or tools. He dreams of an AI that could manage multiple requests concurrently, like migrating a database and researching protein folding, while maintaining a fluid conversation. 
He desires this AI to be accessible via voice, ideally through a constant connection like an AirPods call, and even contemplates a device that records sound 24/7 for a "super memory" and objective self-evaluation of social interactions, acknowledging the privacy concerns but valuing the potential for self-improvement. He also notes a new voice AI he saw that understands emotion and sarcasm, which he finds more impressive than current mainstream options.

View full transcript
Right, day review. It was a very successful day today. I did exactly what was planned, essentially. I'm sitting here now feeling like the time disappeared quickly. I wish I got more done. But I think that's just like an infinite thing, because I know I kind of did things correctly. So, yeah, sure, I could have been slightly more optimal here and there, but honestly, there was just no point in, you know, feeling any shame over not getting enough done. Rather, I should just lower the ambitions. Well, no, I don't want to do that either, but I don't know, I'm overthinking it. Today was good. I did the workout, now with the running as well. You know, adding in more, that's cool. And the time went surprisingly fast. I mean, the run took a while, but I was just listening to podcasts. But then, you know, I did a normal full session after, so it ended up being two and a half hours in the gym. But the whole like strength session, you know, kind of just, I didn't feel any like lack from having done the running before. Like it was just good. It was good vibes. Yeah, I can clearly be like a long time in the gym and just, yeah, be going. Still really gotta get to fucking fixing my hair. It's gonna get done. I'm going to bed. I wish I went to bed slightly earlier, to be honest. I feel like that's a reoccurring theme. Now that I'm waking up at five tomorrow, I really don't know if waking up at five actually makes sense, because now I want to try this new program where I wake up at five and work instantly five to 6:30, which I think is cool to try for a week. But tomorrow I have the group session. It's pretty late. It's like 7:30 in the afternoon. And so even that alone would kind of put me past my bedtime by the time I come home. And then now afterwards, I'm gonna go to Flynn to get dinner at his place. Just because it's an amazing social opportunity for me. 
And so that's just gonna completely fuck my sleep schedule for that night after the most intense training session of all, where I know like I'll be feeling fatigued after, and also like the day after and stuff, I'll like feel it in the body for sure. So that's... That's like shit. It does not make sense to stay up at that time with the sleep schedule I'm trying to do now or to try and have the sleep schedule at that time. But I think, honestly, what I'm just gonna do, I'm just gonna ignore that, keep the same waking up at five, and just push through it. And then after this one thing, then I'll have time to, you know, adjust and do it properly. And so it'll like fuck me up for a day or two, but whatever. I'll still be able to operate everything fine. I know it won't give the body as much recovery as it should get from the training. But otherwise, like throughout life, I'll be able to operate essentially exactly the same. Like, it's not gonna limit me in any way. It just feels like I'm, you know, throwing gains away a little bit. Yeah, whatever. Just push through. Just do it anyways. Stick to the great new routine I'm trying to set, and then, you know, willingly fuck it up this one night, but it's okay, because it's a great opportunity to, like, meet a cool person. I think we're gonna get along great. I think it's... Yeah, it's an amazing social opportunity for me. It's gonna open up, like, we're probably gonna end up after this spending more time together or just going to the gym together or going out, which is, like, great for me to socialize. And he's a very extroverted, like, ADHD type person. He's, like, uh... Has a lot of sex, which is kind of weird to say, or he just... He has sex. Which, uh... I just feel stupid saying it, but I know he has that confidence, even while sober, just, like, confident, outgoing energy. It's exactly what I need in my life. 
It's exactly what I wanna learn to have myself and which, like, most Norwegians, most people, but especially most Norwegians, don't have. Most of them, like, even... Most of my friends have always had, like, you need to drink to, like, get out of their shell. He's just out there. It's perfect. It's exactly... What I need and kind of what I wanna be. I'm not saying I wanna be like him necessarily, but that energy, I definitely wanna have a lot more of that in my life, in me. So he's gonna be good for my transformation. I know this sounds a little bit weird, what I'm saying. Whatever. Other things. Food today. Breakfast oats. And then during the gym session, I bought two of these, like, kind of training recovery drinks with, like, carbs and protein. I drank them both after the run. I drank the start of the strength session, and then I bought another one after the gym session, so I drank three, actually. Then came home, had lunch, had, like, a protein shake when I came home. And then some, like, bread with butter and cheese. And then dinner was this, like, stew that my mom made. Took a picture of it. I'm really not, like, thinking closely about diet, you know. I'm supplementing clearly with some protein shakes and recovery drinks have, like, extra protein in them. But I have, like, no idea right now about my protein intake. It's... I think it's fine for right now, and, you know, as I've been for a while, just not think about it. Not overcomplicate things, you know, just try and live. But at some point, I will want to get some more concrete numbers again. But for now, it's fine to just continue with the same. I really have, like, no idea. Today I feel like I ate a lot, to be honest, especially with this dinner. I wasn't really hungry, then I started eating and I just wanted to eat a lot. Yeah, also, yeah, after lunch, I also had some snacks. I had a little bit of chocolate and these, uh... 
I don't know how to say it in English, but in Norwegian it's called fastelavnsboller. It's like buns, I guess, is what you say, with some, like, vanilla cream inside. I had, like, two of those kind of small ones, though. I was, like, craving a lot of extra energy and you... You know, it's kind of like junk food or snacks or whatever, but I really don't see it as a negative thing at all. I'm just, like, free. I don't crave this stuff normally, like, now I did. I think it's completely fine. I have, like, no guilt over really... Or maybe now I just... I'm, like, slightly wondering maybe that's, like, eating too much, whatever. But honestly, no, I'm not... I'm not gonna have any guilt over it. I know athletes eat sugar and chocolate and junk all the time because they just need to fuel their body with enough energy. Now, I'm not an athlete. I definitely don't want my body to be underfueled. Now, I don't wanna do, like, a dirty bulk either, but I don't think I am. I think I'm eating completely reasonably, honestly. So, yeah, I think I'm just not gonna think more about it. I'm not gonna overthink it. I'm just mentioning it. I'm trying to explain how I see it as of now. Now, this is just the voice message. It's hard to know, like, concretely, you know, exactly how much real food versus junk I ate and how much total food for someone to actually do an objective evaluation of, like, whether I'm doing it correctly or doing too much. But, um, I think I have a good mindset for it and a good balance now through just, like, being aware of this for a long time and having a generally healthy diet, but with allowed, like, treats or sugar here and there, usually combined with training. I think I have, like, a balance that works well where the body gets what it needs and it's not psychologically fatiguing at all. I don't usually crave more junk food type stuff. I'll have it here and there to enjoy myself or to fuel training. 
And, uh, I think, uh, it's just good in every way, to be honest. Yeah, now I've now limited my work more today and I'm gonna continue doing it this week. I wish I could just follow the same routine tomorrow and every day. I can do it every day, but not tomorrow since the group session is so late. And since there is a group session, it's like a different... So tomorrow is gonna be a completely different schedule. But I'll just have to adapt. I'm very happy with my routine, to be honest. I'm fixing up stuff at home. I got some more organized in my room today and there's still some more to do and I'm ordering some new stuff. I'm trying to, like, get my shit in order. I have this one reminders list, like, in the Reminders app. That's like, get my life in order and it's like a bunch of things I'm supposed to do. So I'm making progress on that today. Did some snow plowing as well for the fam. Um, yep. Still gotta work on the printer thing. I've just left that because I got so bothered with it. That's... I should definitely do that tomorrow. I do my work thing and then once that's finished and I have free time, I'm just, you know, gonna spend

I just wanna know the leading, like either software development workflow using like the best AI tools and it changes so quickly, which is why I feel like I need to like check it every day. Or like now with OpenClaw, like the world leading like OpenClaw use cases, like what people made. And honestly, I do feel like I'm pretty well caught up. Like I see essentially how the best people are using it. I know there's some stuff I don't see, of course, but in general, like I think through the YouTubers I watch now, I've seen like most of it. I should look more on X as well, cause there's a lot of talk about it there and my algorithm feeds it a lot to me. But now, so now I also still get videos about how to use like Claude Code or Antigravity, but I know it's irrelevant now because I'm using like an even higher power thing with OpenClaw. And so I see how people they like set up a team of sub-agents or like an org structure and then they have their usually their own, like the main agent is the one they're talking to and it's always delegating to the other ones. So the main one is always available and then the agents are like talking between each other and they usually specialize in different things like marketing, development. Then they'll assign different AI models to each of the sub-agents depending on their task and how much like brains they need in it. So it's a trade-off between like brain power or cost and then also different models are better at different types of work. And then they build their like, they'll call it a mission control, which to me just doesn't make sense. I would just call it like an operating system, like a life OS or a Henrik OS. Um I actually started discussing it with my OpenClaw. I call it life OS but I think I'm gonna change it to Henrik OS actually. And then I'll just refer to it usually as the OS, the operating system.
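The brains-versus-cost trade-off in assigning models to sub-agents could be sketched as picking the cheapest model that still clears a capability bar. The model names, capability scores, and per-token costs below are invented for illustration, not real pricing.

```python
from dataclasses import dataclass

# Hypothetical model catalog: capability and cost figures are made up.
MODELS = {
    "small":  {"capability": 1, "cost_per_mtok": 0.2},
    "medium": {"capability": 2, "cost_per_mtok": 1.0},
    "large":  {"capability": 3, "cost_per_mtok": 5.0},
}

@dataclass
class SubAgent:
    role: str
    required_capability: int  # how much "brains" this agent's tasks need

def assign_model(agent: SubAgent) -> str:
    """Cheapest model that is still capable enough for the role: the
    brain-power-versus-cost trade-off described for the sub-agent org chart."""
    eligible = [
        (spec["cost_per_mtok"], name)
        for name, spec in MODELS.items()
        if spec["capability"] >= agent.required_capability
    ]
    return min(eligible)[1]  # lowest cost among the capable models
```

A fuller version would also encode that different models are better at different *types* of work, not just different amounts of it, e.g. a per-role model preference on top of the cost floor.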
Yeah, here's some um visions or dreams I have for how I can like set up stuff, systems, which is probably going to be set up with OpenClaw and then also other tools. Just because that's OpenClaw seems to be the most powerful right now. Um yeah, I want like my AI that's always on, always available and it's available in every modality. I think I've talked about this already. So I imagine like a Jarvis, right? Like from the movie and you're usually gonna be talking to it. And you know, you, you're gonna have a very like human-like natural conversation, but you know, in the background as Jarvis is talking to you, it can also do any form of like smart work in the background. It can, you know, do all the work, all the work that smart AI agents can do. Now technically, it's not really the Jarvis doing the work. It's mostly just staying present in the conversation and it's just gonna send off like tool calls in the background and then occasionally read like responses or read stuff that it can read very quickly while still staying live in the conversation. So for example, I'll have a convo and then I'll ask you to like, you know, migrate our database or something. And he'll say, okay, I'm on that. And then while he's working on that, we can just keep having the convo. And then I say, yeah, also, could you research the leading protein folding technology? And then he'll say like, all right, I'm researching that. Meanwhile, do you wanna keep discussing moral philosophy? And then we have a conversation about that and then as we're talking, he's gonna be like, okay, so the database restructuring just finished. By the way, do you wanna keep talking or do you wanna follow up on that? Or you can be like, later, okay, the research finished. You wanna get the results or just keep talking? And so you can manage all this, you know, because then technically it's gonna be set up with sub-agents or like tools that he calls. 
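The "Jarvis" pattern described here, where the main agent stays present in the conversation while long-running work happens in the background, maps naturally onto async tasks. This is a toy sketch of that shape: the delegated jobs are stand-ins (short sleeps) for real sub-agent or tool calls, and the event list stands in for the live conversation.

```python
import asyncio
from typing import Dict, List

async def run_job(name: str, seconds: float) -> str:
    # Stand-in for a delegated sub-agent or tool call.
    await asyncio.sleep(seconds)
    return f"{name} finished"

async def main_agent() -> List[str]:
    """Kick off long-running work as background tasks, stay free to keep
    the conversation going, then report results as they complete."""
    events: List[str] = []
    jobs: Dict[str, asyncio.Task] = {
        "migrate database": asyncio.create_task(run_job("migrate database", 0.02)),
        "research protein folding": asyncio.create_task(run_job("research protein folding", 0.01)),
    }
    events.append("still chatting while jobs run")  # the conversation continues
    for task in asyncio.as_completed(jobs.values()):
        events.append(await task)  # surface each result when it lands
    return events

results = asyncio.run(main_agent())
```

The key property is that the `events.append("still chatting...")` line runs before either job completes: the main agent never blocks on the delegated work, which is exactly the "okay, I'm on that, meanwhile do you wanna keep discussing moral philosophy?" behavior the memo describes.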
Um and then, yeah, so primarily through voice, which means that I wanna be able to from my phone. I wanna have it as my Jarvis, as like, on my phone, for example, like a contact in Telegram that I can call anytime and even video call anytime. And then I don't know if I need video from him. I guess that would be cool, but also like I can show him video right over him. But also even pass calling, that's kind of like you need to pull out your device and call and stuff. I just wanna have him always on, to be honest. Cause for example, if I'm wearing my AirPods, just have like a call going the whole time. He's mostly just there in the background listening unless I like say something where, you know, he understands that he's supposed to respond. But then at some point, you know, it's gonna be kind of draining and you kind of just want your own device for that, which is what I'm assuming someone will invent in the future, but that's kind of a limitation where it starts getting hard for me to do that myself. Also, I'd suggest that I wish I just had a device on me that recorded sound 24/7 and synced it to the cloud. Every conversation I have where, you know, I don't think to record it and I don't wanna be dumping what happened later, you could just try and record it all the time. It's gonna happen in the future anyways. I wanna be like way out of the curve on this and I know people are doing it already. Like these devices, they exist. They need to charge sometimes, but they do exist. Maybe I should order one of those, but it's just, they're kind of obvious and I wish I could have something subtle cause it's not socially accepted at all. And of course, yeah, it intrudes majorly on people's privacy. I don't know, I feel like for myself, if I could just have one and nobody could know, I would probably do it. I don't know if that's unethical. Um I guess I don't care so much. 
Like I, I just find it, it would be so useful for, you know, just super memory, for being able to discuss with my AI in hindsight how I actually um behaved in a conversation. Like my perception versus more of an objective evaluation. Like did I do well? Was I talking too much or was I present in the conversation? Was I truly engaging with the other person or just kind of yapping? These are like things I wonder about sometimes. It would be cool if I could like ask an AI about that. What else? Yeah, I saw a video on YouTube today, like a release from a company. They said they have this like new voice AI that really understands, or some new AI that really understands emotion, and they have their like real-time voice AI as well. They like combine their systems. Now you can have this real-time like streaming voice that like responds really quickly, but it understands the natural flow of the conversation, like when it's supposed to jump in or when it's supposed to stay quiet. And it understands emotion and sarcasm and stuff like that. It seems like a very good voice model. And from the video, which is, of course, you know, kind of an ad and very controlled, seemed more impressive than the ones I've seen from ChatGPT and uh Google currently. Or I just haven't really tested them with sarcasm, to be honest. Yeah, I don't know. Uh I think this one was better at understanding, like pauses, whether to respond quickly or whether to wait even. Like even, it's not just a timing thing. Sometimes I'll turn quiet, but it's obvious that I'm like thinking or looking for a word or something, so it doesn't need to jump in just yet. That one seemed better at that, which gives you a more natural conversation. And that's something that I could then maybe plug into my system. I don't know about pricing, of course, or how well it actually works or if I even need that. But like with my OpenClaw, I really wanna set up at least being able to call it. I could set that up, you know, super quick.
I have no idea about the cost, but probably manageable. I really want to set up a way for it to work off of subscriptions instead of API usage, so it's more controlled. I wanna set up a way to control the cost more, where ahead of time we're more aware of how many tokens or dollars something is gonna use. And potentially when I ask it to do a job, we could like estimate the cost beforehand and then report afterwards. And then also see if the estimate fits, and over time we could store that data and compare it and then improve our um algorithm for uh token usage prediction, for like prompts that trigger, you know, a longer cascade of work. I do notice it's burning through quite a lot of tokens. Then I wanna get more data live synced. One thing that hit me today, yeah, to get my YouTube history, if that's possible, which I feel like it probably is. Like YouTube data is generally pretty open. If it could know like which videos, because that's essentially my platform mostly for information consumption. I'm there more than I am listening to any other um podcast platform. I'm there more than I'm like reading news online or scrolling on X. Like most of my information that I get in is through my YouTube algorithm. Just having it play mostly just audio and not video because I have YouTube Premium. If it could see that, because it has like my screen time data and stuff, my timing data. I'm not sure, on my Mac it can maybe see which videos I'm watching, but through iOS I don't think it can. And most of it is through my phone and it's kind of in the background. So I think we need to get the data from YouTube. If we
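The estimate-then-report loop described above could look something like this minimal sketch. The chars-per-token heuristic and the `CostTracker` class are assumptions for illustration, not how any real agent framework bills or predicts tokens:

```python
# Sketch of cost control for agent jobs: estimate token cost up front, record
# the actual afterwards, and use the accumulated actual/estimate ratio to
# correct future predictions (e.g. for prompts that trigger long cascades).
class CostTracker:
    def __init__(self) -> None:
        self.history: list[tuple[int, int]] = []  # (estimated, actual) pairs

    def estimate(self, prompt: str) -> int:
        """Naive baseline of ~4 characters per token, scaled by past error."""
        base = max(1, len(prompt) // 4)
        return int(base * self.correction())

    def correction(self) -> float:
        """Average actual/estimated ratio over history; 1.0 with no data."""
        if not self.history:
            return 1.0
        return sum(a / e for e, a in self.history) / len(self.history)

    def report(self, estimated: int, actual: int) -> None:
        """After the job finishes, store the pair so future estimates improve."""
        self.history.append((estimated, actual))

tracker = CostTracker()
est = tracker.estimate("migrate the database")  # first guess, uncorrected
tracker.report(est, actual=est * 3)             # job triggered a long cascade
# The next estimate for a similar prompt is scaled up by the observed overrun:
assert tracker.estimate("migrate the database") == est * 3
```

Storing the (estimated, actual) pairs is exactly the "store that data and compare it" step from the memo; swapping the average ratio for a proper regression would be the natural next iteration.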
Sunday, February 15, 2026
10:44 PM ยท 11:20
Essence

The speaker reflects on a satisfying day of unwinding and personal tasks, while simultaneously grappling with the challenge of balancing work, personal life, and fitness goals in the upcoming week.

Summary

The speaker had a great day, enjoying a break from routine and screens, which included cleaning and organizing their room โ€“ a long-delayed task. In the afternoon, they went to the cinema with a friend to see a movie they were excited about, even managing to sneak into a second movie afterward. Despite the enjoyable day, they're already looking ahead to restarting their routine next week, feeling a strong desire to accomplish many things but also a lack of time. They plan to strictly limit deep work sessions to two 90-minute blocks daily to make space for "life management" tasks like haircuts, errands, and personal projects. The speaker also grapples with how to incorporate more running into their fitness routine, finding it challenging to fit in alongside strength training due to logistical issues like hilly terrain and the effort involved in skiing. They acknowledge that their room cleaning took longer and achieved less than expected but are still happy with the progress and plan to continue tomorrow. They've outlined a tentative strength training schedule for the week but note the lack of dedicated endurance training outside of one intense session. They're considering adding easy runs to strength days but worry about sabotaging gains. Looking ahead to tomorrow, they're trying to figure out how to structure their deep work sessions around the gym, contemplating an earlier wake-up time to fit everything in. They also express a desire to restart content creation but struggle with finding the time. The day's food consisted of oats for breakfast, a large taco lunch, and cinema snacks for dinner, which they indulged in, noting the low protein intake and some digestive discomfort later in the day. They also mention a persistent throat-clearing habit. The memo ends abruptly while discussing a recent purchase.

View full transcript
Happy with the day today. It was good to unwind and do something different than the routine day. Get some time off from the screens and stuff, everything I wanted. The day went exactly as planned or intended, essentially. So yeah, great. I spent quite a lot of time in my room, just kind of cleaning and organizing everything, something I wanted to do for a long time and never got around to. And then in the afternoon I went to the cinema with my friend Verbeur to just watch a cool movie, Marty Supreme, which I was excited to see because I've seen like trailers and clips of it on YouTube and Instagram previously, and so I was excited to actually see that one in the movie, so now I got to do it. And I mean, I'm saying today was good that I got off the screen, but then I went to the cinema. But whatever, I think it was fine. It's not like too much screen, and it's got like a different type of screen experience, so. And it's in my projector, but I don't know if that really matters, but. We even did this fun trick where after you watch the movie, you can, they usually just check your ticket at the entrance, but then once you're in, you can actually walk in between the cinema halls. So after the movie, we just hung around a little bit and then went into another cinema hall and watched another movie almost all the way through. It's crazy. But I'm also excited to start my routine again next week, start work again. There's so much I wanna do, but I feel like I don't have enough time. I was supposed to like prep today, maybe pick a place to get my hair cut tomorrow. Haven't gotten around to it. I was supposed to get my room more organized, haven't gotten to do it yet. More like stuff to clean up, more stuff to do, so. I need to be very strict this week with not working too much, so I have time for like life management as well. So I think I'm literally gonna limit it to two 90-minute deep work sessions every day. That's it. 
And then the rest is like training, and then, you know, like life in between tasks, but then like some time spent on catching up on like the life tasks I wanna do as well, which can be with clothes or hair or like messages I wanna send and stuff, or maybe just things I wanna explore online. So there can be some more computer time specifically dedicated not to working on the project that excites me most, but like other stuff. Also, I was thinking like today I would plan all my workouts for the week and maybe try to incorporate more running somehow, but I haven't gotten around to that either, and I don't know how I would incorporate more running. Like I think I need to have these one-hour easy runs in the day as well, but at different times than the strength sessions, which means they need to be like either mornings or afternoons, opposite of when my strength session is. And that's just impractical to fit in since I'm like only training inside in the gym right now, so I would have to either, like maybe I do gym early in the day and then afternoon I'm at home and I just run outside, go do that. Or I could like go skiing, I guess, but it's just like so much extra effort with that equipment and stuff. I just can't be arsed, to be honest. And it's hard to go skiing, like easy, without getting very fatigued. Also running in the place I live, it's so hilly, it's kind of hard to run at an easy pace. There's a lot of things I wanted to do. Like messing around also with the cleaning and organizing of the room. It took longer than I expected, got less done than I expected, but still I'm very happy with the result. I got some new cool ideas. I'm cleaning some stuff up. I think I'm just gonna continue with the same also tomorrow. So the plan is to, well, I have like a, I mean, I say I wasn't able to plan the workouts for the week, which I wasn't, but still I know like what I'll end up doing.
Just based on like daily driving, I'll do push Monday, Tuesday I'll do the Hyrox group session now with friends, which is cool, with Flynn and Albert. And that's like super intense. I'm gonna push hard. It's like a max out effort for me, cardio wise. So that's cool. Then Wednesday, pull, Thursday, legs, Friday, push, Saturday, pull, something like that, probably. Sunday, rest. I'm probably gonna do that. And I could replace the legs with like more running, but I don't know, maybe I'll just do strength legs in the gym. I don't really know. And then as you see, in here there's actually no more running except for what I do in the Hyrox session. So, which is, I mean, I'm shortchanging my endurance, but it's only one session in the week and it's hard, so I should probably have more. So maybe I should add in runs also during some of my strength sessions. I'm afraid that that's gonna like sabotage the gains though. Maybe it's fine if I do like an easy run, even if it's one hour, if it's like, you know, the pace is eight kilometers an hour or nine kilometers an hour, maybe it's fine. But also I don't know if I really get, if it's like worth it for the time spent, if I actually get something out of it or whether I should just not fucking spend the time on it. But then for these days, I don't know about the whole week, at least the start of the week, max two 90-minute deep work sessions a day and then like get home and do other stuff. I also wanna start doing content creation again, but again, it's just like, fuck, when am I gonna do it? It's more work. But yeah, I don't wanna just be doing this work. I wanna be documenting it and sharing it as well. But I feel like I gotta plan it to make myself do it. Like tomorrow I'm just gonna go and do the work on the Jarvis project or on OpenClaw again, because that's what excites me the most. Okay, and then we'll see.
And the way I'll structure these days, I'll have the first 90 minute deep work block before gym and then the second one after. I gotta figure out how to time it with my meals and stuff. I'm thinking ideally, actually, I'll start waking up earlier so I can do the first 90 minute sesh and then like breakfast and then drive down to the city with my dad when he drives around like 7:15, 7:30. Then do the second deep work session then, then go to the gym around, you know, noon and then just go straight home. Then I gotta start waking up at four or something, 4:30. Or maybe later, maybe five would be sufficient. Work from five to 6:30. Yeah, I could do that, start waking up at five. I might do that actually, but tomorrow I have six. So that means I only do one deep work sesh before the gym, and then the second one could like be after, but then I want to like eat and stuff. So I would have to bring like a proper lunch or be willing to spend a lot to get a proper lunch to do that second deep work session, like out and about. Or I would have to go straight home and then do the second one at home. I'm actually not sure what I'm gonna do tomorrow. I'm probably not gonna end up packing any lunch, so I'll just see how I feel after the first work session and the gym, whether I want to stay or go home. I'm hoping to kind of get ahead of schedule and plan better, but I don't know. I feel like I should just go to bed now and try not to think too much. Food today, breakfast, oats, lunch, tacos. That was like leftovers from yesterday. I ate quite a lot actually, big taco lunch. I had like three tacos, but that doesn't say much because it depends on the size, but I made like three. And we take them in like tortillas filled up with meat and cheese and everything. And then dinner, there was no real dinner. It was just the cinema snacks. I just kind of went crazy there and bought sugar-free sodas.
Then we had like a big popcorn, which we shared, and he brought like a mix of nuts and chocolates and also chili nuts. So indulged quite a lot there, but I think it's like fine. It's like Sunday, nothing, you know, to think about or to worry about, just kind of let loose. And overall, given that I didn't eat dinner, I think it kind of equals out anyways, but low protein though, so that might be a concern. And then I did notice, I have just this like coughing thing that I said annoys me. Noticed that, you know, multiple times throughout the day, also during the cinema, just these like really feeling the urge to clear my throat. Which I feel like should be unnecessary. Like I shouldn't do that at all when I'm perfectly healthy and not sick at all. And then I did get some gas as well, but I think mostly after we started doing the cinema snacks. I think the gas I'm having now is mostly from after I had that, like an hour after we started eating that, and getting more of it. And not too bad, but some gas, which even smelled a little bit. Yeah. Earlier in the day, I don't remember. I didn't notice anything. And then I made a cool purchase today. I was, I had this green shirt, which I got from running the Oslo marathon this summer, where you could like
Sunday, February 15, 2026
7:31 AM ยท 34:45
Essence

The speaker reflects on personal well-being, work discipline, and exciting new ideas for automating content creation and leveraging existing AI tools, while also acknowledging the need to balance work with self-care and real-world tasks.

Summary

The speaker begins by documenting recent food intake, noting a decreased appetite, and observing improvements in skin and lips after using lip balm. They express satisfaction with their work discipline throughout the week, viewing working too much as a positive problem stemming from passion. They've maintained consistent training and successfully adjusted their sleep schedule to wake up earlier, appreciating their adaptability. The speaker then expresses a desire for a dashboard to review their week and brings up two persistent minor health issues: frequent throat clearing coughs and a consistently runny nose, especially when transitioning from cold outdoor to warm indoor environments, or during certain exercises. They acknowledge neglecting real-life tasks like getting a haircut due to intense focus on projects, and realize they need to either complete or discard tasks from their reminder system to maintain its effectiveness. The speaker is excited about a new project to automate video editing and posting from their camera roll using their existing Jarvis system, aiming for consistent, low-quality output that can improve over time, and plans to explore this next week. They also consider how to better balance daily work with learning about new AI workflows, suggesting dedicated time for exploration at the start and end of each day. Finally, they question the efficiency of building their own memory system when tools like OpenClaw already exist, deciding to prioritize utilizing the best available AI tools, specifically OpenClaw, next week, while still valuing their unique data syncing pipeline.

View full transcript
Thinking out loud. First, there's a few things I want to document so that the system has access to information. And then I want to just think, these are the things I wanted to say, kind of on record here. I had food yesterday. For dinner, we made tacos at home. Breakfast, I took pictures of. It should be obvious from that. Was there any lunch in between? Not really. I just took like an orange and a Pepsi Max. I feel like my appetite was lower yesterday. And even this morning now, I'm not really that hungry either, which was interesting. Skin is still getting better. And then I took the lip balm before bed last night and it seems to really have helped as well. Because my lips were kind of fucked yesterday. They're not so bad today. So that's good. Then I have now been kind of locked in all week working. I think Monday was kind of a recovery day where I had to do very little screen time. But overall, I've been very good. I've been disciplined. I've been working. I've had that, you know, the problem is mostly on working too much, which is a fucking amazing problem to have. You know, that I'm working on something that I'm so passionate about that I struggle to put away. That's cool. I still got to get better on that. But you know, what a cool problem to have, you know. And then the training, I'm training consistently. I just want to get smarter with my training, but like I'm still training. So it's great. I've been sleeping well. I changed my sleep schedule this week to be earlier so I could wake up early, like with my family and join like my dad when he drives to work every day. Just sit in the car on the way down. So I used the alarm a couple of days, got sleep a little bit shorter, but now I'm more adjusted. So that's great. I'm really grateful I'm able to just adjust my sleep schedule like this, like without much effort or thought. People say like they have to wake up early or they have to wake up late. Like, I don't, well, I can understand people are different. 
For me, it doesn't apply. I can adapt easily. That's fucking cool. Yeah, I want to review the week a little bit, but since, I mean, always the days kind of blur together, so it's harder to remember. And I don't, I couldn't maybe remember, but honestly, I feel like it's going to take a lot of mental effort and I don't even want to bother with it. But I wish right now I could just see a dashboard, or maybe on the canvas in the system. Just some form of dashboard that's like going to show me the week that passed in as much or as little detail as I want. You know, either like a very low-detail, you know, overview summary of the week or like very high detail from every day. That would be cool. Yeah, one thing I've been wanting to mention for a long time to kind of document it here in recordings, but I've just forgotten about it every time. It's a health thing and it's very small, but I wish I could fix it. It's that I do these like coughs, these like small coughs throughout the day. Now I'm just going to do an example. It's like to kind of clear my throat. And I'm not sick right now. I'm perfectly healthy. And well, I haven't been sick for a while now. Like when I'm sick, I do that stuff much more because it depends on what kind of sick, but usually I have something in the throat. I feel like when I'm healthy, I shouldn't be doing that almost ever unless, you know. And then yeah, when I'm sick and then I kind of get out of the sickness, then gradually, usually I have it in the throat, but it gradually goes away. But then I feel like at some point it's supposed to go away all the way. But I'm usually left like permanently. Like I'm saying it now, but honestly, I'm pretty used to it. Like, I think large parts of my life are like that, like many months or years or maybe always or maybe not. I'm not sure because I'm not that aware of it because I'm pretty used to it. But I just like a few times throughout the day, I have to like really clear my throat.
And I would think that that's, I haven't looked into it at all, but I would think maybe it's not supposed to be like that. Oh, I'm just gonna light a candle in my room for a nice morning vibe. Alright. Also, another thing is that my nose is always a little bit runny. Runny, runny, runny, runny. And it's not a big problem, but it is a thing. Like it's, and it's more of an issue for me in the winter in Norway. Like I walk around outside and it's mostly fine. Sometimes while I'm outside, it'll like run a little bit down and I have to wipe it. Then the biggest issue is whenever I've been outside and then I go inside, I think it's the temperature difference or it could be air humidity or something, but I think it's the temperature mostly. Once I go inside, it starts running a lot more. It seems like there's something that's built up that's kind of stiffened a little bit, maybe along the edges or something. And then it like melts, it seems, when I get inside. And it starts running more and I have to usually kind of like lean my head back to make sure it runs the other way, so I can kind of swallow it. Get it to run into my throat and swallow it the other way instead of down outside my nose. Also sometimes when doing exercise indoors, like I'm doing the Hyrox sessions every week now, I get the same issue a little bit. It varies how much, but whenever a big part of the session there is running, I don't really have an issue, but then we might do, it's like this SkiErg where you stand, I don't even know how to describe it, but it's the SkiErg machine. It simulates staking in skiing, staking in langrenn in Norwegian. I think it's called like poling or double poling or something in English. That one you put your head down a lot and so I notice then my nose like starts, at least last time, starts like running and I have to all the time like pull it in like. But that's hard while you're like breathing heavy and it's intense.
So I need to kind of like take a break to take my head up and kind of let it pour in, but then that's impractical. Or I can try and do while I'm running and it's possible, but it's kind of like hard. Run and like take my head up. Because it seems when I just kind of like blow it in or pull it in with air, it doesn't go all the way and it just goes kind of up and then it starts running down again. So I have to really lean my head back to actually get it like out of the nose and into the body. Also like when I swim, for example, when I was traveling, whenever I hop in the ocean or the pool, usually there's like more snot that I need to blow out, but it's not a big issue because you can just kind of blow it in the water. But usually also after we've swam, I get the same kind of like runny nose throughout all the time, like pull it in to like really get rid of it. And it seems most people don't have that in the same way. It's not a big issue, but it's like a thing. So it's something to like note down. Okay, I can't remember anything else that I really wanted to blabber about. Or anything else that I've been wanting to document here lately. I really need to get my hair cut. I've been saying it for a long time. I've been busy working on my projects, which I'm very passionate about, which is cool. But I've forgotten about real life tasks, cutting my hair. Also like my reminders, I've filled them up with tasks. The reminders app and the whole point of it is that, you know, I can dump stuff there so I don't have to remember it and the system is working great. But it also requires that I sometimes go and check it or that when I get the notifications, I actually respect them. But I haven't really been doing that. I have kind of looked at it though and I see more and I see, okay, nothing is that high priority and therefore I can put it to the side and focus on the project. So really it has been working correctly. 
It's just that I think I've deprioritized the things for a little bit too long now, or for at least long enough where I can't keep doing it anymore. I need to go through that again because if I just keep doing it forever, then the system doesn't work anymore. That means I either need to do the tasks or I need to say, you know what, it's not important enough. I'd rather just keep working on things I care more about and therefore like delete the reminder or the task. It shouldn't be left as like a thing that's on the list. Like I'm supposed to do it, but then I also always end up not doing it. It's like a horrible in-between state to be in. Yeah, I think today I've been, I've been locked in throughout the week, honestly. I have been. That's fucking cool, man. I want to do the same next week, and in order to keep it sustainable, this Sunday I need to make it very different. So I think today I'm gonna want to be more like moving around, not sitting still so long at one time. Nothing like big because I'm, I'm not training today. It's a resting day. But I think I can still like take a walk. I don't need to stay completely still or I'm not sure if I should. I

It's good because of the consistency, but I can set the bar for that content very low, I think. And it's like, now I'm making these projects, I'm excited about it. I don't need to do much with the content, I don't need to edit much if I don't want to. I should just share what I'm doing. I just, I really wish I could make the videos more visual and engaging and cut out the bad parts without having to do the work manually, so I really wish I could automate those, so that's something I need to look into. But now it's sick though. I was wondering, oh, I actually realized now. I was always thinking, before I record a video, I wanted to get them sent to an editor automatically, but I didn't know how. But now I have the system, I didn't even realize. Because with the Jarvis setup, I set up my iCloud library sync, and it's working. I haven't tested it with videos, I can test it today. So now the Jarvis system actually gets all the photos and videos I take automatically, which means, oh, that would be a fucking fun pipeline to work on, trying to get it all the way to understand the content. So I want it to get, from a whole camera roll, what's actually part of a video versus what's just random stuff. Edit it together in a simple, easy way, and post it, the full pipeline. And I don't care about the quality, the editing can be bad, but then it just takes it all the way to post. So the only thing I do is record, and maybe make voice memos or send instructions where I inform more about the video, but even that, it can probably just infer from the actual video clips. Just in the video clip where I'm talking about the content, I could just intro it and be like, okay, this is gonna be a video about that.
And then say an instruction to Jarvis, just like people do to their editor. Like I can just say to my editor, you know, I start the video, I'm making a video about the snow in Norway, and I say in the beginning of the video, like, hey editor, this is what I'm thinking for the video, and then now we go. And then if I have to take multiple takes, it just understands that from the transcript or just looking at the video. Oh, actually, fuck, yeah, that's a cool project, yeah, I should definitely do that next week. But then, I think that pipeline is gonna be complicated. I don't think there's good tech to really do it smoothly. But hey, there's good enough tech to start and to do it at a low level, and you know what, that's good enough. And even just doing that, it's such a cool thing to make content about. And then, you know, the videos are shit, but then you'll see it gradually improve and it's just fun. That's a great thing to build in public, yeah. I definitely wanna start posting more. I think I can just, don't worry about the quality at all. The quality can be shit, the point is just to post and to share something, and the quality is gonna get better over time. I actually don't care that much about the risk of posting low-quality content. That's actually fucking cool. That's the unique thing, that I don't care that much. I need to utilize that. Since I've gotten used to that, I've posted some low-quality stuff. I know now with the people I follow, you know, whether they care or they don't care. I don't really care, to be honest. I'm completely fine with serving them low-quality content. Either they're not gonna see it, or they're gonna see it and understand the process, or they're gonna see it and actually think it's cool, or they're just gonna unfollow. Either way, I don't care anymore. Well, I do care if they see it and think it's cool. But then it's a positive thing. Fuck yeah, now I'm excited. Oh, shit. I don't know, this idea just got me super excited.
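The take-selection idea described here, inferring the editor brief and the final take straight from what's said on camera, could be sketched like this. The `Clip` structure, the "hey editor" cue, and the first-sentence heuristic for spotting repeated takes are purely hypothetical illustrations:

```python
# Sketch of the camera-roll-to-post pipeline reduced to one decision: read
# spoken cues out of each clip's transcript to separate the editor
# instruction from the footage, and when a line is re-taken, keep only the
# latest take of it.
from dataclasses import dataclass

@dataclass
class Clip:
    filename: str
    transcript: str

def plan_edit(clips: list[Clip]) -> dict:
    """Split clips into an instruction brief and the takes to keep."""
    instructions: list[str] = []
    takes: dict[str, Clip] = {}  # first sentence -> latest take of that line
    for clip in clips:
        text = clip.transcript.strip()
        if text.lower().startswith("hey editor"):
            instructions.append(text)          # spoken brief, like to a human editor
        else:
            key = text.split(".")[0].lower()   # crude "same take" heuristic
            takes[key] = clip                  # a later take overwrites an earlier one
    return {"brief": instructions, "timeline": [c.filename for c in takes.values()]}

plan = plan_edit([
    Clip("a.mov", "Hey editor, this is a video about the snow in Norway."),
    Clip("b.mov", "Welcome back. Today we look at the snow."),   # take 1, dropped
    Clip("c.mov", "Welcome back. Today, snow in Norway."),       # take 2, kept
])
assert plan["timeline"] == ["c.mov"]
```

In a real pipeline the transcript would come from speech-to-text on each clip, and the matching would need something fuzzier than comparing first sentences, but the low-level version is exactly the "good enough tech to start" the memo argues for.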
It's something I was really excited about before, I wanted to hire a video editor, but it just seemed like too much work. A video editor or a social media manager. But the tech is getting there more and more to where we can automate it. Okay, fuck. I wanna work on it, but today there's gonna be a rule, no working. I'm allowed to think, I'm allowed to record these voice memos. I'm not gonna do work outside of that. And now for consumption, I don't know if I should. I wanna watch content online, YouTube or Twitter, seeing what people are doing with leading AI workflows and OpenClaw to get inspired and to see how I can improve my setup and workflow. But I think today, honestly, reduce screen usage, so I think I'm not gonna do that today. So I guess I'll have to do that throughout the week, but I find it hard to balance what time am I gonna spend working and be focused versus what time am I gonna spend online learning, but it's also just kind of distracting. But I think I should probably do both every day, but just work, start of the day and end of the day, have some time for just exploration. I think that's a good balance, probably. Because things are moving so fast. I feel like if you just try and catch up once a week, it's not good enough. You could, it's fine, to be honest, but I kind of wanna do it every day. And I don't wanna be working too many hours every day. I would rather need to find a way to get more leverage and to get more done with my work to be more efficient and get to a higher level. Like my problem is that I'm throwing too many hours at this shit. I need to throw less hours at the actual, like interacting with the AI directly or staring at the code on the screen. I need to fucking level up. Always, always just keep zooming out and giving higher level instructions, creating more like systems instead of sitting there working directly. Okay. I'm really excited to work with it next week.
I've been wondering a lot this time if I'm wasting my time kind of building out this Jarvis system when a lot of components are already solved, or at least very well functioning, in OpenClaw and probably other systems that exist, but OpenClaw has been taking off. Like I'm trying to build my own memory system. Maybe I'm wasting time. Like it's an interesting problem to think about, but you know, there's more problems in the world and I know in OpenClaw it's already working super well. I should probably just use that. I think that's also a very big thing that I should do. Like start that very high priority next week. Like fuck this system that I'm building, parts of it are unique, like the parts of like getting the data synced. Like regardless of which AI agent system I'm using, it's just getting the pipeline for the data synced and it understanding what I want. That's unique. So I've done a lot of work on that and that's good. I need to utilize the best tools available. That's what it means. I need to use OpenClaw right now for more of the work, and I need to look at some leading people, how they're using it right now, and use it the same way for me. I think that's a very high priority next week. I don't know if I should set it up on a VM in the cloud, which is what I have right now, or a VM locally on my Mac or just on my Mac, not even on a VM. I've heard many times that that's a bad idea security wise, whatever, whatever, but also that is kind of how I get more power out of it though. Like, especially since I'm in the Apple ecosystem, and I think it can interact with all my Apple apps if I set it up like that and browse the web for me while being logged in and stuff, which is a big pain point for me right now.
In terms of automation, I kind of just want to do that, set it up just straight up on my Mac and then it's the big security issue, especially if you, and then I think the biggest prompt injection risk is through installing like any skill and kind of you want it to browse and install skills itself, and that's why it becomes like a risk. So I don't know how to think about this. I need to do a little bit more research about it and then just make a decision. But I mean, I am kind of willing to take risks though. I just don't know if it's too much, but I mean, I would rather be bleeding edge and automate a lot of stuff. But then also if I do that and also share it online, then I open myself up to more exposure for hacking. So I'm really not sure how it works. Yeah. But yeah, I just keep talking about this tech project. Can I talk about something else that's just something else in life? Like later I'm very obsessed with this one thing, which is cool, but also like let's have a balance. Let's do it. I need to be more social. I need to have more like people around me with energy that I love. Like it's not good for me living in this house. I need to be more honest with my family. I need to get an income stream so I can be independent, live where I want to. And also that I can finally show my family that the things I'm doing is actually in the right direction. Like right now I'm kind of seeming like a bum, even though I'm now getting my shit together, I'm not looking like a bum as much anymore through my behavior, but it's still like, I'm just a lot of time on the computer. There's a lot of things with like family dynamic that I'm not talking about right now because I'm just not that focused on it, but it's all the same issues that's always been here. It's weird because I'm

It was really interesting to catch up and so that was cool. That was just social hanging out, kind of more like I was doing in my childhood, which I missed having at home. Like some of these times where I'm just like hanging out and talking. So honestly, it was good. We just talk, talk about career life. But then after that, I didn't really have time. I was gonna have a second proper work session. I never found the time. I just went to the gym. And then I was like, home. And then the day kind of, you know, dinner and the day just kind of like evaporates after that, you know. Maybe I worked a little bit at home actually in the evening. I don't remember. And uh, yeah, it was good. I got the, just like the hanging out, feeling good. I was, especially my convo with Daniel, a lot more with Isaac, but especially with Daniel, I was so fucking excited. Like finally I could talk about AI and we relate a lot and we have fun together. I really, I do really crave that and I have too little of it, I think, in my life. So, it was nice, man. But overall, I also leave it with a feeling like, fuck, I didn't get, I should have at least gotten one, another proper deep work session that day. So I just think I need to structure my day better where I can at least have like two proper high-value deep work sessions. And then honestly, past that, I don't need more like two proper deep work sessions. That's good. You can do more for sure. But uh, I think in terms of living a balanced life, I don't think I should do more because I need to dedicate time to other things as well. And then I want like my training sometimes alone, but sometimes with friends and then great friends with great vibes. Also just hanging out, lunches, dinners, whatever. Traveling time also with great friends having a great time. Some other like social occasions, I don't know, every day, like definitely in the weekends, maybe also every day, like something in the afternoon that's more like social or fun. And then some time for doing like research or just like checking the news, AI tools, news, use cases and stuff that people are doing. So I can make sure I'm like trying to be bleeding edge on this stuff. That should be like a daily thing, afternoons. Also having some time just in complete peace and quiet. Well, it's gonna be like wind down time going to bed and also potentially mornings. Although mornings I just kind of want to start working instantly. So it's more like an evening thing, I think. Take some time to collect my thoughts, think, plan the next day. So when the next day starts, it can just start with a bang. I think that's the way I prefer. And uh, yeah. Okay. I've been yapping for a while. I think maybe I should start doing something IRL. Get some breakfast, I think. Um, I got really excited about the future right now actually.
My like heart is tingling a little bit. I'm just excited. It doesn't sound like it from my voice, but I'm trying to, I'm trying to keep calm, but I'm actually super fucking excited right now. It reminds me of when I was watching Natty Nat on YouTube, this League of Legends player. And he, he got rank one solo queue in Korea, which is a huge fucking achievement. But you know, he's not like social and extroverted. Like when you watch like the streamers like Kai, for example, you know, he's more of like a, it's like a gamer. I don't know whether to say introvert or extrovert because he's obviously like streaming and stuff. But at these times I find it very hard to, uh, understand and label people like that within myself, but like he's not like loud and entertaining like that. But so when he got like rank one, he's just like sitting there kind of in quiet, just like, yeah, nice, nice. And his friends behind him are much more excited, like, yeah, fuck yeah. And he's just like, yeah, awesome. Yep, yep. Kind of like halfway smiling, just sitting there. That's how I am right now. Not that I made a huge achievement like that, but I'm just excited. But I'm not looking at that much. Yeah, I know this. This is a nice point to leave off the recording. Let's have an amazing day.
e7b6e6b7d373622ebbb573c7174bd21decd235a3e22cca379039d053a9b56681_86810c6100b3.m4a
Saturday, February 14, 2026
10:02 AM · 12:23
Essence

Today's gym session was a somewhat unplanned, lighter workout focused on endurance and accessory strength, complicated by scheduling conflicts and uncertainty about optimal training methods, but ultimately felt like a refreshing change.

Summary

The speaker is heading home from a light, somewhat unplanned gym session. They initially struggled to decide on a workout, considering an easy recovery run or threshold intervals to build endurance, but ultimately opted for an easy run combined with accessory strength training. Their plan to do an hour on the treadmill was disrupted by a group session, forcing them to cut the run short and focus on the gym session instead. During the gym, they used lighter weights with more control, listening to a podcast instead of music, which felt like a good, less intense change of pace. They're unsure if the lighter weights were truly productive for muscle fatigue, but felt it was a good approach given their overall body fatigue, especially in their grip strength, which they note as a potential indicator of recovery. They also reflect on their improving acne, the dryness caused by medication, and their recent decision to restart creatine in capsule form for convenience, aiming to simplify the process even further.

View full transcript
All right. Heading home from the gym now. Did a, I would say, pretty light session today. It went a little bit off the rails compared to what my plan was. So today I was really not sure what kind of workout to do or to even work out at all, but I figured I should work out somewhere. I really didn't know what in terms of what is actually smart to do today, given my goals and everything. I figured this out on the go, right? So figured I had like a good week of training. Today's like a kind of an accessory day, but I figured I want to build my endurance, so I should do like some more running because I haven't, you know, done any running. I just did the Tuesday session that was like really hard, like high intensity. So I figured either I should do like a very easy run, like run an hour or something at an easy pace, which is like a recovery run kind of, or maybe even like threshold intervals where it's like hard intervals, but you don't push into max intensity, but just like threshold. So I don't know enough about this stuff to make educated decisions. I wasn't sure about those, but I figured I'd probably go with the easy run combined with like an accessory strength and bodybuilding session. So I was going to do an hour on the treadmill. I still don't know which pace is correct for like an easy run jog. I tried the treadmill like 10 kilometers an hour. I think it's actually a little bit too fast, too tough for me to be honest, for like what's supposed to be a very easy run. I think my heart rate goes too high, but I'm not sure what the actual reference values that I should look for are. And I've tried once doing this thing about eight kilometers an hour. I mean, that was definitely easy. I think that's good. Maybe I can do a little bit faster. I'm not sure. I was gonna try nine kilometers an hour today. But that was an error in my planning, which I wish, you know, my brain was smarter and had remembered it.
But potentially in the future, if I set up a better personal assistant system, then it can warn me when I'm about to make a mistake like this. So what's the mistake I made? Oh, fuck, I just remembered I made another mistake. Damn. I just remembered I was supposed to pick up a package where I went to work out today. And I forgot the package. And now I'm not gonna go back. Anyways. I went to Satserø and I figured I'll be there when it opens. That's cool. Strong way to start the Saturday. Opens at eight. And I was gonna do like the hour treadmill run first and then some bodybuilding type workout after. But kind of just like a fun workout, you know. Whatever accessory exercises I wanna do to build some more volume for the week. Some more chest, some more arms, bicep and triceps, some more rear delts, I figured. Like the chest is my main focus that I wanna grow. And then next is the arms. And so then I went to run on the treadmill, but I saw there was a lot of people there. I figured, fuck, maybe that was a group session. Checked the app. Saw there was a group session starting at 8:30. And it's like fully booked, so they used the whole room. They used all the treadmills. I believe. I'm not sure. There might be one or two that's not reserved for it all the way in the back. I didn't get it checked because all the ones in the back had people on them. So I couldn't do that run, so I just did like 10 minutes on the treadmill, like warm-up run. And then I just went on with the gym session instead. Gym session was good. I did lighter weights, more control. For the first time, I was actually listening to a podcast while training instead of music. I always have music. I didn't feel it needed to be so intense. I just wanted to do with control. So it was cool. It actually, like normally a podcast is way too boring and even music I struggle with that it's not like exciting enough to do the workout probably.
But today I was really just having the podcast and vibing, doing like, you know, single arm cable bicep curls, like super controlled, low weight. So it's not like so stressful for the system, but I'm trying to isolate the one part at a time. It was cool. I know since I did lighter weight as well, I don't know if the training was really productive or not, like if the response is good, if I'm really fatiguing it enough. Because the lighter weight, you can kind of do a lot of reps and after a while, I don't know if the muscle is really tired or you're kind of just building up this fatigue from just holding the weight for a long time. This, in Norwegian, I would say melkesyre. In English, I guess lactic acid, that's what it's called. So I don't know if that's an issue that it's like too light and therefore you do a bunch of reps and then you kind of stop because it gets tough, but you haven't really pushed the muscle in the way you could. I'm not sure. Honestly, I think it was good. I think it was a good way to train. There's a potential concern that it was just too easy and pointless, but again, I have no idea. Like overall, in the body before the session, even though I've been training consistently this week, going hard on the sessions, I feel fine. I don't really, didn't really feel fatigued anywhere, but I noticed it when I started training, actually doing like pushing or pulling exercises, especially like trying to do a pull-up. And I was like, okay, my grip is definitely not the same. That's probably the one place in the body I noticed it the most. It's just like in my forearms, in the grip. It just feels weaker. And it makes sense because I specifically trained back at the end of the pulling session earlier this week, but also I have heard somewhere, I think from the RP YouTube channel. I'm not sure. Maybe Jeff Nippard. I've heard that measuring your grip strength is actually a really good proxy for your overall fatigue or recovery level in your body.
I don't know by what mechanism that works or how like accurate and valuable of a metric that is. And it's kind of a throwaway comment or, you know, in my head, it's just something I've heard somewhere, but it was from one of these sources, which I really trust and which give like practical advice. So I don't know, but and then just the notes maybe. Yeah, so definitely felt during the workout, like it was not going easy. I could have gone for a party if I wanted to. But also I felt like it felt kind of right to not do that because the body felt kind of fatigued overall when I was working out. Where like still pushing more hard, I could do it if I was going to do like heavy compound exercises and stuff. I could do it, but it would feel kind of wrong at the same time. Like I'm not fully recovered from heavy compound earlier, so it would be weird to do it again where I thought it made more sense to potentially just do these lighter weight isolation exercises for some more stimulus. So overall, I'm uncertain and I didn't track it so closely, but it was a kind of nice different type of session. It's lovely nice because I'm also afraid of everything becoming like boring and repetitive. So in that sense, it was nice. It was nice. Other things I wanted to say or note that I kind of forgot about yesterday. Yeah, my acne has been getting better. It's still pretty good. I have a few pimples on my face, but overall very good. Skin is interesting, like the meds make me more dry, but the skin is less dry and flaky because most of that flakiness comes when I have a lot of pimples that come. It's like when the pimple disappears, I think it leaves some like extra skin flakes or something. And so now that I don't get the pimples popping up all the time or disappearing, it's more smooth overall, even though it's still tending toward dry. My lips are definitely dry and I've been avoiding using the skin moisturizer and the lip balm to kind of keep it natural and unaffected. 
But I think at this point I should definitely use the lip balm just like all the fucking time. Not as a solution, but just a little bit of a damping of the symptoms in this transitional period. And then I'm hoping over time to be able to stabilize with not being dependent on it. Yeah, creatine. Yeah, I don't know if I noted that in any of my journals previously. I started creatine again. I don't remember exactly. It was like two days ago or three days ago. So I'm taking it every day now. I think I'm taking three grams a day or five grams a day. I'm not really sure. I have the capsules now, like before when I was taking creatine before, I took the powder and I just found it like annoying having to open, pack and mix it with water and stuff too much. So I realized capsules is less work. Like even though of course it's a very small task, still like you do it every day, it just gets annoying. I want to make it as simple as possible. Now I have the capsules. Even still, it's kind of annoying in that it's in a box. I have to screw up the lid, close it. I need to get them in like a dispenser or something. Just make it super fucking easy. And then I think the calculation is I take like four capsules a day because that totals to three milligrams, four grams, I don't know, three grams I think or five grams. I'm not sure. I
6ea34a2986e31b2fa736cdaab466fa062c2d77aa736899d9046a245e6c8224a1_3fdb34ad58ad.m4a
Friday, February 13, 2026
9:43 PM · 2:19
Essence

Despite a good day and an early bedtime, the speaker is frustrated by an ongoing, difficult task of setting up an old printer for their mom, contrasting it with a day filled with work, food, and gym.

Summary

The speaker is heading to bed after a good day, pleased with the reasonable bedtime, though wishing it were a bit earlier. Their evening was marred by a frustrating attempt to set up an old printer for their mom, a task inherited from their late grandfather. This has proven to be a "fucking hell job," with none of the printers cooperating, requiring slow, tedious debugging steps that have yielded no success and caused significant frustration. Earlier in the day, they had their usual oats and coffee, worked at a cafe where they also bought a club sandwich and coffee, then met a friend at university for a substantial meal of tacos and a hamburger. Despite feeling full, they managed a gym session, experiencing a bit of gas. Before putting away their devices, they set up a system to create a new display on a canvas for the next day, intending to check it automatically in the morning.

View full transcript
I'm going to bed now. It's been a good day. I'm going to bed at a reasonable time, which I'm happy about, although I wish it was slightly earlier. After I put away all the devices for the day, I attempted this task for my mom of setting up a printer. It's, I don't want to go through all the context now, but it has these like old printers from our grandfather who passed away and we're trying to just set one of them up to work at home and it's just been a fucking hell job. Like none of them wanna work and the process is so slow of testing them out and they're like almost working and then I don't have to try all these debugging steps. And it's just fucking horrible. It takes so much time and it's still not working and it's just frustrating. Today, breakfast, same oats, coffee, worked at a cafe, then I bought some stuff there, coffee and like a club sandwich. I went to uni to meet my friend Daniel, got food there as well. Quite a lot of food actually, but yes, that was like taco Friday, so like a taco thing. And then also a hamburger after that. In the gym, I felt quite full, but it was fine. A little bit of gas. And then I've set the system to create a new display on the canvas for tomorrow. I did that before I put away the computer, so it's probably finished already, but I haven't looked at it because the point is for it to kind of just be automatic for me to check it tomorrow, so that's what I'm gonna do. Hopefully it's cool.
a68c019dde2376a5163d936b5bc44971e7941f8a6c0233a2b212ac4ee8346778_67777577da30.m4a
Friday, February 13, 2026
1:00 PM · 1:29
Essence

The user is frustrated with a music app's search and playback functionality, encountering issues with finding lyrics, playing songs, and unclear error messages.

Summary

The user is having a frustrating experience with a music app. They initially couldn't find lyrics for a song and were confused by the search results and ordering. After finding lyrics, the play button was grayed out and non-functional. This issue persisted across multiple songs, including an Imagine Dragons track, leading the user to suspect a bug with the app's playback functionality. They also expressed a desire for clearer error messages when the app fails to find a song.

View full transcript
Ah, no lyrics available for this song. That's a shame. Scroll up again. What the hell is this? Why can't I find the most popular one? Weird search criteria and ordering. Okay, what do we see? I press play. The lyrics came up, but you can't play. It has to play. Okay, it doesn't work. Why is the play button gray? The play button doesn't work on this one. It doesn't? No, I was just commenting on what I said. Sorry, I'm talking too much. No, it's fine. Alright, that one didn't work either. Now the app says it couldn't find the song for that one. Alright. But it doesn't even tell you what it searched for. That's why we should report it. Be like, sorry, I couldn't find the song. Right, then people would get something. Tried to find the song, but couldn't. Sorry. Alright, searching for an Imagine Dragons song. Pressing play. It's also gray. Still doesn't work. That was weird. Still doesn't work. Now I'm starting to wonder if it's a bug on all of them. Going back to Can you call my singing.
926c964979be32c918076f4399fe762afbf5970cfbe239624e20c840535b805a_a7aeb713e053.m4a
Friday, February 13, 2026
12:49 PM · 7:44
Essence

The user is deeply frustrated and confused by a music app's unintuitive and buggy interface, particularly its lyric synchronization and settings.

Summary

The user, Daniel, is giving feedback on a music app that has unexpectedly saved his previous state, which he finds problematic. He's trying to demonstrate a bug where a large, yellow warning banner takes up too much space. He then navigates to a private browser to start fresh, searching for a song. After finding the song, he notes that the audio source is shortened, which he finds annoying, and the button to dismiss it doesn't work initially. Once dismissed, he plays the song but immediately notices that the lyrics are not synchronized, making the app feel broken. He scrolls through the lyrics, looking for settings to fix the sync issue, but finds the interface confusing and cluttered. He encounters a settings screen with an "exit" button he desperately wants to press, overwhelmed by the chaotic UI which includes options like "view on YouTube" and "audio source with a link." He describes the UI as the "most chaotic" he's ever seen, with too many unnecessary elements. He then discovers a "Raw lyrics" section and a "Time sync studio" button, which he hopes will fix the problem, but it doesn't work when he taps it. After several attempts and navigating back and forth, he eventually gets the time sync studio to open, but finds its interface equally bewildering, with options like "segment" and "jump to next." He accidentally triggers music playback and editing functions, further adding to his confusion. He then encounters a "Type the first letter of each word" screen, which he identifies as a "writing practice" mode, and while it seems to work for writing, the "hint" button doesn't display any hints, leaving him utterly frustrated with the app's overall design and functionality.

View full transcript
Okay, Daniel is giving feedback on the app. Now it has saved, so that's a bit bad. This isn't good. It has saved? Yes, now it has saved. That's not good, or just not good because you were going to show me the error, you mean? It's not good, yes, because I was going to show you the error. So the plan was that the yellow warning would come up, and it took up a hell of a lot of space and was annoying. Then I go into a private browser, let's see. Yes. What, it hasn't saved? New guy is empty, search for a song to get started. Search any song, ute sted. Ute sted, aribergoda. Yes. So ute sted, it came up. There it is. That's good. There. Audio source is shortened expected. Yes. Okay, I tap it, but it doesn't go away. Really annoying. I tap it. There, now it's gone. Okay, and then I can start the song. Swiped down, it works. Can we hear it now? What's happening? What do you mean? Why don't you feel it? No, the song is playing, that works, but there's no sync on the lyrics. No sync on the lyrics? I'm looking at a wall of text. Yes, that's wrong. And then I think the app is broken. Yes, and then I kind of look at this, and I'm just like, okay, do I have to do this myself? Okay, fine, I can scroll. Go ahead and scroll. Fine, I pause. I'm looking around for some kind of settings. Because what I'm thinking now is, I have to find settings to figure out where the hell I am and what's going on. So I tap around on this one. Okay, what was this and what just happened? Can I play it? Still static. I press back. Okay, it's back. But this was better. Large and small text, prefer large. But no sync. Tapping there. No. Now we're back here. So you still feel it's broken and you're looking for the button to turn on sync, right? Yes. Now I tap this one, what did that do? It did nothing. Okay, it did nothing. Okay, it did nothing. Okay, now I'm on a settings screen. What the hell is even happening here?
Okay. I see exit. I just want to press exit, because I don't know what's happening here. Can we also take a screen recording at the same time? Because that is valuable to me. I get that you mean we have nine percent battery. Yeah, yeah, yeah. Don't stress. So I'm looking at the screen here now. What in the world is happening here? I mean, there's view on YouTube, audio source with a link, apply. This is the most chaotic UI I've seen in my entire life. There's nothing like it. I mean, there's so much unnecessary stuff here. And then I go in here. I mean, I just want to press exit. I don't want to be on this screen anymore. Sure, I can scroll. There's more here! There's more here! There was more here! Raw lyrics! What the hell? What am I supposed to do with this? What am I supposed to do with this? Good question. Am I supposed to sit and paste stuff? Yeah, bro. No! Time sync studio. Maybe that's what I'm looking for? No, that wasn't what I was looking for. Tapping time sync studio doesn't work. No, it doesn't work. He taps time sync studio, and it doesn't work. So that didn't work either. Yes, interesting. I tap YouTube. Really annoying. You never end up on the button you want. Yes. Now it's suddenly on YouTube. That was annoying. Now I have to press back. Exit. Yeah, super nice. Nothing works here. Okay. So here. There it's back again. That was really bad. Try going into settings and out again. Is it still broken? Now it was good. I wonder if it was something with the touch on the screen. That's what I was going to call it. Try time sync studio now. Now it works. Something was wrong somewhere. I don't know what it was. What the hell is this, man? Like, seriously. Bro. This, when I saw this, I got a headache. Like, seriously. What the hell is this, man? Time sync studio. Way too much stuff. Completely incomprehensible. I mean, this isn't human. This is AI. Segment. Jump to next. That's also a button, apparently.
Segment boundaries. It's sitting there editing for me. Whoa, now I managed to register something. Also. Now music is playing, apparently. I don't know why it's playing music. Okay, exit. Now I've managed to edit this thing. Okay. Now I tap the new button. Okay, what is this? Type the first letter of each word. So now we're into the writing practice mode. Because now it's become VGF, yes. So do you understand what this screen is? Or is it just confusing? What do you think? Think out loud. Now you're sitting and typing in writing practice. This works. Wrote the first line. Works fine. But hint. Hint. Pressing hint, but you don't see the hint. What's happening here? Don't understand what the hint button does. See no change. Whoa, what the hell is happening here? It looks different. Okay, so this is seriously annoying, man.
f7e358f0bf6ca1f85e51a7c7e842c96e492fb330eb46129bd884dd9640969c96_18cf88aafcb8.m4a
Thursday, February 12, 2026
11:17 PM · 40:05
Essence

The speaker reflects on their late-night work on an exciting, passion-driven project, acknowledging the need for balance and sustainability while envisioning an advanced AI system that automates data collection, task management, and personal organization to enhance their workflow and life planning.

Summary

The speaker recounts staying up late working on a project they're incredibly passionate about, recognizing it's a "good problem" but also a habit that needs balancing for sustainability. They spent an unexpected extra hour or two fixing inconsistencies in code, which was supposed to be bedtime. This led to setting a task for their AI system to analyze timing data and display it on a canvas. They describe the current state of their project as a functional Next.js app on a public URL, noting some issues they wanted to fix, but then realizing the AI system would have the commit and memory history for context. They detail improvements made to the system, including better automatic validation for app changes, aiming for a fully automated workflow where new versions are deployed directly to production for easy inspection across devices. The speaker expresses excitement about integrating more data, especially voice memos and journal entries, into the system. They acknowledge the challenge of extracting meaningful information from their verbose voice memos, despite LLMs' capabilities, and plan to iterate on extraction methods once the data is in. They envision a self-developing system where they can voice ideas and bugs into memos, and the AI will process them overnight, presenting prioritized tasks or insights on a canvas screen in the morning. This would automate daily planning, even suggesting packing lists based on their schedule, though they question the practical value of the latter. They also ponder the possibility of the system autonomously implementing tasks without their explicit instruction, once trust is established. The speaker reflects on the rapid, exponential improvement of their system, despite feeling it's slow at times, and considers the potential for context drift with self-documenting agents. 
They also weigh the benefits of building their own system versus utilizing existing platforms like OpenCLAW, noting their preference for the control and understanding of their own setup, but acknowledging OpenCLAW's capabilities. Finally, they list desired automatic data sources โ€“ voice memos, screen time, and location data (though location seems challenging) โ€“ and health data like workouts and sleep, to paint a comprehensive picture for life planning and coaching. They dream of a highly async, multimodal AI interface that communicates quickly and manages parallel tasks, ultimately aiming for a system that can proactively organize their thoughts and present actionable plans.

View full transcript
Some end of the day notes. I ended up now staying up too late, working too much again. Which is funny because it's a problem, but also, for being a problem and a bad habit, it's one of the best ones because I'm literally working on a project that I'm so fucking excited about. And it's actual work, and it's actually a passion that I'm pursuing so much that I just want to continue doing it instead of like going to bed and stuff. And so in a sense, it's super cool. But I do also gotta be able to incorporate balance in my life because the whole point is that it needs to be sustainable. It's not cool to do all on one day if it fucks up another day. But anyways, not a big issue now, but a little bit too much in the nighttime. Like everything in the day was good, just now at night, I was gonna do something quick, and then I just, I saw there was some inconsistencies, something I wanted to like fix up and stuff, and it just ended up kind of expanding and expanding, and so I ended up sitting like, I don't know, an hour, hour and a half or two hours or something in code, just fixing this stuff now at night. That was bad. That was like when I was supposed to go to bed, and then I did all this extra stuff. Anyways, I set it to a task right now, like analyze my timing data from today and put it on the canvas where we set up the canvas, which is sick. I don't have a screen yet in my room, it's currently just like a Next app, just like a, you know, a public like random URL domain where the website is. So it's awesome. It fucking works. And now, and it put it there, but there were some issues in like, you know, the page, which I was just about to specify, but then I realized there's no point because the system that will be reading this voice memo in the future will also have the commit history and the memory history, so the context is already there if it were to investigate. So, yeah, I shouldn't waste my words on it.
I guess a lot of what I'm saying could be inferred, but I can just say whatever I want to say then, I guess. Yeah, there were some issues, so then I wanted to fix up that. Some issues in the actual site, in the layout, and I mean, I wanted it to format the data in a better, more cool way, do cool analysis, and then there were problems in the actual workflow in the way, like, it's supposed to do everything in one step, but like the validation, like you thought it validated it, but then the actual production thing didn't work, so the validation wasn't good enough, so we set up better automatic validation. So hopefully from now on, anytime I ask you to make an app or make a change in an app, it will do like the full workflow fully automatically and verify it so that like I just come back and it's just live on the public URL for me to inspect. Or just say, like, you can do branches and preview and deployments on local dev and stuff if you want, but since these things are just private projects anyways, and development is happening constantly, it's just easier to just all the time put the new version just live on the production URL, and then I just have one place to go and look every time I can check it from all my devices, and I just see if it's correct, it's correct, I just leave it, or if it isn't correct, I can just send another message to like bug fix it, and then it's fine that it's incorrect on the production side a little bit because then it's fixed later, so. It just makes it easy. It's in a sense slower, but since the work can be done autonomously while I'm not doing it, the actual time I spend on it is less. And that's cool. I'm really excited to get more data into the system and to get better interaction patterns with it. So first of all, all these voice memos that I'm recording, and like the journal or the voice memos that are kind of the same thing, but they're kind of separated, to get this into the system in an automatic way. 
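The deploy-straight-to-production loop described here could be sketched roughly as below. This is a hedged sketch, not the actual setup from the memo: it assumes the canvas site deploys via the Vercel CLI and exposes a hypothetical JSON health endpoint; all names are illustrative.

```python
"""Rough sketch of the "push to prod, then verify" loop from the memo.

Assumptions (not from the memo): the canvas site deploys with the Vercel
CLI and exposes a JSON health endpoint. All names are illustrative.
"""
import json
import subprocess
import urllib.request


def health_url(base_url: str, path: str = "/api/health") -> str:
    """URL the post-deploy check will hit (path is an assumed endpoint)."""
    return base_url.rstrip("/") + path


def looks_healthy(status: int, body: bytes) -> bool:
    """Minimal validation: HTTP 200 plus an explicit {"ok": true} body."""
    if status != 200:
        return False
    try:
        return json.loads(body).get("ok") is True
    except (ValueError, AttributeError):
        return False


def deploy_and_verify(production_url: str) -> bool:
    """Deploy straight to production, then confirm the site actually serves."""
    subprocess.run(["vercel", "deploy", "--prod", "--yes"], check=True)
    with urllib.request.urlopen(health_url(production_url)) as resp:
        return looks_healthy(resp.status, resp.read())
```

An agent that runs something like `deploy_and_verify` after every change gives the "I just check the one production URL" workflow the memo wants, without a preview-branch step.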
When I do that, it's gonna be so sick. And well, first I gotta get it in, and then I gotta get it to be able to, well, first like transcribe them, but that's not so hard, but then to actually like meaningfully extract the information from them because I do yap a lot. And although LLMs are great at processing lots of textual data, and have large context windows, I noticed like still, when I do all the yapping, and I've tried to pass it to models, which I think would catch everything, I've just noticed they miss tons of like information that I thought they would extract from it. But I guess it depends on the model that's used on the method and the prompt and stuff. So it's definitely something to investigate, but I know in the right setup, I can get all the information out. Just at least once I get the data in there, because then we can iterate on it and try to extract information. And then the system already kind of starts becoming self-developing because I no longer really need to chat with it that much because I can just say my ideas on these voice memos whenever I think about it. At nighttime like now or in the daytime, whenever, just say like, oh, this feature would be cool, or this feature would be cool, or this is what we should do next, or this is something that's bugged right now. Like this I want to change or whatever, or this is just what I'm thinking about and I want help brainstorming. And then whenever I arrive back to the AI chat, I'll just ask it like, okay, what should we work on next? I'll just, I won't even say anything. I'll just ask what should we work on next? And it'll go through this data or it'll have gone through it already and processed it. And it'll just say like, oh, based on, you know, your thought dumps from yesterday or the last few days, here's what's really been on your mind that you probably want to work on right now. And then I can just kind of like pick one. 
And then it already has the context and then you can ask follow-up questions or I can add more context if I have it. Really, it can just use that, whatever it has, and ask questions and then, yeah, it's already starting to get pretty automatic. And from that point, we might even just set it to just do these things without even like waiting for me to hop in the loop to ask it. But that's another like thing you have to verify first if you really feel comfortable with that and trust the system, but like at that point, I could technically do it. Just set it like once a day, for example, look through the things, pick one of the pain points or insights or new features that I want and just like try to implement it and see how it goes. And you know, set some limitation maybe. Yeah, so it's like, it feels like it's very close and like we just started, but it's going. Well, I feel like in a sense, it's going so slow, but also I feel like it's going so fast at the same time because the improvement is so exponential. Like you just improve building some of the basic blocks and it just becomes like stronger and stronger. Like recursive improvement. So, I mean, it seems like that right now, but we'll have to see if the context, if it starts becoming too much context and there's too much like drift and maybe everything kind of breaks apart. Because already I'm noticing it's kind of trying to like self-document with these agents.md files everywhere. But I think they're generally like too bloated. Like when you ask an LLM to make that agents.md file, it uses, you know, this pattern generation thing that it's trained to do and it just, it generates too much generally in those files. Like it's, some of the things it writes are like unnecessary, like two lines are redundant or it'll write the same thing into different files at different places, making redundancy and also opening, increasing the potential for context drift. 
But also as a human, I don't really wanna read through all this and actually think about it. So really, you still want the system to do it, but you need to set up like harder reasoning about these things. And you kind of just wanna throw a smarter model at it, but then it's not always the solution to just throw a smarter model at everything. You can be smart about what can be done with a smarter model and what can be done with a less smart model and where you wanna put like extra reasoning efforts and how much context you need to pass into that. It's all kind of context engineering. Then I'm wondering if for now I'm kind of building my own system, but like ClaudeBot or OpenCLAW already exists and it's huge and people are using it and I'm not really using it right now. I got it. Now I set up like now the Telegram chat is working perfectly. So now I don't have the API key issues anymore. So I gotta start using it more. And essentially all this system I'm trying to set up here, maybe I should just set it up in OpenCLAW instead. I'm not sure because it's, well, I have slightly less trust in it than what I'm setting up on my computer because I just know more what's in it, but overall I'm not that skeptical overall. But in a sense OpenCLAW maybe feels just like kind of complex because there's so many parts. But then again, I don't need to worry about all the parts. I can just chat with it in the main chat. So, and it is kind of the same thing that I'm building in many senses. Like it has the memory system,

I really wanna have screen time automatic, voice memos automatic, other notes automatic, although not as important. I wanna have location data automatic, but I think difficult, maybe impossible. I'm not sure, I gotta look into that one more time, actually. I discussed that with AI like two days ago. I don't even remember the conclusion. Damn, that's... I'll see what's here, actually, I don't remember the conclusion. I think it was that it's not really possible, but I don't wanna accept that. Yeah, then I was wondering like if I should make my own app, but is that gonna work? Maybe just as a developer app, could I do that? And not an App Store one. I think that convo maybe it got left a little bit unfinished. I don't remember if I did it with ChatGPT or with OpenCLaw. Maybe I did it with OpenCLaw and then I ran into like the error message and so the convo kind of stopped abruptly. I think maybe that's what happened. No, I'm not sure. It's a typical thing I would talk with ChatGPT about, not with OpenCLaw. I'm not sure, man. Get all the automatic data I want. Let's see if I get location, screen time, and voice recordings. I think that's maybe the three biggest ones that would be available. Unfortunately, location is maybe the tough one. Everything else I know I can set up, I think. They vary in complexity, but I think everything else seems very like feasible. But the Google Maps location seemed very difficult from what I remember. But like, I really want it, so I'm gonna look more into it. So of the main ones, voice recording is the main one, and it's also like super easy actually to get in. And then the processing is another beast, but like getting it in is actually super easy. Or, like, the actual analysis and extracting useful information and organizing, that's a different beast, but... Actually getting it automatically synced, I think is super easy. It has to go through my Mac, though. 
So my Mac is not always online, but I think I can set like a time, for example, at night, it's always at home. It can run as a background job, like at some point through the afternoon or maybe also at nighttime. I don't know. I think it's gonna be feasible. I get recordings, location, screen time, or the timing data. Those are really cool because they start to paint a really big picture of like the main things I'm doing. And then yes, like workouts or overall health data. Yes, I say all health data, but really the main stuff is like workouts, sleep data, general movements, like I guess steps. And then some metrics on like my recovery status, but that's a little bit difficult. That's like the main ones, I think, but then ideally just like all health data. But those are like the main ones that would actually be available because then I can start painting a really cool picture. And then with that, you know, work out from Apple Health, but then also from, yeah, Heavy and Strava. Strava is not as important. Yeah, so now it's starting to become a little bit much, though. Like, for... See, I want all of this, man. But I kind of need all of these to be able to do, like, actual proper analysis and to really do like life planning and stuff and coaching. So I guess I just need all of those and then potentially more, I don't know. Yes, now I think I should like list it out to help me think. I find it hard to think about right now. And that's the thing now. If I just had this set up already, this is like a future UX that doesn't exist yet, but I just wish it did. I made this voice memo, right? I'm just like trying to think out loud, but I'm not able to organize my thoughts. And then imagine if I could just wake up tomorrow and the canvas screen in my room, it just shows like from the Jarvis system, it's like, good morning, Henrik. 
Here's an organized list of how you should prioritize working today for, like, getting the data sources into the system based on, like, my thoughts, or it has me reorganize it or come with some suggestions or something, right? Based on what makes sense. And I could just wake up to that because this, my voice memo could sync automatically and then it could process it during the night and update the canvas, or process it, combine it with a larger analysis, update the canvas before I wake up. That would be so sick. And then based on my usage patterns, it can understand the way I'm kind of going to the city to get out of the house, working somewhere at a cafe, going to the gym somewhere, it depends on... know like which workout I'm going to do, and then it can also make like a packing list for my backpack, although I don't really need that, but also kind of, I always kind of wish I had it, although I don't need it. Just like an extra reminder to make sure I packed everything, but I don't think it would be useful. I think I just wish I had it, but I don't think it would actually provide value, because I don't know, like, at today's... so today I forgot my towel and deodorant, and I felt so fucking stupid. Honestly, even if I had a packing list on the wall, I don't think it would have helped because I need to actually check it and verify. And if I just verify the backpack properly, I would have probably noticed, so I don't know. And then, yeah, so I don't know what the data sources. I guess voice memos is the main thing I want to get in. But really all of those I said. And then another big thing is not a data source, but like a big thing about the system. I really want to get it implemented in a way where it's more async. I wanna have a top level chat that's multimodal, it's textual chat, or I drop... I do voice transcripts, or it's just like a voice-to-voice chat. Or it's like I do voice, but I get text back, or I do voice, but I get voice back and text back, or I do... 
I just talk to it, but I get back voice, it talks to me, and it can show me text on the chat, and it can show me stuff live on the screen or on the canvas, if it can do it quick enough on the canvas. If not, just directly in the app somehow else. Just communicate with me as quickly as possible. And that's like the main AI that you're interacting with, but then under the hood, you know, it can issue tasks in parallel, other sub-agents, whatever, because the problem is that you wanna do tasks in parallel, but then you have to keep context of everything. And really, you want just a system to also keep the context of everything for you, but it's just, it's not really a system for you yet. And so when I have maybe these tasks running in parallel in Codex or whatever, I wish there was like a different AI or agent sitting on top of all of that, managing all of that, which is just talking to me more in real time like a human or writing in text or showing visuals or all of the above. Whatever makes sense and whatever, it depends on how focused I am, if I'm just staring at the screen or if I'm doing things like packing my bag or something in parallel. But let's say I'm staring at the screen. It just talks with me, it says what's happening in the different branches, and then whatever I'm trying to learn about and make decisions about, regardless of which branch it is, I just talk with it, and then we kind of steer all of the parallel work at the same time. And you wanna have one that's like always available, never blocked, and then all of the work actually happening on kind of like sub-threads, but they're always interruptible or steerable. But also sometimes I just have like, I want it to work, but also I just have a question to help me think at the same time. And then I can maybe think with like the top level one, like a conversation while the other work is happening underneath. 
And then that conversation might lead me to a conclusion to like steer one of the work sub-threads in a different direction. Yeah, I just have this like one that's like real-time, always with you, always available. Like either it's dormant or listening or it's directly talking to you right now. So you're kind of, it's much like a human interaction. Like you're always there just like engaged in each other, but at the same time it's also just under the hood orchestrating all this other stuff without like blocking it from live interaction, just being present with you. That would be so fucking cool for productivity. Because then the system matches like, then the limitation is just me as a human, you know, my processing speed. And that's cool, I guess, not for the system, then it would be more optimized by putting me out of the loop. But it's optimized for me, I guess, the use case at least where I'm using it. I get the ultimate experience of where like the system, you know, it's so fucking capable and it's just, we're always at the edge of how, just like how much I can understand, how fast I can process things, how much I can think, how fast I can express myself, my ideas, how fast I can consume information. And I'm just at kind of like peak human creation mode in a sense. Or peak human performance, I guess, in this aspect. Yeah, that's cool. Anyways, yeah, again, you see, I'm completely forgetting about everything else in real life. I need to get a proper training plan. Yeah, I think I'll

But it's also one of those things that, you know, in the future, it's more valuable the earlier you implemented it, and you can never go back and implement it earlier. So it's a thing that, you know, like grows in value the longer you have it, and you can never go back and get it implemented earlier, so you need to do it as early as possible. So I guess I should do that tomorrow then. Then I need to think about DB. I need to connect it to the system. That's kind of good because I know the system is gonna need a database anyways. But I just, it's been nice to not worry about that yet because I don't know really how to do it. It's always been kind of a mess, I feel like, when I've tried to do it before, but I think it should be very, I mean, I'm told it should be very easy now. You use like a skill or an MCP thing, yeah, it should be able to handle everything. I just haven't had a smooth workflow with it yet, so I'm just like a little bit afraid, but I mean, it should be good. It should be easy. It should be smooth. So I guess I have to do it. I feel like I'm so behind on OpenClaw. I haven't even tried it properly. I really need to do that. Also on Claude Code. Obviously, I'm using Codex now and trying to get a lot of value out of it. And I watched some, like Claude Code is kind of the industry standard for the most hardcore agent devs, which is what I'm trying to be. And I watched YouTube videos on like the advanced use cases of it to be like max productive and leading edge. And I've kind of seen it and I feel like I understand it, but I need to actually do it at least once to know that I kind of can do it. I've just seen it on YouTube. It's not good enough. I don't know if I can actually do it. I need to like open Claude Code, try an example with some of that, see that I can actually do it to be like, okay, to kind of unlock it in my mind and be like, okay, I know how to do this to feel like I'm keeping up. 
Fuck yeah, I really want to also be able to automate computer use or at least browser use. Like today I was gonna set up the Cloudflare storage bucket. Overall, it didn't take that long, but it was so fucking annoying because I'm getting so used to the AI agents being able to use so much autonomously through CLI usage and skills and MCPs. And then suddenly I'm like, okay, I want that Cloudflare storage bucket. Oh, now I have to go to the, open a browser, which I feel like the browser is so slow now, which I think my Mac is slowing down a little bit, but also I'm just used to stuff going so fast in text-based format and in the CLI. So like relatively now, normal GUI experiences just feel slow when they're not actually slower. I've just become more impatient. I just want things to be fast and snappy, which is just, you know, how technological development is. We always want things faster. Anyways, I had to, myself, open the browser, go to the Cloudflare website, create an account, log in with Google or I log in with GitHub, then I go and create a bucket, write a name, then I just like navigate it, go and click create API key, find the correct credentials, and the dashboard is super fucking complicated, right? Of course, if you get used to it, it's not that complicated. I guess you learn to use it, but for someone that's never been in there before, there's a lot of text on screen, a lot of buttons. I don't know anything. I have to search a lot to find the right buttons. Way too much. And then I'm asking Codex, okay, exactly what am I looking for? Which buttons should I press, blah, blah, blah. And in a sense, it's a simple process if you just take your time, but I want it to get done quick, quick, quick, and it's like impossible to do it super quick because there's just too many buttons on the dashboard. And this thing could so easily be automated in the browser by an AI agent if it just had access to my browser and it doesn't get stopped by bot protection. 
And that's like the limitation that they have bot protection, of course, but I mean, I haven't tried it myself, but I hear about and see examples online of people just doing exactly that. And I think I kind of need to do it in my own browser and not in like a cloud browser or something. So, I mean, I do have both the OpenAI and the Perplexity agent browsers installed. I haven't used them in a long time because they weren't really useful, but this is exactly the thing that it would be useful for. Or ideally, I would want when I'm talking with the Jarvis system really for it to be able to take control of the browser on my computer. Maybe I could install that as a skill, actually. Oh yeah, I should probably do that. Yes, that's cool. Okay, because I was thinking I've seen like in the Claude app, they have the thing where you can control the browser, but then I would have to copy the prompt, paste it over there, and now all the context is in the Jarvis system. That's the whole point that all the context is in the Jarvis system and that's what we're developing. And I mean, it's gonna be cloud-based in the future, but I guess for now, as it's local on my machine. Yeah, I think I can actually just do that. I'm realizing now. I think there's just like a skill that you install or an MCP thing. I don't know, maybe both. That lets you control your, lets it control your local browser. So you just go there, click, or maybe I just do that step, like the login with GitHub or Google, just to bypass bot protection. And then once I'm in the dashboard, then I could just tell it to do everything. Create the bucket, create the API key, find the correct keys, copy it over. But then I guess maybe there's a problem with like the keys kind of being exposed in a sense. And I was gonna say this, I actually realized the thing told me that the keys got exposed anyways from the Cloudflare thing, and I was supposed to re-roll them. 
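For what it's worth, the bucket-creation step described above does have a CLI path: Wrangler, Cloudflare's CLI, can create R2 buckets, so an agent could script at least that part instead of driving the dashboard. A minimal sketch; the bucket name and the wrapper are illustrative, and API tokens still come from the dashboard, which matches the pain point here.

```python
"""Sketch: scripting the R2 bucket setup instead of clicking the dashboard.

`wrangler r2 bucket create` is a real Wrangler subcommand; the bucket name
and this wrapper are illustrative. Requires a prior `wrangler login`.
"""
import subprocess


def r2_create_cmd(bucket: str) -> list[str]:
    """Command line that creates an R2 bucket via Wrangler."""
    return ["npx", "wrangler", "r2", "bucket", "create", bucket]


def create_r2_bucket(bucket: str) -> None:
    """Run the creation command; raises CalledProcessError on failure."""
    subprocess.run(r2_create_cmd(bucket), check=True)
```

With credentials in place, an agent calling `create_r2_bucket("voice-memos")` replaces the hunt through the dashboard buttons; only the login and token steps stay manual.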
And I said, okay, please remind me of that later. And then now I came back later and I asked it, what was the story you were supposed to remind me about? And it didn't mention those things. I just remembered it now. So that's still on the to-do list, I guess. And I guess that's a problem with having it automate that thing, which is like the main thing that's gonna have to get done manually. Then maybe I could just like not care about it that much. Like, is there really a big risk? Not really. I think the keys are gonna be exposed to what, like to the agent which is running just through OpenAI. Like, they're not gonna steal my API keys. And they probably, like, remove that from the training data anyways, or randomize it or something. So in a sense, it doesn't matter. I guess it would maybe potentially only matter if I started using a different AI model, Chinese model or something, maybe there's a risk. I don't know, or if I, it probably isn't though. Or if I install random skills and they're not actually security vetted. And that might start going through the context and find those and leak them, maybe. I don't know, I don't know enough about this. Yeah, I'm kind of done yapping now, yapped for a long time, just thought dumping. Food today, yeah. Breakfast, regular oats, lunch got kind of late. I wish maybe I don't know because the gym workout just ended up being so long because of the phone call. And I ate after. So fucking expensive, man. I bought like a protein shake and a club sandwich. I mean, I have pictures of that. And then dinner, we had sushi at home and I supplemented with a protein shake. I made a casein protein shake this time. 
And then that reminds me, there were one of the dinners yesterday, like one day ago or two days ago, I think maybe yesterday, where the images, if I'm gonna set up the system to read the images to try to understand my diet, or as one of the data sources that's used for a comprehensive understanding of my diet, which I am gonna set up at some point. Those pictures were a little bit hard to understand because I think like some pictures, I think are just all the me, all the things that were put there for my mom because it was a lot of different things. And then I have like a separate picture of what was on my plate. It might be a little bit hard to determine how much I actually ate versus what was just kind of there. I took two plates overall, I remember. And I know it's too late. I don't remember anymore. And the details don't matter that much anyways. Okay, I'm hoping I can get some more cool automation set up for the system tomorrow. And also that I can keep myself in check and not work too much on it and stay a little bit more in reality. Yeah.
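The Mac-side background sync mentioned earlier in this memo (pull new recordings in on a schedule, worry about processing later) could start as a plain copy step. A sketch under stated assumptions: recordings land as `.m4a` files in a single folder (the real Voice Memos location varies by macOS version), and the system has a flat inbox folder; the launchd or cron scheduling is left out.

```python
"""Sketch of the nightly voice-memo sync into the system's inbox.

Assumed layout (not from the memo): one source folder of .m4a files and
one flat inbox folder. Run from launchd/cron at night when the Mac is home.
"""
from pathlib import Path
import shutil


def unsynced(source: Path, inbox: Path) -> list[Path]:
    """Recordings in source that the inbox hasn't seen yet, by filename."""
    seen = {p.name for p in inbox.glob("*.m4a")}
    return sorted(p for p in source.glob("*.m4a") if p.name not in seen)


def sync(source: Path, inbox: Path) -> int:
    """Copy unseen recordings into the inbox; return how many were copied."""
    inbox.mkdir(parents=True, exist_ok=True)
    new = unsynced(source, inbox)
    for rec in new:
        shutil.copy2(rec, inbox / rec.name)  # copy2 keeps timestamps
    return len(new)
```

Because `sync` is idempotent (re-running it copies nothing new), a nightly job can fire whenever the Mac happens to be online without duplicating recordings in the inbox.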
463a535a79374d0626b1f003d55ec030ed74768ce6104876f2881b950b3d0d19_6cf6fac3ef33.m4a
Thursday, February 12, 2026
10:01 AM ยท 1:10
Essence

The speaker is struggling with a technical issue during a call while trying to transition to the gym, and is also reflecting on the meaning of a song.

Summary

The speaker is having trouble with a Facetime call, trying to switch it to their mobile phone so they can head to the gym. They're frustrated because they can't seem to transfer the call back to their phone. In the midst of this, they also briefly note that a song they're listening to is clearly about a restaurant.

View full transcript
Mm. Yeah, so no, like, you're missing it, the song is obviously about an eating place. Mm, okay, cool. Mm. Yeah, just so you know, I've managed to switch the FaceTime over to my phone so I can start walking toward the gym, but just keep going. Now, now I can't manage to switch the FaceTime back to my phone, you know. Yeah, bro, I'll hang up, and then I'll call you back from my phone.
50d7ad6675bff97696fd6ffad12b4dada91a6bc38cee5844a60141f3c2fc2b0a_47d851f43cc2.m4a
Thursday, February 12, 2026
9:54 AM ยท 6:34
Essence

The speaker is guiding someone through testing a new feature for highlighting and explaining song lyrics, focusing on Genius highlights and an AI-powered explanation tool, while also troubleshooting issues and gathering feedback.

Summary

The speaker is explaining how to use a new feature that allows users to highlight text, specifically song lyrics, and then create a "segment" using a bookmark icon. They clarify that a yellow highlight indicates a Genius highlight, and if nothing happens when clicked, it might be confusing. The speaker encourages the listener to test this feature on various songs, regardless of language or popularity, to see if it works as intended. They then guide the listener on how to search for songs, suggesting the most intuitive method is to simply click on a song, though a smart search field is also available. The conversation shifts to another feature: an AI-powered explanation tool for any marked text, which aims to provide explanations without needing external searches. The speaker admits this AI feature might need more "prompt engineering" to be truly useful, especially with metaphors or niche slang. They encourage the listener to test this by selecting a difficult or nuanced part of a song to see if the AI can accurately explain it, noting that their own experience suggests it struggles with metaphors and references. The listener expresses dissatisfaction with an explanation for the word "kjekk," leading to a discussion about the word's usage and the song's age.

View full transcript
Yes. Yeah, but it's maybe not so intuitive how you do it. What you have to do is select some text first, the way you select text on any website. So you drag across part of the lyrics, for example. And once you've done that, a button lights up to create it as a segment. It's a bookmark icon. Yes. And then that button turns green. And if you click it once more, you navigate into the segment as well. What do you think? What's confusing you right now? Let's hear it. Those are Genius highlights. Yeah, okay. But the yellow one got confusing, because you tapped on it here and nothing happened, so you wondered what it did. Yes. Have you tried tapping on the yellow bits? Test. Test anything at all. I mean, I want to have it, but whether it actually works is hard to say. So test whatever song you can think of, whether it's a popular English song, or a niche Norwegian song, or a Russian song, or whatever. Just test whatever you'd be interested in, really. To search. You're on the phone. Yeah, okay. Yeah, you can tap on the song, or, it's a bit hidden, but if you hit Command-K, you get a smart search field where you can run all the commands. But just tap on the song, really. That's the most intuitive for you. There, you got the song search screen. Strange song, man. Yeah, but I've already tested that one. But check whether there are any Genius highlights on it, I'm not sure about that. Okay, and then the other thing you can test. Because the point is that a lot of songs don't have Genius highlights, but we still want a way in, we should really be able to get things explained if we don't know what they are, without having to go to Google or ChatGPT. So I have another function where you can select any part of the text and get it explained. But it uses AI. I think I need to do a fair bit more prompt engineering to make it useful. 
But so pick a part, a word or a section of the text that you either want explained or just want to test it on. Test it. Which one did you mark? No, not into "beste verden". Yeah, that's facts. But on this one, here we're translating Norwegian into English, and you don't need that. You're Norwegian, after all. You'd rather have us explain the slang, explain "kin", maybe. No, so maybe I should create a persona. That was cool. Try to find a harder part of the song. Some more niche slang, or somewhere with some weird metaphors or something, and see if it actually understands. Because my experience is that it's not that good, really, especially with metaphors, those often don't get picked up, or references. "Snakke i bygget med absolutt alt. Sleng med ikke 21, godt stekt opp før du dabbak. Snakke i virkelige norske smelte. Jeg vil prøve 10 spen cheese når jeg passer chill. Eller er du kjekk så tar du den custom-made." That's a bit. What's it missing? So you were dissatisfied? Not a cool explanation. Yeah. What does it say about "kjekk"? Don't think so. Oh, nice. I feel like "kjekk" was something people maybe started saying a few years ago, but this song is how many years old? Ten years old or something? They did say "kjekk" back then. "Hør i plis. Vær så snill og ring."
a18004dd0f72b04d4d217b95f02767d2decb54cb86fa04f6936a030b576bce34_6094eb6166d3.m4a
Wednesday, February 11, 2026
10:29 PM ยท 18:33
Essence

The speaker reflects on their day, focusing on workout planning, the ongoing development of a personal data system, and the challenge of balancing focused work with other life tasks.

Summary

The speaker reviews their day, noting a failure to plan workouts beyond tomorrow's pull session and contemplating incorporating easy runs into their training, potentially through two-a-day sessions. A significant portion of the reflection centers on the early stages of building a personal data system, which involves integrating various data sources like Apple Health, screen time data from the Timing app (which uniquely combines iPhone and Mac data and stores it indefinitely), and workout data from Strava and Heavy. They express excitement about the potential for this system to automate tasks and allow for creative integrations, like AI-generated images for Strava workouts. The speaker also considers how the system could provide daily analysis of goals, time expenditure, and financial habits, lamenting a recent expensive protein shake purchase. Finally, they discuss optimizing work blocks, suggesting 60 or 90-minute sessions, and establishing a rule to dedicate intense system-building work to time spent outside the home, reserving evenings for smaller tasks and personal life to avoid over-focusing and losing track of time.

View full transcript
A little day review and thought dump before I go to bed. I'll also journal a little bit. I haven't written today, so I'm not going to reiterate anything I wrote there because the systems should capture everything and understand the different sources and how it ties together. I failed to plan my workouts for the rest of the week. That was one of the tasks I wanted to do today. I didn't. I've just planned for tomorrow. I'm going to do a pull workout. And then Friday I'll probably do either legs or a running thing, either like an easy run or an interval run. I'm wondering if I should do tomorrow like both pull and an easy run, and if so, maybe not in the same session, just spread out like I start the day and end of the day, but it starts being like too much since I don't live close to the gym. I would need to like pack two outfits and stuff. Summertime it would be much easier to do that. I could like go to gym and just run at home. I think maybe to optimize my training, that is like something you can do is start to do like two sessions a day where one is like a cardio thing, but it's like not intensive. It's like very easy run. They say you just need to like collect kilometers. I haven't looked into this, but I think it's a concept that a lot of people do, and it doesn't really fatigue you. It just builds that base endurance a little bit. But I mean, I haven't been doing easy runs like this on top of my normal training like ever or for a long time or never like seriously and consistently. But I think maybe that's something I should integrate with if I really want to be serious about my training. I'm not sure. I want to do research about that. Big thing that was on my mind today, which I didn't get to work on. Well, wait first, I guess I get to say that I have now actually like started really trying to make this system, which is cool. 
And it's of course in very early form and I'm kind of doing it while I'm trying to get this OpenClaw to work, but I haven't really gotten any value out of it yet. I have some like API key issues. I actually don't know what it is. I spent very little time on it today. But separately, just locally, I've been using the Codex app to start to build a project just locally on my computer, which is going to be the system. Maybe, I don't know. I'm just trying to at least build parts of it. And then we'll see. Regardless of whether that becomes the system or not, it's still like valuable. It's still like part of something that's eventually going to become the system. It's still connecting some of the data, identifying it, organizing it correctly. So that's cool. And you know, I don't really need to mention the details of that either because it's in the git commits and in the files and just also in my Codex chat history. Although I don't know if I'll ever set up a system that also has access to that because I mean, ideally, I want my system also to have access to all my chat histories with different AIs, which is like the biggest one is the ChatGPT app. I have a massive chat history there, somewhat with the Gemini app, somewhat with the Grok app, somewhat with the Claude app. That's like minimal. But I mean, ideally, I want to get all these in there as well. I don't know if it's possible, but I'm guessing it probably is because I think there's like a legal requirement that they need to let you export everything. But I haven't looked into it. But if so, that's also something I want to do pretty soon. So I got some data sources in the system today, which is cool. And I know the next ones I want to get my Apple Health data. I've now exported it from Apple Health. I have a file in my iCloud drive, so I can add it to the system tomorrow, or at least to the inbox. And then I need to figure out how to actually process it and stuff. But I can at least like put it there.
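For reference, the Apple Health export mentioned here is, in the standard format, a zip containing apple_health_export/export.xml, where each <Record> element carries type, value, startDate, and endDate attributes. A minimal sketch of a first-pass ingest, streaming the XML so a multi-gigabyte export doesn't blow up memory (the sample records below are made up purely to exercise the parser):

```python
import os
import tempfile
import xml.etree.ElementTree as ET
from collections import Counter

def summarize_health_export(xml_path):
    """Count Apple Health <Record> elements by type, streaming the file."""
    counts = Counter()
    for _, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag == "Record":
            counts[elem.get("type")] += 1
            elem.clear()  # release parsed children to keep memory flat
    return counts

# Tiny stand-in for export.xml (attribute layout follows the standard export).
sample = """<?xml version="1.0"?>
<HealthData>
  <Record type="HKQuantityTypeIdentifierStepCount" value="523"
          startDate="2026-02-10 08:00:00 +0100" endDate="2026-02-10 08:10:00 +0100"/>
  <Record type="HKQuantityTypeIdentifierHeartRate" value="61"
          startDate="2026-02-10 08:00:00 +0100" endDate="2026-02-10 08:00:00 +0100"/>
  <Record type="HKQuantityTypeIdentifierStepCount" value="301"
          startDate="2026-02-10 09:00:00 +0100" endDate="2026-02-10 09:05:00 +0100"/>
</HealthData>"""

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
    f.write(sample)
    xml_path = f.name

counts = summarize_health_export(xml_path)
print(counts)  # Counter({'HKQuantityTypeIdentifierStepCount': 2, 'HKQuantityTypeIdentifierHeartRate': 1})
os.remove(xml_path)
```

Dropping the export into the system's inbox and running a pass like this would at least confirm the file parses before any real processing is designed.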
So it's kind of, you know, available to the system. I still need to figure out how to do the processing. Then I want to add my screen time data or more specifically the data from the timing app on my Mac, because it has a unique feature that I think no other timing apps have because it requires like a workaround. Apple kind of blocked it, but then they've found a workaround, I think, where it actually combines the screen time data from my, all my Apple devices like my iPhone and iPad, the iPad I haven't been using for a while, but essentially my iPhone and my Mac and the Apple Watch. I don't think it does. The iPhone and my Mac. And then additionally on my Mac, it does its own screen time or activity tracking, which is more comprehensive than the screen time app. But so it's actually the unique thing is that I get the screen time data from my iPhone also integrated into this full activity tracking on my Mac, which is super fucking cool. And everyone gets that like in the iOS settings in the screen time settings on iPhone, on Mac, you can see all your devices, but you can only see one month back and then it's deleted and it's actually deleted because, you know, Apple is kind of privacy focused and there's no way to like export it or get it out in a clean way. And there was apparently a trick before with like the database, but then Apple changed it. So I don't know how it's working now. I wasn't able to find like a hack myself, but this app works. That's, was the status when I looked into it, I guess it's a couple of months ago. And so I have this one and it's fucking cool. And this app lets, they then have an API, so I haven't looked into it again, but I think all that data is going to be available. And I think it also just stores it locally on my machine so I could programmatically get it. And then the app also has like an export feature, which I haven't looked into. 
You have to kind of choose to structure it in some way as like a report, but I think I can like export all my data as well, which is cool because not only does it allow you to export my data, but it also does a lot of increased tracking data on my Mac and it has the extra thing of, it takes that screen time data from my iPhone, which is not accessible in any programmatic format. It does two things. It takes it into itself, like via the iCloud sync to the Mac and then into this third-party timing app. It takes it into itself, so now that data is exportable as well. I think at least I haven't double-checked, but I think. And on top of that, it now also stores it indefinitely instead of the four-week limit that Apple puts. So it's actually super fucking cool. I'm actually so glad that I discovered that. Yeah, so that's something I want to, and I can do tomorrow, set up my system to be able to connect to that. At least like the initial phase, you know. And then my workout data, yeah. That's the other big one, yeah. My workout data, so I want. And I have Apple Health already, but then I want to add on Strava for sure because it can add, I guess, like the main data is there, but it just adds essentially my titles and descriptions and also my photos, if that's available. I don't know if it is, but my titles and descriptions because I can't put that in the health app. And then also I want to add Hevy app integration because that's where I track my strength workouts. And again, that might not be necessary if it already has the Strava because I've already set up the way where it syncs the Hevy stuff to Strava. And I think that's, and then I set up like an extra automation where it also takes all the Hevy, not just the workout, but then all the Hevy details like the textual description of the exercise that it also puts there on Strava. I think that automation just broke actually. Maybe my subscription ran out or something because it was set up on either make.com or n8n.
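Whatever export format the timing app actually produces, the first processing pass would look something like this: read a report export and total tracked time per application. The column names here ("Application", "Duration (s)") are hypothetical stand-ins, not the app's real schema, so they would need adjusting to the real export:

```python
import csv
import io
from collections import defaultdict

def app_usage_totals(report_file):
    """Sum tracked seconds per application from a CSV report export.

    NOTE: the "Application" / "Duration (s)" column names are hypothetical;
    adjust them to whatever the real export actually contains.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(report_file):
        totals[row["Application"]] += float(row["Duration (s)"])
    return dict(totals)

# Made-up sample report to exercise the function.
sample = io.StringIO(
    "Application,Duration (s)\n"
    "Xcode,1200\n"
    "Safari,300\n"
    "Xcode,600\n"
)
totals = app_usage_totals(sample)
print(totals)  # {'Xcode': 1800.0, 'Safari': 300.0}
```

A pass like this is enough for the "initial phase" described above: raw export in, per-app totals out, with smarter categorization deferred to a later iteration.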
I think it was on make.com. But now as I'm starting to build this system out, once I later put it in the cloud, then all these like automations that I've made with different apps, I don't have them anymore. I just have them on make essentially. I'll just port that all over to my own system because it's just better, easier to control. And I can control it in natural language as well. Modify it at any time. Like my AI agent that has the context of my whole system can then tweak those automations if I want to and just run them. And that's so fucking cool. Dude, dude, dude. Well, I'm not sure. I don't need that because the automation already works. So now it may have broken, but it's probably super easy to fix. But it just feels so cool once I have that because it just opens the possibility to iterate on it so much faster. And then, for example, maybe I want to set up a thing on Strava where every workout, it uses like AI image gen to generate a image that's kind of associated with the workout somehow. And just post that on my Strava workout as well. And that would be like a unique thing that nobody else has on their Strava. And I could just set that up so easily and just like fun things like this. Oh, we cool. Yeah, I want to connect my workout data, essentially health and Strava and heavy into the system. And this timing screen time and activity tracking data into the system. And I think I can do that tomorrow. And then it already has quite a lot of
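Porting the Hevy-to-Strava automation off make.com would center on one documented Strava v3 call: PUT /api/v3/activities/{id} with a description field and an OAuth bearer token. A sketch that just builds the request, with the token plumbing and the actual send left out (the activity ID and token below are placeholders):

```python
import urllib.parse
import urllib.request

STRAVA_API = "https://www.strava.com/api/v3"

def build_description_update(activity_id: int, description: str, access_token: str):
    """Build (but don't send) a PUT request updating an activity's description."""
    body = urllib.parse.urlencode({"description": description}).encode()
    return urllib.request.Request(
        f"{STRAVA_API}/activities/{activity_id}",
        data=body,
        method="PUT",
        headers={"Authorization": f"Bearer {access_token}"},
    )

req = build_description_update(123456, "Hevy: pull day, 5x5 weighted pull-ups", "ACCESS_TOKEN")
print(req.get_method(), req.full_url)
```

Sending it is one more line, urllib.request.urlopen(req), once a valid token has come out of Strava's OAuth flow; the AI-generated workout image idea would use the same endpoint pattern against the activity's photo upload.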

Yeah, so like happenings of the day, things I maybe thought about or did well or didn't do well or stuff. And at the end of the day, some like, you know, analysis, summary, what were my goals, did I, did my actions that day get me closer to my goals or further away? And analysis of my like my time expenditure in different categories and my progress on whatever I was working on. And maybe some stats like, did I train more or less than usual? Did I make more or less of these like thought stream recordings than usual? Yeah, also like how much money did I spend and like what did I spend it on and was that a good investment or not? That would be so cool. Dude, today I spent over a hundred kronor on a single protein shake. That's fucking crazy, man. After gym, I figured let me go to Joe and the juice, buy a protein shake there. And I just ordered it. I don't look at the price, but then when I paid, I saw the price and it's not gonna stop me from paying. I'm gonna pay anyways, but inside I thought like, holy shit, it's over a hundred. It was like 116 or something. I'm like, bruh. That's crazy. That's ridiculous actually. Anything else I wanna say? I still haven't gotten around to fucking cutting my hair and I just I don't wanna bother with it. I just wanna work on this thing. Yeah, the planned work blocks today were too short. I was like 30 minute work blocks. Like it's just too little. It goes too fast. I want to stay longer. Everybody always recommends 90 minute work blocks. So I might try that or 60 minute since I just don't wanna sit too long since I'm just to get a little bit. I think normally 90 minutes is fine, but now since the last day, so I was a little bit too much on it. I just still wanna... So I don't know, today, tomorrow I'll try 60 minutes or 90 minutes. I don't know. 
And also I think I need to have like a rule that like this main interesting work that I wanna do is kind of dedicated to the time where I'm out in the city and I'm like working properly because then once I get home, I need to put it to the side and leave the time to existing in real life or doing these small diverse tasks and stuff because otherwise I end up doing that main work also at home and I never have time for the small diverse tasks and communicating with people and I just end up over-focusing on the one thing. So I think maybe that's a good rule with my current setup. Like it's kind of time wasteful that I kind of go out to just get out of the house and do some main work while I'm out. And so I think that's when I should do this like main work. I wanna build the system or whatever. And then when I get home or also on the way home, then I need to just put that to the side and focus on the other small tasks or like do research on the things I'm interested in but not actually work on the system like that and just leave it as having ideas. I can capture ideas if I want to, like I'm doing now in this recording, but then leave it to the next day, the proper session to actually work on it again. Because today I came home and then I just wanted to do like a little bit more, a little bit more and then it just ends up like eating up my time. My time just like vanishes. Today was weird. I felt like I was pretty quick with all my working sessions and stuff and didn't spend too much time doing anything, but suddenly the day was like over. Time passed very quickly today. Yeah, I'm gonna go to bed now.
695f39e08cfa14b32ce031b67fd3e42ee9c991349504e555312ec887b091b3cc_210602870a55.m4a
Wednesday, February 11, 2026
11:14 AM ยท 0:15
Essence

The speaker adjusted their breakfast plans, saving part of it for lunch and documenting the change with a photo.

Summary

The speaker didn't finish their entire breakfast, so they decided to pack the remaining two pieces for lunch instead. They also took a picture of the packed meal.

View full transcript
For breakfast, I ended up not eating the full thing I prepared, and so that's why I left two of them, two of the pieces for like a packed lunch instead. And I took a picture of it.
9199a8d31d3358a88869a26b35093915817a1092307a3e719f19223bff74dad1_c84478f1a5a5.m4a
Wednesday, February 11, 2026
11:09 AM ยท 2:11
Essence

The speaker is experiencing an unusual heart sensation, feeling exhausted and stressed, which they attribute to an intense workout from the previous day rather than caffeine or current activity.

Summary

The speaker is noticing a strange sensation in their heart this morning, describing it as feeling exhausted or stressed, similar to the effects of too much caffeine or intense cardio, even though they haven't had coffee or trained today. They believe this feeling is a lingering effect from a very intense workout session yesterday, which pushed their heart rate close to its maximum. While they acknowledge it's unusual to feel fatigue in the heart or lungs in the same way as muscles, they're not worried, attributing it to their body recovering from the strenuous activity. They plan to stick to normal bodybuilding sessions and would avoid high-intensity activities like intervals or running in this state, though they believe light weights or an easy, low-intensity run would be fine.

View full transcript
You know, my heart is getting weird right now, like, it's just very exhausted or very stressed. Maybe it's a similar feeling to if you drink a lot of caffeine, like you feel the heart beating more. But also, I would say the same feeling as if you're very exhausted, like you do cardio training and your heart is working a lot. But that's an interesting thing because it's just the morning, I haven't been training, and also I actually dropped the coffee today. I usually have coffee every morning, but actually today I didn't. But I feel regardless, but I think the reason is very explanatory that it's just from the training yesterday, which was a very intense session, and that I'm still today feeling just, I don't know, like, I've never heard anyone say that you get, like, you know, fatigued or tired in the heart or the lungs in the same way as muscles. I think it's just something like that, from very intense, you know, very high heart rate, close to max heart rate workout session yesterday, that I'm just feeling, you know, the body being fatigued from that today. So I'm not worried about it, I'm just noticing it more than expected right now. And so I'm just doing normal, like, gym bodybuilding sessions, so I'm not worried about it. But I probably wouldn't do like intervals in this state, and I would be cautious of any type of running. I think probably if I did light weights or lightweight running, like an easy run, low intensity, it would be completely fine.
fcd4eb4b9b3bfe252e2024ed9528151ef0307b2e2507abc011cee5aec73d9b5d_e02d64d95a30.m4a
Tuesday, February 10, 2026
11:12 PM ยท 2:33
Essence

The speaker experienced recurring gas and a stomach cramp, or "hold" (a side stitch), during their workout, raising concerns about their body's response to exercise and their memory of past occurrences.

Summary

The speaker is adding to a previous thought recording, noting that they experienced gas again today, particularly during their workout while running, similar to an issue they had last week. They describe the gas as bad-smelling and uncontrollable during intense exercise, though not as severe as the previous week. They also experienced what they call "hold" – the Norwegian term for a side stitch, a cramp-like, uncomfortable feeling in the stomach that makes breathing harder, often associated with running. While acknowledging that high-intensity exercise always affects breathing, they felt this "hold" more acutely today than usual, despite not pushing into high intensity particularly quickly.

View full transcript
Yeah, I just add on to the previous Thoughtstream recording. I just remembered that I wanted to add that I did have some more gas again today. I didn't really notice it throughout the day. There was either nothing or I had a little bit. I don't remember at this point, which is funny. It was literally two days and still I don't fucking remember. So it just proves how bad my memory can be sometimes. But then during the workout, when I started running, then it came, and that was the same problem when I was at this session last week, because I was like dropping some bad farts while we're doing the thing and like nobody can hear because it's loud, but like I can, the smell was bad. And like, I can't contain it when I'm doing hard exercise like this, like I just have to let it out. Or I don't know if I could contain it if I really tried, like I'm not really trying because I don't want that challenge, to be honest. And so I had that today as well, but not as bad as last time. A little bit, but like too much, like it should be nothing, I think. Yeah, it should be nothing. And I had it a little bit and that was that. And then separately, or maybe related, is I definitely got this, what we call in Norwegian, hold. I don't know the term in English, but it's this like feeling that you, uncomfortable thing you get usually from something like running, where you get kind of like a cramp in your stomach or you feel, it's kind of harder to breathe a little bit because your stomach feels kind of like full or bloated or cramped, something like that. And I don't remember if I'm usually getting that on these sessions previously. I mean breathing is always like interesting when you do like really high intensity exercise, but I think I got more of this hold today than what I usually get. I think most of the time I'm not really getting it or thinking about it today. I definitely got it, but it could be related to like pushing up into really high intensity really quickly.
But I didn't really do that. Yeah, I don't know. I just noticed it more today.
2d82989608f0ab974d697164ee420ec5779fd689eca202fb0f429ae36c5e9b14_5f42e1d80add.m4a
Tuesday, February 10, 2026
10:52 PM ยท 14:16
Essence

The speaker reflects on a day of resetting, aiming for a simpler life less consumed by tech, while also documenting personal health observations and systemizing daily records.

Summary

The speaker is going to bed at a reasonable time after a good "reset day," shifting focus from excessive tech to a more balanced life. They aim to keep this recording brief, despite a history of long monologues, and want to streamline their journaling by only documenting information not automatically collected elsewhere. This includes noting what they ate for dinner, though they often take photos, and considering the complex definition of "healthy meals." They plan to stop repeatedly mentioning their consistent breakfast, assuming it's the default unless stated otherwise, and will rely on a smart system to capture other daily details like lunch, which they've already documented. The speaker then urgently needs to use the restroom. They recall documenting their first lunch during a stroll and a second lunch bought at a cafe, noting that photos and bank statements (if timestamped) could serve as records. They reflect on being somewhat antisocial at the gym but managed to speak with new people, despite initial anxiety. A significant positive development is the improvement of their acne, which they attribute to medication, noting a rapid reduction in pimples. They still experience dry skin and lips, a side effect of the medication, but it's less severe now. They also express concern about what seems like excessive hair loss, observing many loose strands on clothes and beanies, and plans to research normal hair shedding and consider hair care products. Finally, they remember they need to start taking creatine tomorrow.

View full transcript
I'm going to bed now at a reasonable time, good time relative to the last few nights, so that's very good. And overall, today has been good, as kind of a reset day, getting me back into living a more proper life instead of being too obsessed with the programming stuff and the tech stuff. I want to keep this thought stream recording brief, but I already know there's so many recordings where I've said that at the beginning and ended up yapping for a long ass time. I could note, I mean, now I've done recordings throughout the day, I could note now document what I've done for the rest of the day, but honestly, I think I don't need to because again, I want the system to be, I'm trying to think that, you know, I don't have this system yet, but I wanna set it up. And I know that there's so much data that's being collected automatically, like I don't need to say anything. I guess what I ate for dinner could be relevant, but this time, now I took a photo of it. And I think actually, I don't need to say anything else. Like I could add context, but why? I mean, relevant context, is that useful? Usually it will often be like mom's dinner, like she's made something. And she's like usually making healthy meals. Well, I mean, that's, this term like healthy meals, I feel like it's a very complicated term. Like what does that mean? And like, is food inherently healthy or unhealthy? Like no food is really like bad or good. It's just about how, like, that the diet overall suits the needs of your body and lifestyle. And that is not too much of something or too little. Yeah, so I don't think this point didn't really, I guess, earlier today, yeah, so breakfast again, yeah. So I wanna document the things that where the data is not being collected in any way. Just things I wanna mention, yeah. So the breakfast, again, it was the same. So, I mean, at this point, I'm saying the same thing every day, which means that I can stop saying it. 
And it means that we can kind of just assume that that is my breakfast unless I say something else. And maybe I have a different breakfast I forget to say, but you know, it doesn't matter that much anyways. But since that seems to be a habit that it's like usually I start every day with that similar type of bowl of oats and like a cup of coffee. So I don't need to reiterate that every day. I'll just say like normal breakfast, or I'll just not mention it. Lunch, I'll mention already on the thought stream today, so why mention it again? You know, for organization and for clarity, but, so of course, there's reasons and that, but then think about why not mention it because I wanna keep my system simple and I know the data's already been captured. So if the system is smart enough, then it will have captured it and I will not have to reiterate it right now. And so instead of reiterating in alpha structure, I should instead increase the goalposts and make sure that I design a system that's smart enough to capture it. I'm gonna need to take a shit right now. I'm lying in bed recording this on my Apple Watch. Fuck, I'm probably gonna have to go get up to take that shit. And then, yeah, when I was out on my stroll today, I documented the first lunch, but then later I bought also, I think, at a different cafe, which I've taken a photo of. And so then we have the photo and we have a bank statement that I bought something at a cafe. The bank statements are sometimes not that descriptive, but, you know, the timestamp together with the photo, or do they have timestamps? They probably don't. They just have dates. It would be really useful for me then to get timestamps on my bank records, actually. I don't think I can get that maybe through the Apple wallet. I'm not sure. Or if I took like all the notifications that my phone throws and log that somehow, but again, I don't think there's a way to really set up a system for that. Okay. Yeah, not much more to document. 
On the workout, I was kind of antisocial. I was a little bit social, but I was planning to talk more with this guy. I forget his name, but then it ended up not really happening. But I spoke with some new people at the end, so it was kind of good, to be honest, but I wish I was more social from the start. In the beginning, I felt sort of anxiety. I was just sitting alone, doing the warmup alone. Yeah. Okay, yeah, so my acne, that's something I've been wanting to talk about for days and I've just forgotten about it. My acne is getting better now, finally. And so I'm pretty fucking sure it's because of the meds. I mean, could be random variations or whatever, but really, like it's very consistent with me starting the meds and then it taking a little bit of time to kick into effect. And so that's fucking amazing. Now I have almost zero pimples again. Like it went away so fast, like overnight or over two days. I feel like it got kind of worse and worse and stayed bad and then like in two days it just vanished. And I think when I say vanished, it's like, I mean, you can still see regularities, but I don't have the like pimples, the like yellow dots that stick out, which are the fucking worst. And I think it vanished so fast because when I'm getting these things, they're all small and they all kind of vanish kind of quickly, but then new ones keep reappearing throughout the day and every day. So it's like, they always disappear quickly, but new ones also come just as quickly. But now they like stopped coming. And so that's fucking great. I just have like, yeah, one or two, three, I have some small ones on my face, but it really didn't bother me today. Like, yeah, that's fucking amazing. And tomorrow it's probably gonna be even better. So that's amazing. I still have some dry skin and dry lips, but like, it's not been that bad. Or I guess I've been so much inside now at home also lately. I haven't been thinking about it that much. Anyways, that's great. 
I was supposed to have started taking creatine by now, but I've forgotten about it. So that's gonna be a extra goal for tomorrow is to remember to start creatine. Although I'm not setting a reminder or maybe I should have. I don't know, we'll see. Hopefully I start that tomorrow. Now I really need to go take a shit, man. Oof, I feel the pressure. Um, I just wanna document this stuff. And then my hair, yeah, I've forgotten to cut it, but also past that, do like hair loss. I'm wondering if I have a lot of hair loss. I'm not sure. Because every single day, I see like my hoodies get a lot of like loose hairs on them just like throughout the day. Or if I rub my hands through my hair a lot, I'll get more like loose hairs that like have fallen off in my hands or on my clothes. And on white clothes, I don't really see, but on black clothes, it's very visible. Also, if I wear a beanie, it will like get a lot of hairs attached to the inside. Like when I take it off, that's a lot of like loose hairs. And like, I don't think there's supposed to be this many, man. Like I think a little bit of losing a few strands every day is normal. I feel like I'm losing more than that. I don't, maybe it's not like huge hair loss, like I'm afraid all my hair is falling out, at least not for now, but I think it's a little bit more than what is usual, but I'm not sure what is actually useful. So I should research that. Like, I don't know, maybe for girls with long hair, if that happens a lot. And then with the length, I mean, I would think the length kind of doesn't matter, the amount of hair strands is the same, but I think the length, it just makes it more visible when you do lose some or if they break. But overall, my hair does kind of feel the same. Like I don't feel like it has thinned, but it's just, I guess a little bit alarming if I see a lot of loose strands, but more it's just annoying because it's kind of like, you know, making my clothes dirty in a sense, especially the black ones. 
It's like, if you have dandruff, you get these like white things, flakes that can be very visible, looks unhygienic. And same with that, if there's just a bunch of hairs on your T-shirt or hoodie or whatever. So I should look into that and maybe, you know, some hair hygiene or better products to take better care of my hair to make sure it's healthy. And yeah, and maybe even products to prevent hair loss. That's something to consider. Okay, what else? Lips. Lips have been dry since starting the meds, but I haven't let myself bother by it too much. And now it's honestly kind of slightly less bad because I got that, they got so dry at some point. I think a little blood there, like it cracked, but now it's better. I haven't been using any moisturizer or lip balm, except for today. I did it for the first time because especially in the face, I had like a bunch of shit, like dry flakes and stuff and I didn't want to go out with that, so moisturize that shit. But I'm trying to kind of avoid using moisturizer and lip balm because I'm afraid it's gonna reduce the body's natural moisturizing of the skin and so you become dependent on the moisturizer. Again
5b4602d72001b41291b4d9ce05d9dd2854bb66e92ef55884578d5c5084807d2f_ad43ca54f4d7.m4a
Tuesday, February 10, 2026
6:56 PM ยท 8:33
Essence

The speaker is struggling with the technical setup of AI systems, particularly OpenClaw, and is seeking efficient ways to extract insights from personal data, especially voice memos, to optimize their fitness and overall productivity.

Summary

The speaker recounts their frustrating attempts to set up AI systems, trying both local and cloud-based solutions. They initially experimented with Codex, then delved into OpenClaw, which consumed a significant amount of time without yielding results due to unexpected onboarding complexities and issues connecting to WhatsApp and Telegram. This technical struggle led to a feeling of wasted time, impacting their personal goals like going to the gym. They are now focused on identifying high-leverage tasks for their AI system setup, prioritizing the integration of fitness and health data to optimize their training plan. A major goal is to automate the extraction of insights from their voice memos, which they currently record in Apple Voice Memos. While exporting and transcribing are manageable, the challenge lies in intelligently processing the vast amount of unstructured text to derive meaningful insights. They recently encountered a promising open-source Google project designed to extract structured data from unstructured text, which aligns perfectly with their need to build a higher-dimensional graph of information, encompassing entities, relationships, and temporal changes, to better understand and utilize their personal data.

View full transcript
Are we back? I've been trying to set up some initial parts of the system, but encountered some issues. So I figured I would try both locally and in the cloud. So locally, I just tried it very simply. Seems like everybody's using Claude Code, but I just tried Codex now because I had a subscription, so it's free for me for a month. And I mean, I didn't do anything, I just kind of started it. So I looked into how to give funds to agents. Seems like a Revolut virtual card can work. It's so crazy. Then I asked it about the possibilities for the Google Maps position data thing. Seems like there was no real solution for that, so that's a problem. I asked it about if I could make my own app to do it, and it says, you know, Apple locks stuff down pretty badly, so it might be difficult. I'm not sure. Maybe I could make one kind of just as a dev, like it doesn't even need to be like an app store stuff, just for myself. Maybe that would be easy and I could bypass all the other permissions. But I think there might be like, yeah, when you put a developer app on your phone, there's like a seven-day time out on it, maybe. Then most of the time I spent trying out OpenClaw, and that took way more time than I expected, and I didn't get fucking anything done. So it was frustrating because I did the most simple setup. I just did like a hosted VPS where they have it like set up to work with OpenClaw, just to be like one-click install. And I click it, and the OpenClaw dashboard is there. But it seems to have not been onboarded properly. Like you're supposed to go through this onboarding through the terminal that nobody told me about. The web thing didn't really work. It was confusing. And then I tried connecting it to WhatsApp or Telegram, and I wasn't really able to do that as well. And in the end, it seems like I may have been able to connect it to Telegram, but there was like some other issue with the OpenClaw setup.
And with WhatsApp, I think definitely WhatsApp has just been blocked. Like it seemed to be working, but not working at the same time. So I figured maybe to Google it to see if they have blocked it, but that might make sense that they would have done that, because a lot of people are using it and it causes issues for them. And then I read an AI overview by Google which said that they haven't quite blocked it, but I didn't look more into it, so I still don't know for sure. Yeah, so I got sidetracked. It ended up using more time than I expected, which is a problem. Again, time just fucking disappeared. You know, I said previously that I might go to the gym like an hour before just to fucking run on the treadmill, not really because it's productive training, but just because I want to do something always and not sit still. But it ended up not happening. I'm trying to think about what's the highest leverage thing I can do, and specifically within the AI system setup, what's the highest leverage thing I can do fast that's not so complex, so I don't spend a lot of time just fucking jamming on stuff but not actually getting any value out of it. And I think one very important thing would be to really pull all my fitness and health data together, mostly fitness, but like the health-related stuff that ties into fitness, just to make sure my training plan is good, because that's one of the things that I'm doing every day, and I'm afraid that it's a suboptimal setup. And then, yeah, as I said, if I can really get one way to have the AI that I can contact through any modality, messages, voice messages. Actually I asked it about all the stuff, but we see that it's not set up. It's not set up. Yeah, I know. And yeah, I really want to find a way for all these thought streams that I'm doing. Yes, that's one of the most important things I can do, to actually automate a system that can extract insights from it.
So now that we've settled on recording in Apple Voice Memos, I need to automatically get them out of there, which I don't even know if it's possible. I mean, it's like the easiest way for me to record them, but it might be locked down privacy-wise. Like I can export them easily, but it's a manual step. Then they will need to be transcribed, that's easy. But then, like actually getting insights out of it, the problem is just there's so much of it that the other ones don't handle it so well. It's just like too much stuff, they can't handle the context. I just need a smart way of actually extracting insights from it. And I don't know exactly what that is, but just as I was scrolling on X, I saw one interesting post, which I would normally really not be interested in, but it caught my attention for exactly this. Seems like it was a new open source release from Google, a project that's made to extract structured data from any form of unstructured text, which is exactly what I'm trying to do, essentially. And what I ideally want is something even higher level. It's not just structured data, it's more of a much higher dimensional graph. I don't know, you just want all the information, but how do you represent that in data is a super complex problem. There's a bunch of entities, relationships between them, different types of entities. And then there's the concept of time, things changing over time. That's interesting. I'm just going to... Just go through them.
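The "higher dimensional graph" this memo asks for — typed entities, relationships between them, and facts that change over time — can be sketched as a small temporal-graph data model. A minimal sketch only; all class and field names here are illustrative assumptions, not the API of the Google project mentioned:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Entity:
    id: str
    kind: str  # e.g. "person", "place", "project"

@dataclass(frozen=True)
class Relation:
    source: str                       # Entity id
    target: str                       # Entity id
    label: str                        # e.g. "visited", "works_on"
    valid_from: date                  # when this fact became true
    valid_to: Optional[date] = None   # None means still true

@dataclass
class TemporalGraph:
    entities: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)

    def add_entity(self, e: Entity) -> None:
        self.entities[e.id] = e

    def assert_fact(self, r: Relation) -> None:
        self.relations.append(r)

    def facts_at(self, when: date) -> list:
        """Relations that were true on a given day."""
        return [r for r in self.relations
                if r.valid_from <= when and (r.valid_to is None or when <= r.valid_to)]

g = TemporalGraph()
g.add_entity(Entity("me", "person"))
g.add_entity(Entity("cafe", "place"))
g.assert_fact(Relation("me", "cafe", "visited", date(2026, 2, 10), date(2026, 2, 10)))
print(len(g.facts_at(date(2026, 2, 10))))  # 1
print(len(g.facts_at(date(2026, 2, 11))))  # 0
```

The valid-from/valid-to interval is what makes the graph "temporal": the same query asked for two different days can return different facts, which matches the memo's point about things changing over time.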
Tuesday, February 10, 2026
2:50 PM ยท 47:18
Essence

The speaker reflects on the potential of AI to infer their daily activities from various data sources, then pivots to a philosophical discussion about the subjective nature of importance and the pursuit of a fulfilling life through cutting-edge technology and optimized personal fitness.

Summary

The speaker begins by musing on how an AI could infer their daily activities, like visiting cafes and shopping centers, by combining data from their location, screen time, bank statements, and even Spotify. They note that while some actions, like specific app usage or conversations, might require deeper access (e.g., to their ChatGPT account or constant audio recording), a significant amount of their life could be understood without direct input. This leads to a broader reflection on technology's role in life, emphasizing that it should support and superpower real-life experiences rather than replace them. The speaker expresses excitement about living in a time of unprecedented technological tools, which allows for pursuing novel endeavors that push the boundaries of human achievement, contrasting this with more traditional hobbies. They then transition to discussing personal values, specifically the importance of fitness. They argue that importance is a subjective choice, rejecting objective morality or a single 'good' way to live. They briefly touch on the concept of consciousness as a spectrum, even within humans, before returning to the idea that pursuing revolutionary, bleeding-edge technological projects is a personally fulfilling way to live. Finally, they delve into their desire to become a 'superhuman athlete,' acknowledging the significant effort required but framing it as a pursuit of happiness in the process itself, rather than a fixed end goal. They envision creating a highly optimized, data-driven plan for fitness that accounts for human unreliability and allows for continuous adjustment, aiming to automate data collection as much as possible to reduce manual effort.

View full transcript
I listened to music a little bit and then went into another cafe to just read some more, especially since my hands were getting cold. Maybe I'm walking with them in my pockets. And I'm out again. And I realized, as I was saying that, that again, this is from data that the system should be able to know, like, infer exactly that without me saying anything, because it would have my location and like the time of the location. Checked a little bit more outside around the last monument and then went into a shopping center here. It would be hard for it to know a specific cafe, but it would see a shopping center. Probably. And then maybe from the screen time on my devices, it could see what I was doing inside there. Like I'm, you know, I might be eating and then there might be, like, no data really collected automatically unless I take a photo of the food or something. But it could see my bank statements, see if I bought anything. It could see how long I stayed there from the location time, and screen time from all my devices to see if I was doing anything on them, and if so, what. And from that, it would be able to infer without me saying anything. And then from my Spotify data, like when I'm consuming music, it should be able to infer that I kept walking a little bit longer while listening to Spotify, bought another hot cocoa, and then walked a little bit more, then went into the shopping center. Didn't buy anything. It doesn't know where I went, but you know, I found like a place to sit down. So probably like a cafe. Then it should infer from my screen time that I was on my phone the whole time and I was playing with the lyrics learn app that I've made. So it should see the app and also the URL that I'm at. And then I like hop into the ChatGPT app. I don't think it can see what I do inside the ChatGPT app from that data, but it can see that I went into it.
So maybe to add something, actually I went into the ChatGPT app, to the Codex section, to send an instruction for an improvement to the app. Just have it code that in the cloud, submit a PR, which is super fucking cool, by the way. It still feels not that smart though, kind of limited. I just need to find a way to do this, but do it with more planning or an orchestrator agent, because it feels like the agent I have access to right now is just a little bit dumb when I ask it to do a couple of things. But it could be the way I'm prompting it. If I ask for more specific testing or completion requirements, it might do a better job. I haven't looked into it. But actually, while that could not be inferred from my screen time, it could be seen that I went into the ChatGPT app. And then if it had access to my ChatGPT account, then the system could check my activity there and see that the only thing I did was send that one prompt to Codex Cloud. And so then it could infer almost everything I did. The only thing it wouldn't know is like whether I was with someone or alone, like whether I spoke to anybody at all. It wouldn't really know. And you know, other things from reality, but really, a lot of the main stuff, you see, just as I'm listing it, like by just combining the different data sources that are being collected automatically, it could really understand exactly what I was doing to a high degree. And you can take that further. Maybe I'm wearing something that's constantly recording audio 24/7, because that already exists. I could do it with my Apple Watch, but not like 24/7. And then that could infer if I was speaking to someone, or if it was just background noise. Yeah, and then, you know, I've considered that as like a video thing, like a constant POV feed or something, but that doesn't exist yet. So as I was saying there, I realized I wanted to go back to this thought stream and other stuff, stuff with the AI technicalities.
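The kind of cross-source inference described in this memo amounts to merging timestamped events from each data source into one ordered timeline, which an AI can then reason over. A minimal sketch, with made-up source names and events:

```python
import heapq
from datetime import datetime

# Each source yields (timestamp, source_name, description) tuples, already sorted by time.
location = [(datetime(2026, 2, 10, 14, 0), "location", "arrived at shopping center")]
screen   = [(datetime(2026, 2, 10, 14, 5), "screen_time", "opened lyrics app"),
            (datetime(2026, 2, 10, 14, 30), "screen_time", "opened ChatGPT app")]
bank     = [(datetime(2026, 2, 10, 13, 45), "bank", "bought hot cocoa")]

# heapq.merge lazily merges the pre-sorted streams into one chronological timeline.
timeline = list(heapq.merge(location, screen, bank))
for ts, src, what in timeline:
    print(ts.strftime("%H:%M"), src, "-", what)
```

Because tuples compare element by element, sorting falls out of the timestamps automatically; the interesting work (deciding that "shopping center + no purchase + phone usage" means "sat in a cafe") would happen in a later inference step over this merged stream.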
Yeah, there's some more life stuff that I want to get in. So this is documenting the context a little bit, but also it just helps me become aware of the things that will actually guide my actions in real life, because I've been spending too much time just like on the devices and computers and stuff. And it's in real life that, you know, life is happening. Technology should just support that and superpower me in living real life. It's fucking cool because we're living in the future, you know, technology is evolving. I have possibilities and tools that no other humans have had before. That's fucking crazy. This means I can be doing new things that no people have done before. So then, I mean, it's cool to try to become a really good chef, for example, super cool hobby. But again, I'm not inventing that. I'm kind of just doing something that other people have already done, and that's completely fine. It can give you perfect fulfillment and an infinite passion to chase. And it's kind of like extra cool to use that same concept of a hobby and a passion project, but just also apply it to something that's on the bleeding edge of what's possible for humans, so that the thing you're doing is going to be at a higher level, more legendary than what any other human has done before within that hobby, because you put in the hard work and interest and dedication over time, but also just because your tools are more powerful, because you're just lucky to live in this time. Now that does actually apply also within cheffing, of course, because I mean, new techniques develop, new research, new tools, and just better ways of consuming information and learning. So even within that, you can still take it to new levels that weren't really possible before. But even so, the leverage you get from technology there is kind of limited because you still have to be a fucking chef.
But when you try to do, set up these fully digital systems, the potential is just infinite and like, therefore the leverage is like infinite and the scaling is just infinite. Yeah, it's fucking crazy. Yeah, but some more concretely, like what's important in my life, bro. Fitness, hugely important. Why? Just kind of because I choose that it's important. You know, as an individual human with autonomy, you can just choose yourself what you want to deem important. I don't, I haven't done deep thinking into this. I don't think my philosophy is very developed, but just like intuitively, just in this moment without putting too much weight on it, I would say that I don't really think there's objective morality, objective good or bad, objective like what's important and not. I think it's really all like human creating concepts. Potentially I would say not just human. I do think that consciousness is not unique to humans. I think it applies to other living beings as well. And I think consciousness is something we've described throughout time as a binary thing. Like you have it or you don't, but I think in reality, it's a spectrum or not even a spectrum, but more of like a complex process, you know, really. But to choose an abstraction for now, we're going to say a spectrum where a being can be more or less conscious. And it does kind of feel like even within humans, it seems that some people are more conscious than others. And like, it's really hard to define this. It just seems like more, some people are more introspective, critically thinking more, like taking the world more on their own terms. Like I would say philosophers in general or people that engage in philosophy compared to people that don't, they kind of seem to be more conscious in a way. But again, this is not an idea that's really like developed in my thoughts at all, my beliefs, but I guess it's something I believe that I would be able to stand for at least for now. I'm also willing to change my mind. 
Yeah, let me stop yapping so much around and just kind of say what I think kind of like intuitively based on my understanding of the world as for now. Say what I said already. And therefore, also I don't think there's an objective like way to live a good life or a bad life. And therefore everything I'm saying about people being more conscious or living a better life or whatever, I'm kind of saying it like an objective thing. But then at the same time, since I said I don't think a lot of this objectivity applies, I guess I do believe that it's really just, in my opinion, that certain ways of living are just better than others or at least more in tune with my preferences. I guess it's more that it's just more in tune with my preferences or just how life makes sense to me. I don't know, this is fucking hard to talk about. I feel like I don't have the words. What's my point? I started diverging so hard again. What the fuck is my point? Living a better life. Yeah, yeah, yeah, yeah. Yeah, so just like doing something like the bleeding edge of technology that no people have ever done before and like revolutionary, that's just fucking cool. That's just going to keep me a great way to live life, in my opinion. Yeah, yeah, yeah, bro. I was just talking directly at the dog that was barking at me. Then, ah, yes, that was my point. Fuck, that was a long divergence from my point. Fitness is hugely important to me. And to me, it does kind of feel, you know, when I don't think too much about it, it does just kind of feel like an objective thing, like just like fitness is important. But I can acknowledge that it's really a subjective

or something, or a fucking yogi, what's it called, like a yoga teacher, I think there's a term for it. Or, and also, you know, the flexibility and body control and balance of a dancer or whatever, you know, so a lot of these things. Now I'm just listing random things. Of course, it sounds cool to dream big, but then at some point you also got to be realistic and it's like, bro, do you really need all this stuff? And do you realistically understand how much effort it actually takes to gain all this? Because, you know, people dedicate their lives to become pro athletes at different things. It's an investment. So really, I don't objectively need any of this. And there's no level where you're truly finished, but I just wanna be good at these things and also be actively developing at these things. There's a great quote from the Top G himself: having things isn't fun, getting things is fun. Great quote from another YouTuber, I don't even remember which, but it's flipped: it's not the pursuit of happiness, it's the happiness of the pursuit. It's the happiness in the moment of being in the pursuit of something and enjoying the process. The pursuit of happiness is endless. It's futile because the goalpost shifts. It's never there. It's like reaching the end of the rainbow, but you can flip it and either discover or create the happiness in the pursuit. And the thing you pursue, you can kind of choose. It can be a reachable thing or it can be an unreachable thing. Both are kind of fine. But if it's reachable, you need to be careful, because reaching it could actually be detrimental: you lose the happiness of the pursuit and you lose the meaning that you derived from it, which you used to base the meaning of your life in. I was looking for a better term than base: which you used to ground the meaning of your life, which you used to source the meaning of your life from. I don't know. Okay. I'm diverging a lot again.
The point is that I just want to be a fucking superhuman athlete. And I mean, I don't want to invest all my time and energy into this, but I'll invest daily. And I think you can have a great balance for this. You invest like one or two sessions per day with some rest days here and there. And then you eat a diet which supports that goal and you get enough rest. And then you still have the rest of your life to do everything else, but you can almost like time block a certain amount of your time and I guess energy, but let's just take energy a little bit out of it for simplification. Also, I feel like energy is so adaptive. Like you can kind of get used to it and still have energy for other stuff. You can kind of time block a certain amount of your time each day and week and month and thereby year, if you plan it properly, and conceptually time block those on like a multi-year, like a five-year plan or a 10-year plan, or let's say like a seven-year plan, from 23 to 30. For just: how are you going to be that dude who's in amazing fucking shape? And you can literally calculate how much time of your life you will give to that, and consider if it's worthwhile and how, because there are kind of diminishing returns. So you can choose where you want to place yourself on that curve. And now I'm talking kind of conceptually, because it requires having good data, having reliable algorithms for predicting what output you're going to get from the inputs. Like inputs are like training, diet, sleep, whatever. And then the output is your actual fitness and performance and body composition or whatever. And then also it requires you actually adhering to the plan, which is like a huge fucking simplification, just assuming that you as a human being is actually going to adhere to this perfect plan.
It's not even like a too strong an assumption. It's just actually impossible. Like humans just don't fucking do that. But then you can make a smarter plan, which takes into account the unreliability of humans or yourself and make it still work, you know, make it kind of flexible. I'm getting very theoretical again. Fuck is my point. Conceptually, you can make a plan that you're happy with and it's always open to adjust at any point, but like, you know, the plan you're following is good now and it's good for a long ass time. So you don't need to think about it day to day. So essentially, yeah, my point was then you dedicate time to these sessions, which is primarily training, but could be also more like flexibility, yoga type thing, which is, you can consider training or not depending on how I want to define it. And then some time for maybe like more proper rest, but I think you also don't need to think too much about it. Just assume the body kind of just rests naturally, but then proper sleep and nutrition is also very important. But it doesn't really take more time out of your day to do it properly. It's just a matter of like structuring your life. It's not a matter of discipline either. It's just a matter of setting up the right routine and habits and structure. And then it's good without you needing to worry about it. And I really wanna, I mean, it would be great to have this perfect plan. I don't know if it's really achievable. I can definitely get closer to it. And that I wanna do for sure. That's a concrete task that I wanna be working on right now and know that I'm getting closer to this, building a better plan, collecting more data, making a more data-based plan, making like a system that's gonna analyze the data live and adapt over time, give me feedback whether I'm following the plan or not. I really need to track, okay, let's think about things around health and fitness that can be tracked as data, either automatically or manually. 
And just like that can be tracked practically. And for all the manual things, it's so fucking valuable to think about, okay, how can we take this from more manual and make it into a more automatic process so that it can run automatically. So I can get the value of the data without the hassle of doing it manually. All right, so you have just like health baseline stats that are collected through the phone or the watch. Then like basic activity through the day, like steps and stuff collected by the phone or the watch. Then we have specifically like fitness activity, like training sessions collected by the phone or the watch or manual input through like Strava or other services. Like Heavy, where I kind of track my workouts. Then we have kind of sleep and recovery data, again already collected. And then what more? Then what I really don't have, which I probably need, is performance metrics, performance evaluation. Am I actually getting stronger, faster, more fit, or like a better body composition? Here is where there's just a big fucking question mark. I actually don't know. That's a big issue. And this is something that I need to improve with like my life fitness system because I'm just not really evaluating properly. I'm like training. I'm in like a good, very good habit and lifestyle of just like training. It's really just part of my life. I don't think too much about it. It doesn't require too much willpower. You know, sometimes it's easy, sometimes it's hard. Overall, it's just a thing that I do because I think it's good for me. And I sometimes enjoy it in the moment, although not always. And I see the results. I see that I am fit and it's because I've been training for years. So that's great. And that's a positive reinforcement cycle to keep doing the same thing. 
But I'm afraid that my progress is way too slow and that I'm wasting a lot of time and effort into performing training, or performing exercise, without it being effective training that actually makes me more fit. And that if it was structured differently, more training or less training or just different training, or at different times, or different types, something like this, with just the same investment essentially, I'm afraid that I'm wasting my time and energy. And I think probably it's true that the exact same time and energy investment, just programmed smarter, would yield much better results from my training. That's what I wanna investigate, because I don't wanna be fucking wasting my time and energy, right? I wanna get the highest yield for my input. I wanna optimize. It's not about maximizing the highest or the fastest fitness progression, because that requires a disproportionate amount of investment in terms of time and energy. It's about choosing where on that curve of diminishing returns you wanna place yourself. And then within that, make sure that, within the time and energy you're willing to invest, you get the optimal returns for it. And here I think I'm losing out a ton. And I say, I think I'm losing out a ton, and then you can ask me, like, oh, what makes you think that? And the thing is, it's hard for me to prove. I can't say concretely. What's it based on? Well, it's mostly based on fucking jealousy. Like the most unreliable thing ever. It's just that I see other people in the gym or in life or online who are more fit than me. And I'm like, fuck, I'm jealous of them. And it makes me insecure. And I know I've been training for a long time. So I'm like, why am I not as fit as them? And then I started thinking, okay, well, I've been training long, but you know, some people have trained longer.
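The memo's claim that "you can literally calculate how much time of your life you will give to that" really is just arithmetic. A sketch with assumed numbers (one 75-minute session a day, six days a week, over a seven-year plan; the figures are illustrative, not the speaker's actual plan):

```python
sessions_per_week = 6
hours_per_session = 1.25
years = 7
weeks_per_year = 52

# Total training time over the whole plan
total_hours = sessions_per_week * hours_per_session * weeks_per_year * years

# Compare against waking life, assuming ~16 waking hours per day
waking_hours = years * 365 * 16

print(f"Training time over {years} years: {total_hours:.0f} hours")
print(f"Share of waking life: {total_hours / waking_hours:.1%}")
```

Running this gives 2730 hours, about 6.7% of waking life over the plan, which is exactly the kind of number you'd weigh against the diminishing-returns curve the memo describes.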

But that's so fucking subjective, man, because even with the exact same physique, depending on my life otherwise, I'll feel sometimes happy about it and therefore confident, and I'll sometimes feel bad about it and fucking insecure, that I don't look good enough. It's a huge thing for me to upgrade my system in terms of fitness, is getting more objective evaluations of performance, which means I need to program this into my training. I need to be like, okay, on certain sessions where I actually see like how much can I do, or at a given intensity level, like, perhaps even if it's not a max session, I could be like at a certain run at a certain pace. How high or low is my heart rate? Because maybe I did the same run later when my heart rate stays lower. It could indicate that I'm more fit. So, but with strength training, though, like weightlifting for some of the strength goals, but mainly for like aesthetic goals, to be honest, then I think the training, the way I understand it, the training in itself is kind of a performance metric, because if you have kind of structured training planning, you do your sets of like three times eight or whatever of exercise, and you do the same like twice per week or every week, then that thing, even if you're not taking it to failure, but like staying two reps in reserve, it could still be that perfect metric because you should see you be able to over time increase the reps or the sets or the load essentially at the same perceived effort level. This makes sense when I see it in YouTube videos and in science, but I just haven't been fucking able to apply it in practice, man. I don't know, essentially, kind of maybe one of the biggest things is my frustration. You know, I do this training and then I go and lift and I feel like sometimes I'm lifting the same weights I was lifting when I started in the gym, bro. And sometimes I run and like, I know I'm running slower than I did when I was like 15. And I'm like, what the fuck? But I say I feel I know, but I don't know for sure because I don't have data. I just look at the numbers, but then it depends on if you did another exercise before, how your body is feeling in that moment. A lot of different factors.
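The benchmark idea above — repeating the same run at the same pace and watching heart rate — can be made concrete in a few lines. All values below are invented for illustration:

```python
# (date, pace in min/km, average heart rate) for a fixed benchmark route
runs = [
    ("2026-01-05", 5.0, 162),
    ("2026-02-02", 5.0, 158),
    ("2026-03-01", 5.0, 153),
]

BENCHMARK_PACE = 5.0  # only compare runs done at the same pace

# Lower heart rate at the same pace over time suggests improving aerobic fitness
hrs = [hr for _, pace, hr in runs if pace == BENCHMARK_PACE]
delta = hrs[-1] - hrs[0]
trend = "improving" if delta < 0 else "flat or declining"
print(f"Heart rate change at benchmark pace: {delta:+d} bpm ({trend})")
```

Controlling the pace is what makes the comparison meaningful: it removes one of the confounds the memo complains about (different effort on different days), leaving heart rate as a rough objective signal.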
You know, your training, recovery on the days surrounding that day. But, so I think I need to make a program that has very controlled inputs, but also that some of the outputs are actually evaluated on a consistent basis so we can have some metrics. Because otherwise I'm just shooting in the dark a little bit and my perceived performance, perceived fitness level, it's just too subjective where it just fluctuates based on other factors that are unrelated. Like it will literally fluctuate based on my mood, whether I think I'm actually performing better or worse instead of what the actual data says. Or I will too easily forget that my performance there, even though it may have been worse than before, it's like clearly affected by the fact that I was in a worse recovered state or I did a different exercise before or something. I got muscle ups, bro. I did my first muscle up, what, like a year ago, two years ago. And since then I haven't really gotten better, man. I just do the fucking same. I can do like a couple of muscle ups. I can do like, I have sets of like one, to be honest, like maybe two, but really I need to do like one and then take a break and then take momentum again. Yeah. But let's move on to other things though. Just other topics, like more rapid fire topics that are important for me to focus on, keep top of mind. Improving relationship with my family for sure. In general, living a life of more human connection, for sure. And more connection with myself, more in tune with reality, more just like existing in real life and enjoying that instead of being, living inside the screen. More daring to express myself and just do what I pursue, what I'm interested in, say what I'm thinking, not hold back. You know, good to be nice to people and not too nice. Like I know with my family, I'm too nice, but they're also too nice. Like it's not just me. We're all concealing the truth too much. So we're all just fake with each other and it's a problem. 
Like it's a good trait in a sense, but in our family, it's a problem because it's, it comes from a source of these puzzles are very complex. I don't know, but at least a part of it, it comes from not just being nice, but also being perceived, being afraid to say the truth, being afraid to be confrontational in the occurrences where actually that would be the like, you're being nice by not saying anything, but the actual nice thing to do is to actually say something, but say it in the right way. Take that confrontation because it's better for both of you. What else in my life? Yeah, well, I'm saying I have these data sources, but they need to be fucking connected. And if that's not gonna happen unless I do it myself, like on my position data from every day, it's not gonna be passed into my AI agent anytime soon by any system that anybody else creates that's like in, for example, the ChatGPT app, which is kind of expanding into an operating system. I just set that up myself. If I want it soon and therefore I should just do it now. And there's a big question, of course, can I derive meaningful value from it? Well, I don't really know. For sure, it's possible. Will I end up actually getting that? I don't know, I should try. I am thinking now since it's like it's come to the point it has, dude, what I need to do fucking instantly, I need to set up an AI agent that can do almost any task on my behalf, which is given just full creative freedom and full tools and everything, but then how do you trust it and make sure it doesn't do too much? I need to find somewhere where I can set up where you can do like an impressively wide range of stuff, but it can also be fully trusted so that I can just kind of give free reins. One of the biggest pieces there, which is also, I feel like not happening automatically anytime soon, is giving the AI a credit card so it can actually make purchases. And I think it probably is the setup. 
I don't know which service company I would use for it, but I'm sure with like a single, because I haven't looked into it, but I'm sure like this exists and it's tons of alternatives and it's very easy. Get like a virtual credit card and just set a limit on it. And then it can already go do some cool stuff. And then to make it actually useful, give it a ton of personal information right now and reins to be kind of free and creative, understand the ultimate goal which is just kind of be like evil or surprise me maybe. And then a way to interact with it that's very intuitive. So I text my messages, maybe videos, maybe also call it. And when I have this and then start adding personal context and then it's going to need a ways to manage that memory or that amount of data if it's a lot, but at entry level, just if it has these capabilities, now I can ask you to, for example, order something for me and it can actually do it and just get it delivered and it would be cool then if it also handles all the emails related to the order that I don't need. So it just handles everything and I just, it just tells me when it's ready to pick up or it's just at my door or whatever. Like I'm struggling, man. I wanna be walking, staying a lot outside and not being on my computer. It's just too cold. Like I actually can't be walking outside, especially now since I don't have my gloves. And I'm wearing my backpack, I'm having my hands in my pockets when the backpack is helping weight on my arms, so I don't get blood into my arms and it gets cold. I guess I could just push through. It's just, it's just so uncomfortable

I just want to go to the gym now before the group session starts so late, right? I can go there max, like, I'm gonna go there early just because I wanna do something. I wanna be inside. Now it's too early. But I mean, I can start like one hour. Now I'm stretching it, but like one hour before the actual group session, I can still be there and just start running on the treadmill for an hour or something. That was like really way too much, but just as a one-time exception, I think a good way to spend my time because I just don't wanna be. I don't wanna be sitting. I don't wanna be sitting on the computer. Okay, what other stuff do I need to do? Do my hair tie, get a color hairstyle. Figure out what I wanna do. Oh, and also the coding workflow is starting to be accessible through this even more hands-off format, like, for example, through this thought stream. Codex is really good now, so it's allowed to kind of go in there and pick a project. I don't wanna have one that manages just one project, but all my projects. That's like my main system that I'm making anyways, the kind of Jarvis. And that's the main one that I would be in contact with through any modalities. That one should then be able to navigate into and control any of my projects and do it itself and delegate other agents within the enclosed projects, just like one central hub, you know, to get in touch with anything.
Tuesday, February 10, 2026
1:24 PM ยท 47:34
Essence

The speaker is grappling with the balance between their obsessive focus on building an app and their desire for a more natural, automatically documented life, all while contemplating the nature and naming of their personal AI system.

Summary

The speaker finds themselves outside again, having been unexpectedly engrossed in building an app using codecs, which feels like both a bad habit and a productive obsession. They're now seeking a quiet, non-windy place to walk, considering the castle park after finding the docks too cold. They recount eating a packed lunch in a cafe, deliberately not specifying its contents to aid their data collection system, and also bought a hot cocoa to warm up. The speaker then delves into how various data points, like photos, calendar syncs, and fitness tracking, should automatically document their activities, aiming to reduce the need for explicit verbal logging in their thought streams. This leads to a deeper reflection on their "system" or "AI agent," which they've discussed extensively before, and the challenge of finding a suitable name for it, like "Jarvis," while acknowledging that the interactive AI is just one facet of a much larger, impersonal system.

View full transcript
Okay, we are outside again. I stayed there longer than expected. I'm a little bit sucked into building on the app again using Codex right now. It's funny, it's like a bad habit and an obsession, but it's also kind of good because you're building something and making something and you're very interested. Like in a sense, it's very cool. It's just bad that I get too focused on that. I just forget about everything else. Right now I'm finally trying to find a nice and quiet place to walk. So I figured I would go like out to the docks and stuff. And it was quite quiet, but it's also a little bit cold right now, because slightly more wind. So now I'm going to try to find a place that's quiet but also not windy. I'm thinking I'll walk towards the castle park, the royal park, however you say it. In the cafe now I ate my packed lunch, which I'm not going to say what it is because I want to test the data collection as much as possible. And I took a photo of it this morning, so it's in my camera roll. And there is also metadata with that timestamp that I took the photo, although I didn't eat it then. 
But with me just saying this, that should be enough and then the system should be able to do the rest. Of course, that image, it can be hard to determine exactly what the food is from an image and how big it is, like how much it is. But I think it's harder for, like, in real life, humans do a good job, but, well, they don't do it on a picture. I find it as a human much harder to understand how much or little food it actually is when I see it on a picture. So I'm guessing that the AI models, they're, like, you know, they're trained on pictures, so they should perform better on that because they're used to it. And it doesn't need to be exact, to be honest. Yeah, and then I also bought a hot cocoa in the cafe because I was just kind of cold. That was the main reason I wanted to buy something warm. It was looking nice. I was considering buying more, like a proper meal as well, but I figured I didn't really need it. Don't know if I should eat more before my training later today or not. I could say what the training is, but again, I shouldn't need to because it's booked through the Sats app, which means it's also automatically synced to my calendar. So there should be enough data in the system for it to infer it, or to actually just literally see it, to be honest. And then, you know, data is... Later my location will be there and my watch... Fuck, I forgot, I wanted to bring the heart rate monitor, the one you take around your stomach, for respect. That's the same reason. I forgot that one, but I have my watch. And I forgot the other watch strap, so it's going to be loose. So maybe I should have checked for those ones. I don't know, we'll have to see. But in general, when I do these workouts, then at that point also, my phone and watch, like my Apple stuff, will be in fitness mode and it will constantly see increased heart rate data like I'm clearly working out. 
So all this working together, you know, is gonna essentially automatically document that I did a workout, where I did it, and exactly what kind of workout it was. If I do not do one of these group sessions, but it's just like a normal gym session, usually I'm tracking it with the Hevy app. So then I put the name, what kind of session it was, all the exercises I did, or if I can't be arsed to track it, I just like write it in natural language. It will also sync to my Java and to Apple Health, the normal I'm not sure if it titles instead. I think it does. Okay, I'm rambling. My point is all this data is being collected. So that, like, if I do a thought stream just to think, of course I can think these things and say these things, but I'm also thinking about the thought stream that is supposed to be valuable because it provides context for the system so it can know everything about me. And therefore sometimes I'll explicitly list a bunch of stuff during the thought stream because I know it's a voice memo that's gonna be passed to the system in the future. So I'll explicitly list the stuff that I normally wouldn't spend the effort to lay out so thoroughly, just like reiterating what I did and what I'm going to do. But this I want to get rid of as much as possible because this should rather be, you know, just tracked automatically somehow so that the thought stream is really just my natural thought stream. So I spent a little too much time, I think, going into the details of exactly how it works technically with the location data. Like, it doesn't matter that much. I wanna go into more data sources, but first I wanna maybe clarify, like, I keep mentioning the system or my AI agent, autonomous AI system or whatever. So maybe I expand on what this means. But honestly, I don't think I do. Like, to me, it makes a lot of sense, but then I'm thinking I should explain it to a potential listener or reader to understand what I mean. 
But honestly, no, I shouldn't because I've talked about this so many times on voice memos or in journal entries or on thought streams or whatever before. So like, the context is already there and like I've talked about it a lot and it's spread out. But yeah, it should be perfectly clear exactly what I mean. Not probably from only this voice memo, it's not that clear. But from the expanded context of all the fucking stuff I've documented at this point, like, it's very clear. I think at least. So I wanna have a good name for it though, just like when I'm referring to it for the thought stream or just for myself, like what's the name? I don't know. I'm just saying the system now because any other more specific term feels like, you know, it's losing some of the meaning by being too specific. That's why I'm just saying the system. It's literally kind of just that. Or you could say like Jarvis, you know? Which is this popular term. It's the name of the AI assistant in Iron Man movie. Jarvis. I like, I wanna make my personal Jarvis. But it's not really it because, or is it? I'm not sure. The Jarvis is kind of like the AI that you talk to. But then you wanna have, is that then also the system, or is that just the AI and then there's a system around it? I think it makes more sense to say that there's a system around it. And I don't know, in the movie Jarvis is like this AI and I guess he kind of is the system and the system is him. But in reality, uh when I build this, I know that the, yeah, I guess you wanna put the name, it's just the whole system and we give it like a human name just to make it easier to refer to and because we just like giving a little bit of humanity to it. But we know that actually it's not a personal human like at all, it's just a system. And the thing you're talking to is not the system really. You're talking to like one modality of the system, it's like a part of the system that's set up to be good at talking or text input or whatever. 
input output modalities. But you know that that's like a gateway for you to interact with the system. So in a sense, you're talking to the system or with Jarvis or whatever. But as a human then we would think that like the thing we're talking to is Jarvis, like the person or the AI. Really what we're talking to is just like a small part of the system. And then we might create this visualization. So we'll create this like interactive voice AI model and we'll add like a visualization to it. And that's going to be like the personal imitation of Jarvis or whatever. But in reality, that's like that's really not what the system actually is. It's really not what Jarvis is. It's just this like one visualization that's set up and you can create like a lot of others. It's like it's not really tied to that visualization at all. It's just like something we like as a human for it to feel more human-like. So it's enjoyable to interact with. But I should still have a name, I think. I could just also use Jarvis. I could say my Jarvis or Jarvis, just like I'm creating Jarvis. I could say the system. I could say AI system. Honestly, at this point, I think, yeah, it doesn't matter. Like, I can use any term as long as I'm undecided, use whatever term I feel like in the moment. I'm just going to assume that the entity consuming this information, which is going to be the system, like I'm gonna build the system to consume this information

Yeah, I don't know where I'm going with this. Alright, let's do a very concrete thing. There's a way to interact with Jarvis. You want to be able to talk with it, talk with him. Why am I gonna choose to say him and her, or why not? You want to be able to talk with him the way that feels natural to you, which is essentially the way we talk with other human beings. You don't want like every way of talking. 

And then I have YouTube videos there, some big files in Google Drive as well. So in a sense, all of this is memory, but it's more like a, you know, permanent storage. But I mean memory of different layers, but in the sense that's like a long-term memory. Then you need different layers of the memory that's more easily and quickly available because there are technical limitations. You cannot keep the full content of every pixel of every YouTube video I made, for example, in that active memory because it's too much data, but you could keep maybe the full transcripts or like a paragraph summarizing or describing this video. You keep that in memory, suddenly you compact the data size so much while still keeping a lot of important value and context, but it depends on what the task is. And so this system of a memory system, it's such an interesting concept to try and build and to investigate and to research. And the value is just like fucking infinite. And in the process, you'll understand better how human memory works as well. You kind of try to discover some things that evolution has already just discovered by itself and we're just not aware, but then you'll probably also find ways to move way beyond that with technology as we already have in some parts, but not in other parts. Because given how the tech is right now, for example, with the LLM models that we use, in a sense, the memory of the LLM, well, in a sense, it's nothing. You send the prompt, it's kind of like a stateless function. You send the prompt and it returns the response back or even it just returns like one character, doesn't it? And then you're like, keep hitting it with that character to make the next character. The way I understand it. Or what it actually calculates, I think is based off a prompt, it generates a probability for every possible next character, but then usually you will just like make it pick the highest probability. I was getting loud again. Let me get some more coffee. 
Yeah, so in a sense, there's no memory there. But since the model is, you know, trained, it does this calculation through weights. You can say maybe conceptually that there's memory built into the weights. So kind of built into this function, even though you're just passing a prompt and it's just one thing. Then we kind of expand on that by doing something smart with the prompt we're passing in. Instead of, yeah, just generating a character, now we wanna generate a sentence or a paragraph or more. And so what we do is that, you know, as it's responding, we pass that input and also the letter it generated back into it for it to generate the next letter or character or token, maybe. And so that's in a sense a level of memory, but still not what we would really call the memory when we're referring to this meta language, but in a sense it is, or it's like a level of state, I guess. You could call these like different layers of memory, maybe it's like different layers of state management or state preservation. And the next level up now is like, so at that level, we pass a prompt and we get a response. That's not just like, you know, one character, one token, but a full response. Next level is now, it's like a chat history. So with the prompt, we also pass the previous prompts and responses so that you can see the chat history because that's relevant for the next response to kind of follow up its context. And then we realize, okay, there's more things we can pass as valuable context. For example, why just do this conversation? Maybe the person has had other conversations with this LLM that they wanted to be able to remember for this conversation. Maybe they've informed it that they're gluten intolerant and in a new conversation, it's weird if the AI suddenly has no awareness of that, because it's starting to feel very human-like since it has such abilities to understand natural language. 
We therefore assume that it also has the other abilities that come with someone who can do natural language, which has always just been humans. We assume it has human capabilities, even though it doesn't. It just has natural language capabilities. So you're going to assume it to kind of be human and to have memory. And then you start a new convo and you see it doesn't fucking know that you're gluten intolerant, suggests you eat this bread or whatever. Like, what, is this thing trying to kill you? And so then you can create this next level of memory system where it could pass in all of the chats you've had with it, also into the context. But you start running into this problem somewhere that there's a limit to how much context you can take. It's the context window of the model. There's a limit to literally how many tokens you can pass in in kind of a prompt. And then, ooh, something I never actually heard someone discuss, because the way I've heard it explained, they always say there's a context limit to the prompt you pass in, then you get the response back. But, you know, I think actually at the algorithmic level, you know, it's not making a response, it's making it one character at a time, which means that, let's say you theoretically fill up the context window 100% with a prompt, well, then it calculates that. It generates the first character of the answer, but then it's going to keep generating the answer. But in order for it to keep generating the answer, you now need to pass it, for the next iteration of the algorithm, what is written so far, so you need to pass it that first character that is written. And that would now be too much for the context window. So then you would have to cut something off from your prompt. For example, just cut off the first character then to fit in the context window. Now, I'm pretty sure they don't do it like this in practice, like actually fill it up like that. 
I'm sure there's some like buffer solution that there's a certain amount you get to the prompt and then there's like an extra amount that you can take because it's using that to look at what is written so far. Or there's some smart compression or I really have no idea. That's just an interesting nuance that I never thought about because the way you get this explained is always at a slightly higher abstraction level. It's usually explained when they explain context that you pass one prompt and you get one answer, and there's a limit to the context you can, like the character count or token count you can pass in the prompt for the context. But then they never discuss how that works on the character by character level of generation of the response. Yeah, anyways, I digress. So, what the fuck was even my point? How did I get here? How did I get here? Let's see. Yeah, I was talking about different levels of memory system or different abstractions of memory within an AI agent or within an LLM because it ties into like what would be the memory system in the Jarvis that I'm trying to make. It's just a fun thinking exercise to think about different layers of memory. Yeah, so then what you need to do when you start filling up the context window, if you wanna pass in the full conversation history or a long ass prompt, or you wanna pass in context from other conversations, you're running into this limit of the context window and therefore you cannot always include everything. And if so, you need to do some smart context management. So you need to kind of weigh things and choose what's more important to fit into the context window. 
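The "full context window plus one generated token" nuance raised above is typically handled by reserving output budget or by sliding the window. A minimal sketch of the naive sliding-window variant the memo describes, where the oldest tokens are dropped from the front to make room for newly generated ones; token counts are simplified to list length, which real tokenizers don't match:

```python
# Naive sliding context window: if prompt tokens plus generated tokens would
# exceed the window, drop tokens from the front so the newest ones always fit.

WINDOW = 8  # maximum tokens the model can see at once (illustrative size)

def fit_window(prompt_tokens, generated_tokens, window=WINDOW):
    """Return the token sequence actually fed to the model this step."""
    full = prompt_tokens + generated_tokens
    if len(full) <= window:
        return full
    return full[-window:]  # cut from the front, as described in the memo

prompt = list("ABCDEFGH")          # exactly fills the window
step1 = fit_window(prompt, ["x"])  # one generated token forces a cut
print(step1)                       # the leading "A" is dropped to make room
```

In practice systems instead reserve part of the window for the response up front, so the prompt is truncated once rather than shifting on every generated token.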
And then even if you know the context window and only pass data within the context window, there's also this concept of context drop or context rot where stuff that's passed towards the end of the context window, even though it's mathematically and algorithmically inside the context window and is therefore calculated by the LLM, research has shown that it's in the actual results that data has much less impact, like the stuff. If you push stuff towards the end of the context window, then it literally just like in practice is kind of ignored to a larger extent. It seems the AI kind of forgets about it, kind of prioritizes the beginning of the prompt if it's so huge. As any other context management, you need to choose how much of that space are you going to dedicate to the current conversation history? How much of that are you going to dedicate to giving info from other conversations the person has had with the LLM? How much of that space are you going to dedicate to giving maybe just like general information about the user that's stored somewhere? How much of that space would you dedicate to other data that might be relevant? So then there's this whole concept and system of context management for a single prompt in order to enable the LLM to essentially be as smart as possible for that user query. And the way you want to do the context management could be very different depending on which person that is, how much data that person has, and what specifically they're prompting about right now. So the context of their interaction right now, the goal of that interaction. And then above this, now you start getting into having kind of like separate files for data, like the user has images or video files or just large text files and books, whatever, which they kind of upload directly in the ChatGPT interface or they have them on a computer somewhere. And then like how is the AI now going to use this data? 
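The "weigh things and choose what's more important to fit" idea reads naturally as a greedy budget packer. Everything below (source names, token counts, the priority-order policy) is made up for illustration; a real context manager would be far more dynamic, as the memo itself notes:

```python
# Hypothetical context-budget allocator: given a token budget, fill it from
# sources in descending priority order, truncating the first source that
# doesn't fully fit and dropping the rest.
def pack_context(budget, sources):
    """sources: list of (name, tokens) in descending priority."""
    packed, remaining = [], budget
    for name, tokens in sources:
        if remaining <= 0:
            break
        take = tokens[:remaining]  # truncate if it doesn't fully fit
        packed.append((name, take))
        remaining -= len(take)
    return packed

sources = [
    ("system_profile", ["u"] * 50),   # general info about the user
    ("current_convo",  ["c"] * 300),  # active conversation history
    ("related_convos", ["r"] * 500),  # snippets from other conversations
]
packed = pack_context(400, sources)
```

With a 400-token budget, the profile and current conversation fit whole, and the related-conversation snippets get truncated to whatever space is left. Which source gets priority is exactly the per-user, per-query judgment call discussed above.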
For all like text files and PDFs and stuff, it could just like read the text contents and then just like put that in itself as part of the prompts. But depending on the size of the document, you might hit the context limit again. You might have to do something else. For images, how do you do that? That one, I don't have so much knowledge of, but I think before it was done through models that were trained to identify objects in images. You pass it an image and it would convert that

Okay, it's not like sentience, it's not magic. It's complicated. It's a function. I don't really understand. And there's a scale that's hard to comprehend in terms of the size of the model, the amount of parameters and data that goes into training and then the compute, and even just for generating the answer, like the amount of compute that goes into generating the answer. It's hard to comprehend for a human. But it doesn't feel so mysterious. But the multimodal models, I haven't gotten, I mean, I haven't looked into it, so I haven't gotten the same explanations given to me. And therefore they feel more mysterious and magical. Yeah, anyways, yeah, so that's, there's like a higher level memory management problem. Then let's say you have a bunch of files, text files, video files, images, whatever. It's starting to get quite large and there are different ways of viewing the files. If it's text, you can read it as text or you can summarize it, convert it into still text, but just like a much shorter version, trying to extract the essence without having the complete text contents. And then the AI model can do that itself. Or it could issue perhaps a smaller, cheaper AI model or perhaps a bigger, more powerful AI model with a larger context window to take that document and compress it into a less data heavy object that still contains the same information or a lot of the same information. I'm getting a little bit tired of yapping now, to be honest.
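The "issue a smaller, cheaper model to compress a document" idea is essentially map-reduce summarization: summarize chunks independently, then summarize the summaries. A minimal sketch, with `cheap_summarize` as a placeholder for the actual model call (here it just truncates, so the pipeline is runnable):

```python
# Sketch of compressing a large file into a "less data heavy object" using
# a helper model. `cheap_summarize` stands in for a call to a smaller or
# cheaper LLM; a real system would call a model API here.
def cheap_summarize(text, max_chars=60):
    # Placeholder summarizer: truncation instead of a real model call.
    return text[:max_chars]

def compress_document(text, chunk_size=200):
    # Map: summarize each chunk independently, so each call fits a small
    # context window regardless of the document's total size.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partials = [cheap_summarize(c) for c in chunks]
    # Reduce: summarize the concatenated partial summaries once more.
    return cheap_summarize(" ".join(partials), max_chars=120)

digest = compress_document("lorem ipsum " * 100)
```

The two-level structure is what lets an arbitrarily large file be reduced below any fixed context limit, at the cost of information loss at each level.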
e5446d8072a9afdf48687beba7123406fe90fb75c185598c066bd506db9532eb_50f183ddb379.m4a
Tuesday, February 10, 2026
11:47 AM ยท 18:58
Essence

The speaker is exploring how to automatically integrate their Google Maps location data into a personal system, reflecting on the challenges of data access and the broader philosophical implications of critical thinking and system building.

Summary

The speaker is walking and considering where to eat lunch, but intentionally avoids stating their current location aloud. This is because they envision a future system that automatically tracks their location, making manual input unnecessary. They note that while Apple's location tracking is privacy-focused and not easily accessible for personal use, Google Maps has been collecting and storing their location data for years, which is easily exportable. The speaker reflects on the nature of this exported data, questioning its level of detail and whether it's a simplified summary rather than raw GPS coordinates. This leads to a broader philosophical digression about the importance of critical thinking and building one's own understanding, rather than solely consuming information. They believe that engaging in personal critical thinking leads to a more grounded and resilient worldview. Returning to the technical challenge, the speaker discusses the feasibility of connecting their cloud-based system to Google Maps to continuously fetch new location data. They suspect there's no direct API for this and that the only current method is a manual export from the mobile app, which is unacceptable for an automated system. They brainstorm potential, albeit clunky, solutions like using an automated browser agent on the Google Maps website or an Android device to perform daily exports, acknowledging the limitations and inefficiencies of such approaches. The speaker concludes by noting they are getting cold and will find a cafe to eat their packed lunch, continuing the recording later.

View full transcript
We're out of the bathroom again. Walk a little bit more and then pick a place to eat my lunch. Now, I could say where I am right now, which could tend to be valuable information when going through this recording or transcript later. But, and it would be very easy for me to do, but I'm going to specifically not do it actually because part of the system that I want to set up is automatic location tracking. So I shouldn't ever need to like say where I am because I always have like my phone on me. Always, it depends on my lifestyle, but right now for sure I do a lot of traveling. I mean, most of the time, almost always I have it on me, right? Um, and also if I'm making recordings like this, then I have a device with me to record them with. And it's usually my phone. So, um, So I'm not gonna say where I am because whatever entity goes, entity or system goes through this recording, if it's gonna try to synthesize the information or find value or connect it with other data, should also just have the data or have access to the data of my location at any time, which is currently already being captured automatically. I think probably my iPhone or like Apple Maps or the operating system does it, but that is like Apple's very privacy focused, so it's not really useful for me. I don't think there's any, the way it's like captured and I think it's like compressed or automatically deleted or whatever. I haven't looked too, I have not looked too closely into it, but I think the way Apple does it is not, it will not like allow me to get the data from my system in the way I want. But Google does it. So the Google Maps app captures my location like all day every day. And they've saved it for years. I can go back to, I don't remember, many, many years, like 2016, 2014 maybe. And I've had it, not on continuous, I've had it off for periods. I mean, sometimes on purpose and sometimes on accident. But it's been on for like a lot of that time. 
And I'm not sure how high the accuracy is, but I think there could be like, it could be as precise as like every minute of the day it's checking out your location. I'm not sure. But then, and then they let you export all this data super easily from, so I couldn't do it from the like Google export website anymore or data Google takeout, I think it's called. But I can do it from the Google Maps app on the phone. It's something with the way Google stores the data. So the Google Maps on my phone has all that data and it's super easy to export. The file was not even big at all, even though it's years of data. I didn't look closely into it about how high the accuracy is because I don't think it gives me, I mean, I'm not sure. I didn't look at it, but I don't think it gives me like every GPS coordinate captured for like every minute of the day for every day. I think it's more of like a simplification, like they capture all that data, I think, and then they do an analysis of what were like your main locations and movements throughout the day. And then that's what you get. I'm not sure. That's what it shows in the app, but the data file might collect more. Yeah, I'm not sure. And now I realize I'm just saying that I'm not sure many times. It's not necessary. It's something that's going to be interesting to investigate later. This is already being captured. So in the system that I want to create for myself or just have in the system that I wish existed, you know, that I can kind of like see a little bit as a visionary. Like I see it's coming. It's just a matter of like how fast and is it something you build yourself or something that you can kind of get from built for you as a kind of public software service from a company. That question, I don't really know. I haven't thought too highly about it. That would be a really interesting question to think about though. 
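Whatever the export turns out to contain, the first processing step is the same: load the file and pull out (timestamp, location) points that a higher-level system can use. The field names below are assumptions for illustration only; the real timeline export schema varies by export path and has changed over the years, so it needs exactly the investigation mentioned above:

```python
import json

# Hedged sketch: parse a location-history export into (timestamp, lat, lng)
# triples. The "semanticSegments" / "visit" / "placeLocation" field names
# are invented stand-ins, NOT the documented Google export format.
sample = json.dumps({
    "semanticSegments": [
        {"startTime": "2026-02-09T09:00:00Z",
         "visit": {"topCandidate": {"placeLocation": "59.91,10.75"}}},
    ]
})

def extract_points(raw):
    points = []
    for seg in json.loads(raw).get("semanticSegments", []):
        loc = seg.get("visit", {}).get("topCandidate", {}).get("placeLocation")
        if loc:
            lat, lng = (float(x) for x in loc.split(","))
            points.append((seg.get("startTime"), lat, lng))
    return points

pts = extract_points(sample)
```

The point is only that once the data is normalized into simple triples, the question of "analysis summary vs. raw GPS trace" becomes a property of the input file, not of the downstream system.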
That's definitely something that would be an interesting exercise for me in trying to think at a higher level and trying to think for myself instead of just consuming information. And you would like to do both. Like you'd want to consume information, look online, see other people who ponder the same question, of course. But then also like very much do your own unique thinking about this and then combine them. But I think I and I think most people fall into the trap because it's easier of just consuming all the information and then never or very rarely doing like your own critical thinking. So it's too little of a personal critical thinking per individual. And I say too little for what? I mean, it depends on maybe what you consider as the goal. But I would just say, yeah, you start getting into philosophy at that point. I would just say it's like, it's a better existence if you do more critical thinking yourself. Like you're more conscious, maybe it's like a higher level existence or it's just like, it's just naturally better. And I know these are, you know, vague claims. That would also be something for me to investigate, I guess. It gets to the point where I don't really want to like reason about it or give an explanation. I just like feel like it's intuitive. It's like a higher level existence. You're more aware, I guess. One more concrete thing you could say is that there's, you're more grounded in your opinion. Some things, you know, they're actually based on you reasoning about the thing, which means that no information that can be thrown at you can shift your opinions super quickly because you've already been skeptical to your own position and explored the space, considered different perspectives and arguments and data. And then you try to synthesize that into understanding for yourself and try to draw conclusions. Ideally, concrete conclusions, but sometimes you have to leave it as just like, it just seems to you ambiguous. 
And then from that, now you know, since you kind of put work into it and if you trust your own reasoning, then there's no like, nothing that can happen, no information that can be thrown at you to suddenly break your worldview, to suddenly fuck you up because it's already grounded. And that seems important to me at just a basic level. I don't know. I don't wanna bother more with this digression right now. Back to what I was talking about, automatically collecting the location data. Yep, so this data is already being collected and it exists. So it's just a matter of getting this and now connected to a larger system, make it available for a higher level system. So I need either to export the data and copy it over, so I have a static copy that I can put it anywhere so it's accessible for my system. Or I need to be able to set up a connection between them somehow so that I let Google or the Maps app on my phone, the Google Maps app, just keep storing the data, but it's always accessible like through some connection, whether that's API, MCP, or whatever. Also, like MCP is this term and tool used more and more in the AI age, but it's still API under the hood, right? Like it's just like, that's not something I should investigate because I don't know for sure. But what I think it is, is that it's still just API and then there's just a natural language explanation wrapped around that, so it's easy to use for an LLM. And then you want an LLM that's good at doing tool calls because it kind of connects to the MCP and it reads instructions and then what the LLM actually ends up doing in the end is running commands and I think it just literally writes the API calls. Or it writes a more natural language command and then the MCP server converts that to the API call, but I think probably just the LLM does it itself so that the MCP is literally just like an instruction book. Like it's just the API docs, just maybe simplified or structured better for LLMs. 
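The mental model sketched here, that a tool is just an API endpoint plus a natural-language description the LLM reads, and that a thin layer dispatches the structured call the LLM emits, can be written down in a few lines. To be clear, this is the intuition only, not the actual MCP wire protocol (which is a JSON-RPC-based spec), and the tool name and handler are hypothetical:

```python
# Toy version of the "MCP is API docs plus a dispatcher" intuition.
# Each tool bundles a description (what the LLM reads), a parameter spec,
# and a handler (the actual API call underneath).
TOOLS = {
    "get_location_history": {
        "description": "Fetch stored (timestamp, lat, lng) points for a day.",
        "params": {"date": "ISO date string, e.g. 2026-02-09"},
        # Hypothetical handler; a real one would hit some backing store.
        "handler": lambda date: [(f"{date}T09:00:00Z", 59.91, 10.75)],
    },
}

def dispatch(call):
    """call: {'tool': name, 'args': {...}} -- the shape an LLM would emit."""
    tool = TOOLS[call["tool"]]
    return tool["handler"](**call["args"])

result = dispatch({"tool": "get_location_history",
                   "args": {"date": "2026-02-09"}})
```

The descriptions play the role of "API docs structured for LLMs"; the dispatcher is the thin layer that turns the model's structured output into a real call.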
I don't know, that would be interesting to investigate more in detail, like how that process works because my understanding now of it is at such a high abstraction level. It's literally just that, you know, it's a connector to connect different services, which is used for AI agents specifically. Like it's a way for an LLM as a agent to connect to another service. And I don't know the details lower down on that. Guess we have the location data. Yep, just need to connect it to a system. Now, I don't know if this is possible, so that's a problem. And I think possible is a bad word to use because I feel like in tech, you know, it's never not possible. Everything is always possible, but it's just varying degrees of complexity and time and resources needed to make it happen, make it work, make it real. So I think a better word is feasible because it kind of, it just disregards the question of whether something is possible or not. And instead you just look at whether it's possible and the investment required to make it happen is not too much. That's got to be a better way to define it or to describe it. It's feasible. If it's feasible, it's possible. And all the feasibility of something really depends on your tolerance for, or on your kind of budget, like your investment budget, your work budget, or processing budget or whatever. Fuck, now I'm getting stuck in definitions, man. It doesn't matter. I think my, I think when I use the terms, I don't need to define them so explicitly. Like it's understood intuitively when you know the language. It's just like, it's possible and you actually want to do it. Like the complexity or the cost or whatever is not too much. Then it's feasible. And then, and that might depend on, you know, the value of the thing because it's more, if it's more valuable, then you might be willing

Like this, I will just set it up in the cloud on like a VPS in the cloud probably. So I don't think there's any way to connect that to my Google Maps data to have it like actually be able to fetch new data, like for example, the data from today or yesterday. It's easy to export all my data up until now and then I can put that as a file in my system. But then it will only have historical data up until the point I want it to be able to find new data. So for example, you know, if it's working on something or synthesizing something or using data from me and my life from like today or yesterday, then I shouldn't have to do any manual steps to provide or give it access to that location that it should just have it or have access to it. So for this specific data, my automatically collected position data from Google Maps, here we need investigation. Now I'm really just saying what I think because I haven't done the investigation closely. I think there's no API or MCP we can use to connect my cloud system to Google Maps to get that position data, the timeline data automatically. I think the only way is to go literally in the mobile app and click the button which is like export your data and then you pick a file location on your phone or on the cloud folders on your phone and then it saves it there, which is not acceptable. It's too manual. I'm not gonna do that from my own phone. So I think the way to do it is that you need to have like an actual device. It can be iPhone or Android and then maybe Android is better, like it's easier to automate or something. I don't think it works on Google Maps. Wait, how does it work on Google Maps website? I have no idea. Well, that would be the best if it's maybe on the Google Maps website, you could set up an autonomous browser agent to just go there and kind of extract it from there. Otherwise, you would need like an actual device, probably Android, if it's easier to automate, that has the app and I've logged it into my Google account. 
And it could like once a day do the file export from the Google Maps app, which will, I think, enforce you to export all the data and not just from the last day, which is again a problem. But then you can get the file and then it would be easy to automatically post it, you know, just send it to the cloud or post it first, like cut out everything except the new data and then send it to the cloud. But that seems like a horrible solution actually and probably if you do that every day and the Google Maps app enforces us to do like the whole file, probably Google has even built in like a rate limit at that at some point if you do like every day because it's just like it doesn't make sense. I'm getting kind of cold, so I'm gonna hop into a cafe and then eat my packed lunch, I think. Then I'll just continue on the next stream recording when it suits me.
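The "cut out everything except the new data and then send it to the cloud" step is just a diff against a stored high-water mark: remember the timestamp of the last point already uploaded, and keep only points after it. A sketch with hypothetical names:

```python
# Incremental diff for a full-history export: each daily export contains
# everything, so filter against the last timestamp already ingested before
# uploading. ISO-8601 UTC strings compare correctly as plain strings.
def new_points_since(points, last_seen):
    """points: list of (iso_timestamp, lat, lng); keep only ones after last_seen."""
    return [p for p in points if p[0] > last_seen]

full_export = [
    ("2026-02-08T12:00:00Z", 59.91, 10.75),
    ("2026-02-09T08:30:00Z", 59.92, 10.76),
    ("2026-02-10T07:15:00Z", 59.93, 10.77),
]
fresh = new_points_since(full_export, last_seen="2026-02-09T00:00:00Z")
```

This keeps the upload small even when the export itself is forced to be the whole history, though it does nothing about the rate-limit concern of performing the full export daily in the first place.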
a14a34305e9bea2a117ecd86b5b825811dcd2498799ab91aa3f9ad716680f55b_7e79701cb8df.m4a
Tuesday, February 10, 2026
11:36 AM ยท 9:24
Essence

The speaker is exploring a new method of personal reflection, moving from traditional journaling to voice recordings, which they term "thought streams" or "stream of consciousness" recordings, to capture their thoughts more spontaneously and efficiently.

Summary

The speaker is recording a voice memo while walking outside in a noisy city, acknowledging potential audio quality issues but believing it will still be audible. They explain their evolution from traditional written journaling to voice memos, finding that voice recordings allow for faster and more comprehensive expression of thoughts. They've now taken this a step further, making recordings that are less structured than journaling, which they've started calling "thought streams" or "stream of consciousness" recordings. This method involves simply speaking whatever comes to mind, mirroring their internal thought process but externalized. They highlight that this approach, while prone to digressions, helps them stay focused on specific topics, like a project they're building, by externalizing their reasoning. The speaker also notes the benefits of recording these thought streams, including the ease of recording with current technology (AirPods, Apple Watch), automatic transcription, and the potential for building self-awareness by capturing these fleeting thoughts. They express a desire to integrate these recordings more seamlessly into a personal AI assistant system, moving beyond the Apple ecosystem for better data accessibility.

View full transcript
Okay, today I'm probably gonna make a lot of recordings. Now, I'm currently recording this uh through my AirPods as I'm walking outside with a hoodie and a beanie on. And I'm in a city where there's noise, so uh it all might mess with the audio quality quite a bit, but I think it should still be audible just for my brief test. Um And I'll probably walk into a more quiet area. I think this is, it's not the worst case scenario, but it's like a pretty bad scenario and I think it's audible for my test. So, I've been having this uh concept of uh journaling where I used to, you know, for a while, just journal normally in a book, maybe like every day at the end of the day or the beginning of the day, write a little bit like what's going on, thoughts and feelings. Um I just got a like a strong smell of shit here in the city. I don't know, like dog shit maybe. Fuck, I still smell it, what the fuck? Yo! That, what the hell? That's horrible. I don't know, so I've been journaling for a while and then that kind of gradually transitioned into me more and more doing just like um voice memos, uh voice recordings instead because I figured it'd achieve a lot of the same as the journaling, but I could express a lot more a lot faster. And now I'm taking this like gradually further where I'm at times just like making a recording, just like dumping thoughts where at this point I wouldn't really call it journaling anymore, so I think I'll, I was just thinking about the term a little bit while I was on the um T-banen, the subway. And I think it would make sense to just call it a thought stream, like a thought stream recording, or it could be like a brain dump, mind dump, thought dump, but I think I like the term of stream. I'm just kind of like thinking out loud as I'm recording. And I think it's much of the same process that would go in my head without talking. A stream of consciousness, yeah, that's something, that's like a writing technique. Stream of consciousness.
It's essentially the same thing, just like write whatever comes to your head. Or for me now, say whatever comes to my head. Because that's like the quickest way I can express myself until we have some type of tech engine in my thoughts. Then the quickest way for me to just express any information uh to transfer information from my brain to a system is through talking. Or I guess it would be more through like a video, like you talk and it has the video input, so then I could express stuff with my body language as well, I guess. Um Anyways, yeah. As you can already tell in this thought stream, there's gonna be just so many digressions like this, right? Start trailing down one path on one topic and uh going into detail of it. Maybe in the middle of a sentence completely forget what I've talked about. I'll stutter in my words, stuff like this, but yeah. The goal is um right in the moment to, there's a couple of reasons I'm doing. First of all, I think it's just good to be talking since I've been sitting and not talking so much uh the past few days, just like uh building the app all the time, kind of like socially isolated. It's good to just talk a little bit. Then I think in the moment while doing it, it helps me stay on track with my thinking, which might be funny because I just said I'll go on these crazy digressions, which I do, but I think that's fine because I, I like focusing on it. I'm not trying to hardcore focus on one thing right now, but I find it when I kind of stream my thoughts out loud like this, if I'm trying to focus on a topic, like for example, a project that I'm building, it helps me stay focused and actually think logically and rationally about that project and follow a chain of logical consequences or a chain of reasoning. 
It helps me follow a chain of reasoning where if I'm not doing a thought stream out loud, but just thinking, either sitting still or walking, thinking just in quiet or um maybe being in the gym but trying to think, or like exercising and in a way trying to think. I find if it's just like in my head, I'm just gonna like think about the thing, I end up not really thinking that much about the thing. I just think about everything else like I get way distracted. Right now I don't have a specific topic to focus on. So then we are gonna embrace these distractions. Because I do think it's kind of interesting in itself to capture um a thought stream, something that you usually don't capture just goes on in your head and you kind of forget about it and it happens, you know, all throughout the day, every day. And it's fun to collect data on that to just maybe build more awareness of it. Yeah, so one of the reasons for recording this is to use this or this concept or tool in general is that it can help me keep my thoughts focused on one topic if I'm trying to like think about something, which I am a lot of the time. And like have high quality thoughts about some. Um yeah, and then yeah, the other reason is that of course I'm capturing it because I could do the talking without the capturing part, but I do think there's a unique value added there as well. First of all, it doesn't cost me anything. I just have to start the recording. It's very easy. It runs very well like in the background on my phone. And now that I can record through like my AirPods and my watch, I can do it very like hands-free. It's like very easy and the little like it saves sync to the cloud. Storage space is not a problem. The syncing is like instant because the files are not that big. And they get transcribed automatically now by Apple. Most of the time the transcription is not perfect, but it's good enough, so. 
It's just very, very easy to do and to get to the point where you have the factual transcript easily available on all my devices. But to actually get the value, I need to set up some more connections, because I think it's still kind of locked into the Apple ecosystem and I can go in and like copy it somewhere else or maybe export it, but it's not as automatic as I would like. I need to be able to set up a more general system like my personal AI assistant system or whatever, which actually stores my data, amongst which are these recordings, without me having to send it over whenever there's a new one. Like when I record it and I finish recording it, it already automatically syncs to the cloud and stuff, but it's the Apple cloud. Then it should already automatically also be accessible to my personal system. Just going to the bathroom right now, I'll pause it for a while.
749c5262e7b41ca9f1bfe79339e23fd58c983805f3374dff56a6600c746e0960_64ce75809c63.m4a
Tuesday, February 10, 2026
11:35 AM ยท 0:23
Essence

The speaker is testing the audio quality of their AirPods while wearing a beanie and hoodie, noting potential noise interference.

Summary

The speaker is recording a voice memo using AirPods while wearing a beanie and hoodie, and is testing the audio quality. They anticipate that the clothing might rub against the microphone, introducing some noise.

View full transcript
Now I'm recording a voice memo with the AirPods, with a beanie and a hoodie and stuff, so I'm just gonna test the audio quality quickly. It's also rubbing against hoodie a little bit, so I'm gonna, you know, add some noise into the microphone.
f36c9442d5bfbc462647e70e74933f4d29d8504f08dc005f0a5ec15d0bc523f6_baf7b08ffcc2.m4a
Monday, February 9, 2026
2:30 PM ยท 0:39
Essence

The speaker is recording a voice memo and asking Ammer to sing a chorus or part of a song again.

Summary

The speaker has started recording a voice memo and is addressing someone named Ammer, asking if they would be willing to sing a chorus again, or perhaps another part of the song. The speaker seems to be clarifying if what they're referring to is indeed the chorus or just a section of the song.

View full transcript
Now we can record, Ammer. Do you want to sing one more time? Or alternatively. That's the chorus, right? It's a part of the song.
52a69481cf982ebb2450ad709a86714432802973f189af2417ee79625cd3f905_29332259aba8.m4a
Monday, February 9, 2026
11:40 AM ยท 34:14
Essence

This memo details a series of observations and suggestions for improving the Lyric Learn app's mobile experience, focusing on lyric synchronization, UI/UX, and bug fixes.

Summary

The user observed that the LRC format likely includes end timestamps for words, as lines unhighlight immediately after completion, even if the video continues. They also noted that instrumental sections cause lines to unhighlight after a period without lyrics, suggesting either an automatic unhighlight or a hidden empty line with a timestamp. They propose that the raw lyrics view should display empty lines with timestamps if they exist and that the "clear lyrics" button should be removed to prevent accidental data loss. They also suggest adding real-time parsing validity feedback for lyric editing, disabling the apply button if the format is invalid, and showing a toast notification upon successful saving. The user then discussed the lyric offset feature, stating it needs a more intuitive UI outside the settings menu, allowing for real-time adjustment while the song plays. They propose a per-line offset functionality within the time sync studio to handle extended scenes in YouTube videos that aren't part of the song, ensuring that an offset applied to a line propagates to all subsequent lines. This per-line offset would require chronology enforcement to prevent timestamps from overlapping. They also suggested displaying the raw timestamp, applied offset, and effective timestamp in the time sync studio for clarity. Further UI suggestions include updating the main view's left timestamp with playback progress and fixing a bug where switching to a segment sometimes auto-plays when it should remain paused. They propose a "freeze" state for temporary pauses (e.g., when opening menus) that lifts upon dismissal, resuming previous playback. The time sync studio's seeker should sync with the main view's seeker upon opening/closing settings, and the active line in the main view should be focused in the time sync studio. 
They also noted a bug where clicking to seek sometimes resumes playback when paused and identified visual issues with the blue time sync data border on wrapped lines and when combined with Genius highlights. Finally, they suggested removing the "clear" button for audio source selection, adding an apply button with validation, and implementing an unsaved changes warning when exiting the song settings menu.

View full transcript
Some things I noticed while testing the Lyric Learn app on mobile. I always thought the LRC format only had timestamps on the beginning of the words, but I suspect that it actually has it at the end again and I just never realized. Because when playing a song and getting to the last line, but then the YouTube video continues a little bit longer, it unhighlighted the line right after it was finished. And secondly, there was a part inside the song where it's both in the song and the video, there's just like an instrumental. So there's, you know, a line, it advances, advances, advances, and then there's like an instrumental. It's like 20 seconds with audio playing but no lyrics. And then I saw the line did at some point unhighlight. So either there's an ending timestamp, well, I saw the line unhighlight, and then I went in the text box to view like the raw lyrics inside the app, and there was no like empty line in between, it was just like that line and the next line, but then, you know, like 15 seconds between the timestamps or something. And it did visually look to me like the line got de-highlighted at some point. So either the app is automatically unhighlighting the line after like 10 seconds, something like that, or it had an ending timestamp. Or there could be that there's an empty line with timestamp in the original data, but my app doesn't show it to me. But if there is, I suspect there may be, if there is, I want the app to actually show it, so when I go in the settings and I view like the raw lyrics, I wanna see the empty lines as well with the timestamps if they exist. I mean, yeah, it should just kind of be like a raw representation. I'm a little confused on that. Also in the raw lyrics, I don't wanna have, I'm confused by the buttons. I don't wanna have the clear lyrics button because I feel like you can hit it by accident and it's already easy enough to just like edit the text field.
I want it to visually show a parsing validity, so as it's being edited, it should show whether it's like a valid or invalid format. And that it doesn't need to show valid really, it's only if the format is invalid, it shows invalid, and then, but it's a little bit of a subtle thing because there might be, you know, editing it live, we don't need to pop it up like big or something. See something red somewhere, show currently invalid format. And then there should be, yeah, a single button that just apply. And if the format is invalid as they're editing, then the button is disabled. But whenever it's valid, then they can click that to apply. And if they do, we should see a toast, like new lyrics saved or edited lyrics saved or something. If they edit it and don't click the button, then we don't save the change. We still see it in the text field, but we never actually save it to the state of our app. If there was able to edit the field and then close the menu without pressing the apply, you know, then it never got applied. But that's specifically for the text field because usually otherwise I want things to kind of auto-apply. Then we have another button there, it says reload from current sync. Honestly, I don't understand what that button means. So I don't know if we should have it or not. Reload from current sync. Yeah, I actually don't know what that's supposed to be, so I think we should just remove it, or at least I should figure out what was kind of the intention of that one. I think the AI just added it. Then above for the... We have the lyric offset. I gotta do more thinking about that, the UI for that. It cannot be in the settings menu like this. Like, we need to be able to kind of just drag it and adjust it as we're seeing the... Or like, click the button to adjust it or drag it to adjust it as we're playing the song and seeing the lyric lines advance because then that's the most intuitive way to align it correctly. 
So it needs to be either available on the normal main view, through, you know, it should not always be there, but like, click something to reveal it there. Or we have to, like, put it perhaps in the sync view or something, the time studio sync. I'm not sure what makes more sense. But whenever I am setting the offset, I wanna be able to just write a number value there, but I also, and I wanna have the plus and minus buttons to set it, but I also sometimes just wanna play the lyrics and see how they're highlighted as I'm doing this, which means it needs to be either in the main view or the time sync studio or somewhere where I can like play it and see the highlighted lines as I'm playing with the value of the offset. Because then I'll see the highlight actively updates, like jumps up and down when I'm adjusting the offset. And as I'm playing it still like advancing normally, but then I'm kind of nudging backwards and forwards, so I can do that until I feel it's right. And also with those input modalities, there should also be one that you can drag, where you drag like left or right or up or down depending on the kind of placement in the UI and what makes more sense. Which is also a way of adjusting the value instead of using the plus and minus buttons. And when you drag the offset, we should be, you know, it's just controlling the visual highlight of the active line and the auto scroller for auto scrolling to the active line. So then that should be updating like constantly as we're dragging, not only on drag release, but constantly. And then I had a thought, because these YouTube videos might sometimes add an extended scene like anywhere within the video, which is not in the song. So actually it's not enough to set offset only at the beginning of the song, and this cannot be resolved by just going in and adjusting the timestamps because then you would have to shift all the timestamps. It's annoying. 
So actually we need to be able to input an offset at any point in the song, essentially any line. Whether that should be done through the main view or the time sync studio, I'm not sure. At this point, I think probably, yeah, time sync studio definitely makes more sense, which means we can just put the offset controls in there as well because it's kind of the same. So instead of having your offset global for the song, it can be applied on any line. Now, most lines will not get an offset, like usually we'll not use it on the song if it's synced, or we'll use it like once globally for the song, and then occasionally we're going to use it on multiple lines if we have to, but I mean, it's a niche feature. I want the functionality to be there, but just understanding, you know, how much it's going to be used. So in the time sync studio, on any line, you need to be able to set offset, and what that does is it uses that offset everywhere from that line and everywhere after. Like it propagates to everything following from that line in the same way usually now when we're setting the offset, it's global, and so it applies, you know, equally to every line. But since we're setting on a per-line basis, it's going to apply to every line from that one and later in the song, but it doesn't apply before that line. And that means we're again going to, there's like an edge case we need to think about where the timestamps or the offsets could be set in a way where suddenly the lines based on their synced timestamps and the associated offsets, suddenly we might like break the chronology of when they should be played, which is bad. So we need to enforce this. How? By we just limit the value range that's allowed to change the offset, where we check if changing the offset further would make, would break the chronology by making it overlap with something else. What's that going to be? 
I think it is essentially if you drag the offset negatively that so much that a line's now kind of new timestamp when you calculate the offset would be earlier than the timestamp of the preceding line, then that wouldn't work, so we just, we don't allow them to do that. We set, we use the same limit that we use elsewhere of like 0.1 minimum difference between timestamps, so that you cannot adjust the offset further than that. And I think that might actually only go in like the negative direction, like the time backwards direction. I'm not sure, but I think moving forwards, it doesn't matter because the only thing it would do in that case is just make the previous line active for longer because it takes longer before it advances to the next line. I would encourage the AI agents to just do some extra reasoning about that. I'm just thinking about it now. That's how I see it at least. And that means when we're displaying the line timestamps in the time sync studio, there should also be some information about the offset, and then for like the timestamp, I don't know then if we should show, I think that it should be able to see the timestamp value that's directly set for the line, which is kind of what we store in our file, in our data. But then you should also be able to see the offset value set for that line, which then applies to everything from that line and below. And then you should also therefore be able to see like the kind of in practice actual timestamp for the line, which is going to be, you know, the timestamp and then combined with any offsets that apply to the line. So all this information would be useful to see. And then the duration still as we have already. And since this is now starting to be a lot of information, do some thinking about the time sync studio to make, like, look at all the functionality that's currently there
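The propagate-forward offsets and the chronology clamp described above can be sketched as pure functions. `MIN_GAP` mirrors the 0.1 minimum spacing from the transcript; the `SyncedLine` shape and names are illustrative, not the app's actual data model:

```typescript
// Per-line offsets that propagate forward, plus the clamp that keeps the
// effective timestamps chronological.
interface SyncedLine {
  time: number;     // stored timestamp from the sync data, in seconds
  offset?: number;  // offset set ON this line; applies here and to all later lines
}

const MIN_GAP = 0.1; // same minimum spacing used elsewhere between timestamps

// Effective timestamp = stored time + the nearest offset at or before the line.
function effectiveTimes(lines: SyncedLine[]): number[] {
  let active = 0;
  return lines.map((l) => {
    if (l.offset !== undefined) active = l.offset;
    return l.time + active;
  });
}

// Clamp a candidate offset for line `i` so its effective time stays at least
// MIN_GAP after the preceding line's effective time. Only the negative
// (backwards) direction needs limiting: dragging forwards merely keeps the
// previous line active for longer.
function clampOffset(lines: SyncedLine[], i: number, candidate: number): number {
  if (i === 0) return candidate;
  const prev = effectiveTimes(lines)[i - 1];
  const minAllowed = prev + MIN_GAP - lines[i].time;
  return Math.max(candidate, minAllowed);
}
```

This also gives the time sync studio the three values worth displaying per line: the stored `time`, any `offset` set on the line, and the in-practice effective timestamp.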

Also, in the main view, when viewing the full song or a segment, the timestamps go from segment timestamp start to segment timestamp end, which is good, so that's going to go from 00 to the duration of the song for the whole one, before segments, you know, you see kind of the real song timestamps, even though we're moving within them. But I also want the left one to update with the playback, so it's essentially the starting timestamp, plus the seek duration within the segment, or on the full song, you know, it just goes from 00 and it'll show the timestamp. The logic is the same. Then I also noticed a bug, I'm not sure exactly what the logic will cause it, but sometimes when I'm on a song, and it's paused, and I'm viewing the full song, and then I switch to a segment, then the segment starts auto-playing, which I don't want, like the pause and play states should be almost always the same, so that if it's playing, whenever you like switch to something else, it still keeps playing, and when it's paused, then it just stays paused when you do other actions. It should never automatically start playing if the last thing you did was like, pause it and then you navigate it. But sometimes if it's been playing, and then some of our actions might like pause it while we're doing the thing, but that's like only temporary, so that when you go back, it resumes playing again. For example, if I'm playing the song and then switch to a segment, I'd like it to automatically play. That's only if I was already playing, and then, you know, we pause it when we open the segment navigation menu. That's like a temporary pause. So essentially we should have the global play pause state, but also there should be more of like a temporary like play override or pause override or something that's gonna pause it even though the kind of global state is play. That's how it should work logically, and that happens, I think the logic is mostly correct in the app already. 
Like that if we're temporarily pausing when we open the segment navigation menu, or if I open the settings, temporary pause. Let's just call it a freeze, just to make the terminology different, so if I say freeze right now, I'm referring to this temporary pause, which means that, you know, if it was already paused, then it's just the same, but a freeze means that we're gonna lift the freeze at some point. And so if it was playing and then we freeze, you know, it's getting paused, but then we might like lift the freeze when we dismiss the menu or something, and that's gonna lift the freeze. And so resume playback, lift the temporary pause. So we freeze when we open the segment navigation menu. Freeze when we open the settings menu. No, let's just pause when we open the settings menu. But I want the seeker in the time sync studio to be synced to the normal seeker in the main view. It doesn't need to be like live updating in the background, but like every time we open settings or close settings, that's when we'll just make sure that they're the same. And also the, so that means if I'm like in the middle, if I'm in the middle of the song in the main view, and then I open settings and go to the time sync studio, I want the seeker to be at the same timestamp that I was just at, so kind of in the middle. And also if a line was focused or in the main view, it's more like if we have this like active line highlight feature, and we also auto-scroll to focus it. So if there's an active line, meaning that there's the line that represents the timestamp we're seeking at, but it only exists when we have a time sync data. Well, yeah, so if there's kind of an active line, then when I open time sync studio, that line should be focused. And we have like a concept of focus inside the time sync studio for like navigating them, so yeah, that's it. 
I'm also noticing this slight bug overall, both in the main view and in the time sync studio where playback is paused, but sometimes clicking to seek on a line actually resumes playback, which I don't want again, like if it's paused, you know. I don't want these, there's a few like random things where it seems to like resume playback or lift the freeze or whatever when it shouldn't. There might be something related to unloading and reloading the dock or the seeker, I'm not sure. Also in the main view, when we have the, we made a new like appearance toggle to show if a line has time sync data, this like slight blue border, but it looks great if the line is one line, but when, sometimes when I have the big font, if the line is wrapping, it doesn't look so good because it only has the height of one line on the border, even though the line now kind of takes two line heights. So fix that. Also with this blue highlight that shows which line we have time sync data for, if there's also a genius highlight on that line, then that has like its own padding around the line and therefore this blue time highlight line that we have, it ends up getting pushed to the left. So normally they're like perfectly in line, they're all aligned, but when a line has a genius highlight, then our blue indication gets pushed further to the left, so it's out of, yeah, it looks off. So if possible, keep the genius highlight the same, but move the blue one, like don't let the blue one get pushed to the left in that case. So it's still perfectly in line with the blue indicator that we have time sync data on all the other lines that don't have genius highlights. In the settings where we choose the audio source, again, we don't need a clear button there. But I do like having an apply button actually. And you know, to check it's a valid YouTube link, that's gonna work for our app. 
And also kind of similar to what we're doing with the raw lyrics, the button should be disabled unless, you know, the input validation passes. And then, just if there's a state where they've done edit, either in the audio source or in the raw lyrics field there, but they haven't clicked apply yet, then we should do like extra warning when they try to exit the menu, then we do if they then try to dismiss the settings menu or move over to the time sync studio, if they just try to go away from the song settings, we should pop up like, hey, you have unsaved changes. Do you wanna discard them or go back? And, you know, then if like I go back and then they'll see visually on the fields whether the apply button is kind of like lighting up, indicating that, you know, it has changes to apply. Also in the time sync studio, just a very small thing, in the seeker there, we're not showing the timestamp because we're showing it like at the top instead with current. So let's just add on also at the seeker, then it has the small like text above where it says quote seeker. Let's just add on to the right of that, we'll show the actual timestamp of the current seek. And in general, this requires some advanced design thinking and planning, but the whole time sync studio is just too crammed on mobile. So things should be moved around or sizes should be changed or some things should be hidden behind toggles or like more easily navigatable or scrollable. And then specifically for the seek bar mapper on mobile, I find it really hard to control, so the handles need to be bigger on mobile. Also when dragging a timestamp marker in the timestamp indicator bar, I want it to temporarily light up with just the blue highlight color or something while it's being dragged. Or if it's like clicked to focused or, you know, when a line is clicked to focused, I want the corresponding timestamp indicator bar to light up, to be kind of focused as well, or just like visually focused. 
And then there's a new feature in the main view. When I'm playing the song and we have the synced lyrics, I want another appearance toggle, and this one is gonna be for kind of visualizing how long it's left of a line or how long it's until the next line starts, because it can help when I'm trying to read along. Now we don't have word-by-word sync data, but we do have the duration for each line where we know like when the timestamp of the next line is gonna start, but the user can't really see that when they're just viewing the lyrics. So there should be some form of like progress indicator on each line. Like now I'm just explaining the state when they have this turned on. Like when it's turned off, it just, you know, it's already correct, but this new feature, this new appearance thing, when it's turned on, it's gonna show on every line in the main view, on the active line, we should be able to see a progress indicator, which just moves linearly with the duration of the line before it's gonna proceed to the next line. And there's a question here, do I want it to be following the length, like the width of the text on the line, or be the same width regardless? I think same width regardless. And even some lines wrap and stuff. So I'm not sure exactly what's the best way to implement it. So here's, I'm just kind of coming with an idea and a suggestion and you can do it based on my suggestion if you want to, or if you understand the motivation and the utility, but you have a better idea for UX implementation, then you just take creative reins and do that instead. What I'm thinking is that we'll show under the...

The active line will do, will have like a horizontal line, which is a full width or quite long, or it's kind of like a bar, it's going to have some thickness as well, but it's not going to be that thick, mostly just wide, which is going to be the progress indicator for this line. So it's going to, it's going to fill similar to how like the seeker bar fills or like a progress bar would fill from left to right, which means that even when empty, you should see, be able to visually see the starting and ending point, like you should see the width, but it's quite subtle, but then it fills in linearly during the duration of the active line. And then that's mean that's, that means that the moment when it hits the right side, the end, then we understand, okay, then it's going to progress to the next line. And this might be displayed to not mess with the layouts. It might be possible to do like a bottom border on the line. I'm not sure. Whichever way that currently we are displaying the time sync indicator on the left side, which is not messing with the layout if we do it like similar on the bottom, but then we know we need the added functionality of being able to show it kind of as a container and to show it progress. So I'm not sure about the code implementation, but that's how I want it to display. But it only needs to be on the active line. Also, some more syncing of the seeker in the dock. When on a segment and then navigating back to the full song segment, let's keep the same timestamp and if we have an active line, just keep the same line active and thereby implicitly also like focusing the view on that active line. Similarly, if in the full song, navigating to a segment, as long as that timestamp and active line is part of the segment, then, you know, we'll keep that timestamp and the active line and implicitly therefore also focusing the view on it. If there's no active line, then, you know, of course it's fine. 
Or if the current timestamp of the song is just outside of the segments, then we'll just move it to the start of the segments when we load it. And this again should also interact smoothly with if we navigate to the time sync studio, like maybe we're on a segment in the main view and then we go to the time sync studio. Well, again, just like go to that line and timestamp. And similarly, when we go back from the time sync studio, I think each interaction does need to be implemented. It's more just like, like the state is just, there's just one central state for the like playback timestamp and therefore also for the active line, which is just comes implicit from looking at the timestamp and then which line has time sync data that matches it.
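The single-central-state idea above reduces to pure functions: both the active line and its progress-bar fill derive from the one shared playback timestamp. A sketch under that assumption (sorted effective timestamps in, illustrative names):

```typescript
// Active line = last line whose (effective) timestamp is <= t; -1 before the
// first line. Assumes `times` is sorted ascending.
function activeLineIndex(times: number[], t: number): number {
  let idx = -1;
  for (let i = 0; i < times.length; i++) {
    if (times[i] <= t) idx = i;
    else break;
  }
  return idx;
}

// Linear fill for the per-line progress indicator: 0 at the line's start,
// 1 at the next line's timestamp (or the song end for the last line).
function lineProgress(times: number[], t: number, songEnd: number): number {
  const i = activeLineIndex(times, t);
  if (i < 0) return 0;
  const start = times[i];
  const end = i + 1 < times.length ? times[i + 1] : songEnd;
  if (end <= start) return 1;
  return Math.min(1, Math.max(0, (t - start) / (end - start)));
}
```

Because every view (main view, segments, time sync studio) reads the same timestamp, focusing the active line on navigation comes for free rather than needing per-transition wiring.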
Sunday, February 8, 2026
2:05 PM ยท 17:19
Essence

The speaker is testing four AI-implemented versions of a new practice writing mode feature, focusing on UI, control placement, and specific styling and interaction details to refine the user experience.

Summary

The speaker is evaluating four parallel implementations of a new practice writing mode feature on both Mac and iPhone, specifically looking at UI elements like button placement and size. They prefer a small, unobtrusive button on mobile to trigger the mode. A key concern is the placement of controls; the speaker strongly advocates for controls to be at the bottom of the screen, appearing above the keyboard when active, rather than at the top as all current versions do. They plan to simplify the UI in practice mode by disabling normal dock and audio playback features, dedicating the entire screen to practice content, with practice mode controls at the bottom and a clear way to exit. This means the app will enter a distinct "writing practice mode" state, where the stage shows practice content, the dock is hidden, and practice controls, with a fixed height and full width, will occupy the bottom space. The speaker particularly likes version 3 for its underline styling and how it handles hints, showing only one letter at a time, though they want to refine the hint's capitalization. They note a minor layout shift issue in version 3 where underlines don't perfectly align as lyrics are filled in. The plan is to move practice controls to the bottom, allowing underlines to start from the top, ensuring consistent font size and minimizing layout shifts at the top of the screen. On desktop, practice controls will be in a sidebar, while on mobile, they will remain at the bottom.

View full transcript
Okay, I've asked AI to implement the new feature of the practice writing mode, and I have run it in four parallel instances, so we have four versions of the feature. So I have one tab for each, and I'm testing this both on my Mac and on my iPhone in Safari on both. So first, I'm gonna look at the UI, where the button is located, and what I like. So I'm on the phone, like the button to trigger the practice mode, and how much size it's taking. Because I see three of them put it in the same bar, and one of them put it in that other bar, but we're gonna reorganize this anyway, so that doesn't really matter, but just one like small button in one of the bars on mobile. Yeah, I think that makes sense, to trigger practice mode. When it's on there, the button disappears, but I can exit it. Now all of them have put the controls at the top, which I don't like. I think they should stay at the bottom. Having the controls at the bottom, and then just text is. And then you have the keyboard appears, and the controls should appear above the keyboard, and then you'll have the text input at the top. Yeah, so regardless of which mode we choose, the controls should be moved down, so that, you know, the keyboard is active, and then the controls appear right above the keyboard. And then if the keyboard is dismissed, then they can swipe down to the bottom. And for simplicity, while we're in the practice mode, we're gonna disable all of the normal dock features and audio playback features that we have, and specifically have all the screen space focused on practice. Of course, it could be possible to have both, but it's just more complex UI, so for now, we're just gonna ignore it. 
Which means on screen, when the practice mode is enabled, it's gonna be the like practice view in the stage. Then the keyboard if kind of text is focused, and then we'll have the practice mode information and controls at the bottom of the screen. And we will not have the normal dock or the normal song info, and there will just be a way to like finish or exit the practice mode, which then will bring us back to the normal stage view. Yep, so all controls on the page will be removed in the practice mode, except for the practice mode controls, and we'll just therefore make sure that there's always a way to like cancel or finish or escape the practice mode to get back to the normal main view that has the dock and the controls that were not stuck in this view, since we're kind of hiding the controls that we normally usually have there. So this should be a whole, it probably is already, but this should be a whole separate state for the app and for the stage where it's in writing practice mode and the stage is there for showing writing practice content. And the dock is no longer visible, which means then the stage, you know, it should get whatever space it has, so then you could theoretically expand to full screen height, but then we're gonna add in the writing practice controls, which kind of are placed similarly to the dock, although it's a different component. It's again gonna take whatever space that the content require, and then whatever vertical space is left is what is given to the stage. And we'll make sure that these practice controls, you know, once it's created, it's not gonna cause any layout shifts like it has a fixed height. And full width, of course. Alright, for the styling of the underlines, I quite like version 3, so I might choose that one to be the one I continue with just because of that. I also like the way the click to move the focus around works in version 3. 
I also like the way in version 3 pressing the hint only shows one letter of the next word, because then you can still choose the thing over which I guess what the word is before you hit the letter. I wanna change the hint slightly though, so that, because currently it's always showing a capital letter. I only want it to show a capital letter if the first letter of the word is capital. Otherwise, if the first letter of the word is, you know, not capital, then show that instead. Okay, so we're definitely gonna iterate on version 3, because I like the way the UI works. There is a problem with small layout shifts I see as I'm filling in the lyrics, the underlines, they don't perfectly match, so they sometimes shift further to the right as I'm filling in the words. It seems like there's more space between the words after I typed than when there's just the lyrics. Most of the time it doesn't shift, but occasionally it does, and I don't understand why. Although I'd say it's great. Yeah, then there's some more things. Now, since we're not gonna have the writing practice controls at the top anymore, we're gonna have it, you know, at the bottom, it means the underlines can start from the top. And that is nice, because, you know, as in a normal stage, the lyrics start from the top. And that means, ideally, the font size is still exactly the same, and there's nothing above covering it, so the underlines should match up exactly with how it was just looking in the stage view, and depending on whether the font size is big or small, which means there shouldn't be any layout shift. The only layout shift is potentially at the bottom if, you know, the lyric controls might take more or less space than the normal, like, dock and whatever other bars are there. But that's fine, because not really a layout shift, it just, you know, replaces the bottom thing, and it's gonna reveal slightly more or less of the stage, or make the stage, I guess, slightly taller or less tall. 
But it shouldn't affect, like, the stuff at the top. It should look exactly the same, just that, essentially, the letters are replaced with the underlines styling in the practice mode.
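The hint-casing tweak requested above (show the next word's first letter in its original case, not always capitalized) is tiny but easy to pin down; `hintLetter` is a hypothetical helper name:

```typescript
// Reveal exactly the first character of the next word, preserving its own
// casing: "hello" hints "h", "Hello" hints "H".
function hintLetter(nextWord: string): string {
  return nextWord.slice(0, 1);
}
```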

Also, when in the practice mode on desktop, let's just have the practice mode controls in the sidebar, so we don't even need a dock in that case. But on mobile, we'll have them on the bottom.
Saturday, February 7, 2026
8:39 AM ยท 66:07
Essence

The user is testing a new behavior prototype for a music application, focusing on mobile interactions, visual feedback, and loading states, while identifying bugs and suggesting design improvements.

Summary

The user is testing a new behavior prototype for a music application, primarily on mobile, and is recording their thoughts to identify issues and suggest improvements. They note a bug where a "genius highlight" remains focused after its corresponding bottom sheet is dismissed, and request that it unfocuses simultaneously. They also dislike the yellow color of the markers on the seeker bar, suggesting a red alternative for better contrast in dark mode. For the small/big lyrics toggle, they want it to function more as a trigger with a blinking effect rather than a persistent highlight, and for the text to remain centered during size changes with a gradual transition. They are mostly satisfied with the deselection of text when clicking outside a selection, but specifically request that text remains selected when interacting with the three lyric action buttons (copy, save as segment, AI explain). Regarding the "save as segment" button, if a segment already exists, they want a yellow-backed toast notification saying "segment already exists" and the icon to be filled in. For the AI explain button, the text should remain selected when the bottom sheet opens. Finally, they want to simulate loading states for new songs, even with mock data. When a new song is selected, the app should open instantly, but the lyrics stage should display a skeleton loading animation for 1-3 seconds. This animation should consist of pulsing, rippling gray rectangles representing lines, which should be able to transition between small and large sizes like the actual lyrics. They also request changing the highlight color for the marker and loop toggle buttons to avoid confusion with the unique yellow of genius highlights, and for genius annotation cards to have a muted yellow background.

View full transcript
Okay, I created a new behavior prototype on v0 by Vercel, and I'm just gonna test it. And I'm just recording in the background so I can dump my thoughts of anything that I notice, that I like or dislike, or that's especially the bugs, but yeah, we'll see. It's very good. It's for sure better than the previous one. Again, I noticed an issue on mobile. I can open the slide-up sheet by clicking a genius highlight, which focuses the highlight and opens the bottom sheet, is what it's called. That's correct. Click the highlight, it's focused, bottom sheet slides up, and shows it. And then the bottom sheet is also dismissed correctly by pressing the X or above the sheet, but when I do that, the genius highlight should be unfocused at the same time, because I exited the sheet, but the genius highlight is still focused, but now with the sheet being closed. It's a state that shouldn't be possible on mobile. Because on desktop, yeah, so essentially, just make sure when the bottom sheet is dismissed, that the genius highlight is focused at the, or the genius highlight is unfocused at the same time. I'm now still testing on mobile, by the way. For the markers, I don't like the color, since the website is in dark mode and the seeker is kind of white or gray. The markers being yellow, they're hard to see. Let's change them to red, like a light red color, or a deep red maybe. Well, they should be visually distinct on top of the seeker bar once I drag it. So usually the seeker, when it's not progressed there yet, then it's dark, but then as it gets there, then the white bar expands, or the gray, the marker should be visible either way. Either you could switch color or we just pick one that's just better contrast against both and then use red instead of yellow. Specifically for the small slash big lyrics toggle. 
Although we do consider the toggle, we're gonna change the styling on it a little bit where it's not really gonna, usually a toggle has an on and off state, and when it's on, the toggle is kind of filled in or highlighted or something. But we're not gonna do that with this one, specifically on this one, because we're already kind of changing the inside icon from the, where we're like capitalizing one of the A's and then undoing it when it's off. So I want the click effect, instead of being like a toggle, like when it's on, it's constantly highlighted when it's off, it's not highlighted. It should instead work as more of a trigger where when you activate it, it kind of blinks, and then when you deactivate it, it also blinks. Or like every time you trigger it, it blinks. So we'll consider the button more of a trigger than kind of a toggle button, but what it triggers is a toggle action. Also when the text changes from small to big, it should, or from big to small, it should stay centered on the same line approximately. Because right now it seems to be top aligned, so whatever, if it's small, then whatever word is at the top when I make it big, that one is still visible. But then a lot of the stuff at the bottom, of course, overflows out of the screen because the font is bigger. Let's instead have that come from the center. So when it's small, as much as when I'm making it big, the line that was approximately in the center will still be in the center and then you'll see above and below it, but it will, of course, be cut off both above and below compared to the small font since it's now taking more space. And the text, we're changing the size and the font weight, but currently it's an instant transition, but it should be like a gradual transition, which can probably just be done with like a CSS transition. I'm not sure. Just like a very quick transition instead of it being absolutely instant. 
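The trigger-style feedback described above could be a short-lived CSS class rather than a persistent toggled state. A sketch with an injectable scheduler so the timing is testable; the `blink` class name and 150 ms duration are placeholders:

```typescript
// Each activation briefly adds a class that drives a CSS blink animation,
// instead of leaving the button in a highlighted "on" state.
interface ClassListLike {
  add(c: string): void;
  remove(c: string): void;
}

function blink(
  el: { classList: ClassListLike },
  schedule: (fn: () => void, ms: number) => void = (fn, ms) => { setTimeout(fn, ms); },
  durationMs = 150,
): void {
  el.classList.add("blink");
  schedule(() => el.classList.remove("blink"), durationMs);
}
```

The underlying toggle action (small/big lyrics) stays a normal state flip; only the button feedback becomes a transient blink.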
Now, dismissal of the selected text, now I'm testing on mobile, it's mostly working correctly. It's exactly what I wanted. When text is selected and then you click anywhere outside, for example, if I click in the stage but not in the selection, just anywhere else on the stage but over another line, then text is deselected, but that takes, kind of absorbs the click so it doesn't also click the seek on the lyrics in the same click, which is good. It's exactly what I wanted. Same happens if I tap like the buttons in the dock. Then it's not absorbed, but the action is performed and the text is deselected at the same time, which is good because that kind of indicates that I'm not using the text selection. But that's already working correctly. But there's one unique case and that's when text is selected and then I click one of the three lyric action buttons. Those should specifically not deselect the text, but keep it selected. I'm not sure if this is a platform limitation, like if I can natively, if the native system just enforces it when you click anywhere else to unfocus the text, but hopefully it's possible to not do that. Perhaps we detect the click and then prevent default or whatever when it's on the action button and text is selected. I'm not sure about how you do this technically. But it's just on mobile, we have those three buttons at the bottom which are only enabled when text is selected. Already working correctly. When I press them, it shouldn't deselect the text. So the text should stay selected. We have the copy action. That one's good. We get the toast notification that it's copied, but also the text should still stay selected. Then we have the save as a segment button. We get the toast notification again is good, but the text should also stay selected. Additionally on mobile, we have a feature where if you select text that's already an existing segment, then the button doesn't work anymore. 
Now it's just green to indicate this segment already exists so you can't create it. That's good. But since on mobile, like on desktop, you can kind of see when you hover the pointer over it that it's no longer clickable because you see there's no hover effect and the pointer doesn't change. But on phone you can't see that. So it still feels like it's clickable. So what we're going to do is that when they click it, we're going to show a toast notification, which is now kind of like an error or a warning. So this one is going to have, just like instead of the neutral background that all the other toast notifications have, this one is just going to have a slight yellow background to it to indicate like a warning or error. And it's just going to say segment already exists. Specifically when you try to click this green filled in version of the save segment button. Also when it is green, currently the icon is an outline style, which is, it should be for the most part, but specifically when it's the filled in green version, when the segment already exists, I want the icon to also be filled in indicating that it's kind of activated. And as I already said, since it's the lyric action button, well now I think technically this one in the code, in this state where it's green, it's not a button anymore, it's just an icon. But again on phone you don't really understand that, so I'll like naturally try clicking it. So again, click on this one. I guess it should still be a button and not just an icon, but now, you know, I'll show that notification as I said and have a different style so it doesn't create another segment and should not deselect the text. And then the third lyric action button is the AI explain. This one triggers the bottom sheet, which is correct, but in the process it also deselects the text, which means when the sheet slides up, there is no text selected anymore and so the selection mirror is empty and so the function is lost. 
So again, for this one, but it applies to all the three lyric action buttons, make sure the text is not deselected when they're pressed. It should stay selected because it's going to require an actual like deselection action to deselect it, which normally is the user just clicking outside the text selection or another button in the dock. It's like anywhere on the screen that's not the text selection essentially. But uniquely with the AI explanation, when that one is triggered, the text should stay selected, bottom sheet slides up, similar to when a genius highlight is focused and the bottom sheet slides up. There they read the selection mirror, the explanation, and then it's important that when they dismiss the bottom sheet, then the text gets automatically deselected in the same way that the genius highlight gets unfocused. Again now, another comment. Since this is a behavior prototype and we're using mock data and some lips and stuff, we already have all the data here, but I still want to simulate loading because I want to see how it looks. So when a song is picked, you know, from the gallery, we assume it's already loaded.

In the app, it's loaded, we have all the data, because it's like saved in local storage. So there shouldn't be any loading anyways, it would just kind of be instant. Or you know, yeah. But when they search a new song, which we can still do in this app, since we have extra mock data for the search results songs, even though they're not in the gallery yet. So this is the data that now for the prototype, we actually just have on file, but we're gonna pretend that it's being fetched from the API, which means there's gonna be some loading time. So I'm gonna test right now, searching a, like we by default just have five songs in the library. I'm gonna search and pick a new one. Okay, so I picked a song and it opens with the lyrics instantly, which is incorrect. The lyrics, we're not gonna have instantly because like the search results give us just song metadata. I'm not sure exactly, but I think they give us like the title, artist, potential album, like duration of the song. And it gives us the cover art. So it's enough to show them in the search results and to make them clickable to open the page. And since I want the app to be fast and smooth, when we click the search result, even though we don't have the lyrics data yet, we want it to open instantly, which the app already does correctly. And then it should, you know, show the main view, but we don't have the lyric data yet because the backend call to get the lyric data is only triggered when they pick this new song to open it. Because we don't wanna do it in the background for all search results because it would like spam the backend too much. So when they pick a search result, and you know, it's a new song not existing from the gallery, so we're gonna fetch the data from the backend, or in this prototype, just fake it because we have the mock data. The stage, which shows the lyric content, should show a nice loading animation.
For this prototype, let's just do every time there's a new song, we're just gonna pick a random duration between one second and three seconds. That's gonna be the loading time. As the lyrics are loading, we should see a kind of skeleton loading indicator. I'm not sure if that term in its own is descriptive enough. If it is, just use that. But if I had to expand kind of what I see, although I don't know the perfect UI terminology for this, I imagine we, while loading, so this is like an extra state I guess we have to add into our logic. While loading in the stage where we see the lyrics, first of all, some things are disabled. The genius highlights toggle is disabled, is it? Okay, but don't worry about that. Focus on, I don't know what should be enabled and disabled. Okay, focus on the loading of the lyrics. So, you know, each line, we don't have the lyric data yet, but we still, you know, know how lines are displayed in our app. But since we don't have the words, each line should be this kind of like, just a rectangle. You know, like where the height is essentially the height of the letters on that line and the width is maybe kind of random or based on some perlin noise, so it's a gradual transition or it's something like predefined. I'm not sure, it doesn't really matter. But, you know, some form of, so it looks kind of like natural, so that you see a line is represented by this kind of gray rectangle. And there's one for every line that's visible on screen. It should be scrollable in that case, well, it wouldn't matter. So, yeah, stage should not be scrollable. It should just exactly fill the stage content with like lines and then we don't need any more past that. Or it doesn't even need to fill the entire one. You can fill like almost the stage, but then we can leave like one or two lines empty at the bottom. It doesn't matter. Just like make sure we don't have any scrolling, that it's like not overflowing at all. 
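A width generator for those placeholder lines might look like the following; the drift-based randomness is a stand-in for the perlin-noise idea in the memo, and every name and constant here is an assumption:

```typescript
// Generate natural-looking skeleton line widths (as fractions of the
// stage width) without real lyric data. A small deterministic random
// walk keeps adjacent lines similar in width, so the column of gray
// rectangles reads as text rather than noise.
function skeletonWidths(lineCount: number, seed = 1): number[] {
  const widths: number[] = [];
  let w = 0.6; // starting width fraction; arbitrary
  let s = seed;
  for (let i = 0; i < lineCount; i++) {
    // Cheap LCG step for determinism across renders; not what the
    // prototype necessarily uses.
    s = (s * 1664525 + 1013904223) >>> 0;
    const step = (s / 2 ** 32 - 0.5) * 0.2; // gentle drift per line
    w = Math.min(0.9, Math.max(0.35, w + step));
    widths.push(w);
  }
  return widths;
}
```

Because it's seeded, the placeholder layout stays stable across re-renders while loading, avoiding flicker.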
Yeah, and then I also want the animation instead of the lines just having a constant color, I want there to be kind of a pulsing or a ripple animation. So a line should kind of pulse where the opacity or the lightness kind of increases and decreases gradually back and forth. But then this should also kind of ripple so that, you know, all the lines are not in sync with the same opacity or lightness level, but you kind of see if you imagine there's a certain height where the, you know, the lines towards the top are like more strongly visible and the ones towards the bottom are less. And then you see that really just each line kind of changes its opacity or lightness. But what it looks like to a human is like, is like there's a wave of the light of the focus kind of moving down, kind of rippling down. Now I want the text size toggle to still, the lyric size toggle or trigger, yeah, both, to still work while the lyrics are in this loading state. Which means now of course it's not text, it's these lines, but it should, we should still be able to switch between the large and small lyrics, and there might be some unique quirks with, you know, how currently some of the lines like wrap when I make them big or some of them wrap even when they're small. So I'm not sure how this is gonna work for the rectangles because it's not really text, but do something smart here so that, you know, we can transition it from big to small. We might just allow them to flow out of the screen or something. You shouldn't enable any scrolling, but it's fine if it's like cut off at the edges. Yeah, I'm not too sure about that animation. Okay. But again, even if it's big, I think it should not be scrollable in the loading state. Okay, and then it does that and then when it, in the real app that would be actually waiting for the backend to return the lyrics and the prototype, we just set the random duration. 
And then once the data arrives or the fake loading finishes, then we just show the lyrics instead. And since this like, these like loading lines should match the size of the actual like text lines exactly. So when the lyrics pop in, we shouldn't experience a big layout shift. Of course, it's gonna be a little bit different with the word potential line wrapping, word wrapping and stuff. And of course the rectangles are replaced with text, but it should essentially match the same, you know, like size and line spacing and whatever when the lyrics come. Also, I already said to use the, change the marker color, but I also noticed the light toggle highlight for the marker toggle button and the loop toggle button. They use like a yellow color. I don't like that because I found it, I find it confusable with the genius highlights and they are uniquely yellow, which means I don't want that to be our highlight or accent color in the rest of the app. So choose a different color for those buttons or just for the style in general, so that we keep the yellow kind of only to the genius stuff. Also, when we're viewing a genius annotation in the bottom sheet or the sidebar, make that card for the genius annotation have the same genius theme yellow in the background of the card. And it can be kind of muted. It doesn't need to be a super strong yellow, but it should still like be there in the color in the background. Also in the lyric size toggle, in the inside the icon or the text that we have inside the toggle button, the text should specifically be not bold when the toggle is off, but bold when the toggle is on, just inside the button there.
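The pulsing ripple described earlier could be modeled as a per-line phase-shifted sine driving each skeleton line's opacity; the constants are guesses to tune by eye, not values from the prototype:

```typescript
// Opacity for skeleton line `lineIndex` at time `timeMs`. Each line
// pulses on the same cycle but with a per-line phase offset, so the
// bright band appears to ripple down the stage.
function lineOpacity(lineIndex: number, timeMs: number): number {
  const period = 1500;      // one pulse cycle, in ms (assumption)
  const phasePerLine = 0.6; // radians of offset between adjacent lines
  const t = (timeMs / period) * 2 * Math.PI - lineIndex * phasePerLine;
  // Oscillates smoothly within [0.35, 0.65].
  return 0.35 + (0.3 * (Math.sin(t) + 1)) / 2;
}
```

Driving this from `requestAnimationFrame` (or an equivalent CSS animation with staggered `animation-delay` per line) would give the wave-of-light effect without per-line JavaScript timers.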

Now for more loading. As I said, there's some stuff that's gonna load. So just to list all the things that, you know, all the data that comes externally that we need to think about having loading states for and we'll sometimes show loading indicators for. It's the lyrics, as I said. Then we have the actual audio playback comes through. The way we do this is that we get a YouTube link for the exact song on YouTube. And then we have, like in the real version, of course, here we're faking playback in the mock, but in the real version, there will be an embedded YouTube player which we then completely hide and hide all the controls because we use our own controls. But we still have the embedder technically in the page because that's why when we play, it streams the audio through that embed and that turns into our app playing the song. Which means that once we have the link, the YouTube link, then it's ready to play. There's no more loading, I think. There might be some slight loading to load the song the first time even after the link, but I don't think there is. I think once we have the link, we can assume we have all of the data, like it's ready to play, to stream audio. But getting the link is an async call from our backend that might take time, which also is triggered after they pick the new song from the search results. Then this is a separate parallel async call that goes out to get the link. And this might happen slower than getting the lyrics or faster than getting the lyrics. We don't know. So this is an async call and I'll note soon how that should affect our UI. Then getting the Genius highlights is another separate async process, which again starts once they pick the new song from the search results and open it. Which again means some things need to be disabled and potentially show loading. 
And finally, the AI explanations are async operations, but these are unique because they do not happen when the song is opened and it doesn't matter if it's a new song or an existing song. They are specifically triggered when the user selects any lyrics and then presses the AI explain button. And so this, whenever this button is pressed, it's always gonna require a data fetch and a waiting time, even if it's a song we have from before. Because, you know, we're not running AI explain on every single combination of lyrics. We only do it when they pick a specific one and trigger it. Which means... Yeah. Okay, so for each of these async operations in this behavioral prototype with mock data, we want to simulate the loading. So as I said, for loading the lyrics, you know, just pick a loading time and we have some slight randomness there just so I can see when I play with it that it's sometimes slightly different to experience how that feels like. Do the same. Use the same randomness and range for getting the audio and getting the Genius highlights, which means when I open a song, these are like three different parallel loadings and they have the same range of like how long they might take, but we just do a unique random call for each of them so it might be... So they might get different values, different times, and that's fine because that's gonna simulate how we don't know which order the data is gonna arrive. So I already said what we're doing for the lyrics. And of course then, while it's showing the skeleton loading and not the actual lyrics, then of course it's not selectable or anything, which means that's fine. Our lyric action buttons are just gonna stay disabled because there's nothing to select. For the Genius highlights, as they're loading, Genius highlights need to be disabled. 
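The three simulated parallel loads (lyrics, audio link, Genius highlights) could share one delay helper, each drawing independently from the one-to-three-second range mentioned earlier, so arrival order varies between runs; the helper name is illustrative:

```typescript
// Draw a simulated fetch delay. Each of the three parallel loads calls
// this independently, so they resolve in an unpredictable order, which
// is exactly what we want to exercise in the UI.
function randomDelayMs(
  min = 1000,
  max = 3000,
  rand: () => number = Math.random,
): number {
  return Math.round(min + rand() * (max - min));
}

// Per-resource simulated fetch (illustrative):
// const lyrics = new Promise((res) =>
//   setTimeout(() => res(mockLyrics), randomDelayMs()));
```

The injectable `rand` parameter is just there so the range logic can be tested deterministically.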
And also, if lyrics are loading, like if we get the Genius highlights back before we have gotten the lyrics, the Genius highlights still need to be disabled because we don't have the lyrics to put them on. And I specifically want to show the loading of the Genius highlights inside the Genius highlight toggle button. But I still want, even if they are loading, I still want the button to work in the sense that it can be enabled and disabled because I want the user to be able to control this state is just whether it should show the Genius highlights over the lyrics, and I want them to still be able to turn this on and off even if we don't have the lyrics or the highlights yet if it's loading. So the way we're gonna show it is just by changing the icon inside. So the toggle or the button is still enabled, the toggle can still be clicked on and off, and we're gonna see it being on and off by the light, you know, turning yellow or turning off as we already have. But inside it, we have of course the G as the kind of icon. And this is what we're gonna replace. So while Genius highlights are being fetched or loading, or while we're simulating it in the prototype, instead of showing the G inside, we're gonna show a circular loading indicator inside. Mind you that it's still clickable to turn it on and off, which is gonna affect the highlight or background color of the button, but it's still showing the same circular loading indicator inside. Once the Genius annotations have been received, the data comes back, then we put back the G inside the button instead of the circular loading indicator. Now, if it comes back and it turns out there's no Genius highlights for this song, essentially we get it back and then we look at either there's, yeah, there's like, there's no highlights to put on the lyrics of the song. Then we got the data back, so we're gonna, you know, replace the loading indicator with the G. 
But if there's no highlights to put, then we will just disable the button. Should we do that? Hmm. No, let's do this. Yeah, don't disable the button, sorry. But if we're on a song and we're already loaded, we've already done the, you know, loading of the Genius highlights, but there's actually no Genius highlights to show on the song, then when they enable the toggle, it should show a toast notification like, sorry, no Genius highlights for this song, or whatever, just like phrased informatively and concisely. They can still toggle it on and off because I like them having that control over appearance, but we just show the toast notification like that there's no highlights to show. And that also means if they don't actively press it on and off, but it is on, just like when they open a new song and they load it and the toggle is on, and then it's fetching the lyrics and the highlights, and then once the data comes back, it sees that there's no highlights to put on the lyrics, then it should again show the toast notification, even though they didn't actively click the button right now, it was just on from before, or they turned it on as we were fetching the data before we got the lyrics and highlights back. Well, once they come back, that should also like trigger a check to see if it's on, but there's no highlights to show, then also show that same toast notification, sorry, no highlights to show. And then they can still turn it on and off as they please. And we'll show the toast notification each time they turn it on on the song. You know, they can still play with it as much as they want. And since the highlights might come back before our lyrics, what we'll do in this case, we're not gonna try to show highlights on the like skeleton placeholder for the lyrics. So what we'll do instead, it's obvious from the UI that the lyrics are still loading, and it's obvious from the UI of the Genius highlights toggle button whether it's on or off. That's fine. 
And then it's obvious once the results come back because inside the Genius highlights toggle button, the loading indicator will be replaced with a normal icon, the G. And so if that comes back before the lyrics have come back, it's just fine. We see that the button, you know, is off or on, but in the example where it is on, we see that it's on, but it's still not showing any highlights on the screen because there's no lyrics there yet. It's just the skeleton placeholders. And that's fine. And it's only once the highlights have come back and the lyrics have come back that we do that check for whether there actually is any highlights to put on the text. And that's at that point where we would, when the lyrics appear in the stage, and if Genius highlights are on, that's where we do the check, like if there's no actual highlights to display on the song, then we show the toast notification informing that there's none. Okay, now about loading the audio. As we're waiting for the audio data to be delivered, essentially the YouTube link, or as we're faking it in the prototype, we need to disable some features of the dock, some of the playback features. So the seeker itself should appear disabled, and we don't have any timestamps to show regarding, like, you know, duration into the song and total length of the song. Play pause button should be disabled.

Loop toggle should still be active and available, though, because it's just a toggle. Marker toggle should be disabled. Rewind button should be disabled, and segment picker and segment anchor should also be disabled, which means there is no way to trigger to open the segment navigation menu, that's good, because you would have to click the segment anchor. But then in the segment anchor, in addition to it being disabled, you know, it also usually shows the name of the segment that we're on, and by default that's the full song segment. And that's fine. Even though we haven't... Wait, is that fine? I think the segment anchor should be disabled, and instead of showing the title of the active segment, which by default is going to be full song anyways, but we haven't, you know, gotten the lyrics yet, so we don't have the segments yet. We could pretend like the full segment, full song segment still exists, but I think actually in that case, while... Oh yeah, this is interesting. Actually, the enabling or disabling of the segment picker, segment anchor, it's not tied to the loading of the audio, it should be tied to the loading of the lyrics. So while lyrics are loading, it should be disabled, and in the title, instead of saying the name of the segment or full song, it should say fetching. And then in the case that the lyrics come back before the audio, you know, parts of the dock are still disabled, but then as the lyrics come back and then they show in the stage instantly, then this segment anchor should be enabled instantly as well at the same time, and then by default shows the full song segment, but it can now open the segment navigation menu to pick the other automatically created segments or any custom created segment, which, you know, still is going to work fine even though potentially other stuff is still loading. Okay.
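Pulling the enable/disable rules above together, the dock and toggle states could be derived from the three loading flags; the state shape, field names, and the "Full Song" label are assumptions based on the memo, not the prototype's real code:

```typescript
// The three independent loads that gate parts of the UI.
interface LoadingFlags {
  lyricsLoading: boolean;
  audioLoading: boolean;        // waiting on the YouTube link
  highlightsLoading: boolean;
  activeSegment: string | null; // null until lyrics (and segments) exist
}

// Derive control state from loading flags rather than toggling each
// control imperatively, so inconsistent combinations can't occur.
function deriveControls(f: LoadingFlags) {
  return {
    // Audio-dependent controls.
    seekerEnabled: !f.audioLoading,
    playPauseEnabled: !f.audioLoading,
    rewindEnabled: !f.audioLoading,
    loopEnabled: true, // just a toggle; works regardless of loading
    // Lyrics-dependent: the segment anchor follows the lyrics fetch.
    segmentAnchor: f.lyricsLoading
      ? { enabled: false, label: "Fetching" }
      : { enabled: true, label: f.activeSegment ?? "Full Song" },
    // The genius toggle stays clickable; only its icon reflects loading.
    geniusIcon: f.highlightsLoading ? "spinner" : "G",
  };
}
```

Deriving everything from one flags object also makes the arrival-order permutations (lyrics before audio, highlights before lyrics, and so on) fall out for free.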
Specifically for the loading indication, while loading the audio, the seeker bar and play pause button, while disabled, they should show a pulsing animation. And I think this exists already, but the song should always be paused when it's like opened. And also, side note, completely unrelated to all this, if you're like in a song working, whatever, and then you open the main navigation menu, which is for like switching to a different song, even though it might be dismissed to go back to the same song, when you open that menu, it should pause playback of the song, which I think it probably already does, but just double check that. And that's the same when opening the, when you're in a song and open the segment navigation menu, we should again pause playback. But specifically, if we were in a song and it was playing and they press the segment navigation or the segment anchor to open the segment navigation menu, or they open the main navigation menu, then playback is paused while that menu is open because they're probably going to switch to a different song or a different segment. But if they end up just dismissing that menu instead or picking something else, then we resume playback. In this specific case where like playback was already going and it was kind of automatically paused by them opening the menu, then we'll automatically resume it if they dismiss the menu. But if it was just paused when they opened that menu, then you just, and then they dismiss it and it's just still paused. This logic might be in there already. I'm not sure. Actually, I see specifically that playback is not being paused when opening either of those menus right now. So that should be implemented. Okay, now so far all that testing and comments were made from mobile, although a lot of it is, just applies regardless of device, but some of the things like the bottom sheet is specifically on mobile. 
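The pause-on-menu behavior just described could be sketched as a small state transition that remembers whether playback was going, so dismissal can resume it only in that case; field names are illustrative:

```typescript
interface PlaybackState {
  playing: boolean;
  resumeOnDismiss: boolean; // was playback going when the menu opened?
}

// Opening either the main or segment navigation menu pauses playback
// and records whether it was playing.
function openMenu(s: PlaybackState): PlaybackState {
  return { playing: false, resumeOnDismiss: s.playing };
}

// Dismissing the menu without navigating away resumes only if playback
// was auto-paused by the menu; a song that was already paused stays paused.
function dismissMenu(s: PlaybackState): PlaybackState {
  return { playing: s.resumeOnDismiss, resumeOnDismiss: false };
}
```

Navigating to a different song or segment would instead discard the flag, since the new context starts paused anyway.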
So now I'm going to test the same stuff on desktop and I'm just going to point out if there's anything else. And now I'm on desktop. Yeah, a few notes. We normally have a hover effect hovering over the line because we have the click to seek feature. That's good. But specifically when text is selected, the first click outside the text selection, even if it's over a different line, it's only deselecting the text and not clicking to seek. This is already working correctly. But the problem is that even though this is working correctly, it's still showing the hover effects and the mouse pointer that's like for clicking on something when I hover these other lines while text is selected, even though the first, so it indicates that the click would click the line and click to seek, but it doesn't because we have stolen that action by first deselecting the text and requiring a second click. Therefore, if text is selected, the click to seek feature is essentially disabled because the first click will just remove the text selection and not perform a click to seek action. And it's only when after that then text is deselected and then click to seek is working as normal. So the only thing we need to change is to make sure while text is selected, the click to seek feature should be fully disabled, which means the lines are no longer clickable, which means there's not a hover effect. There's not, the pointer shouldn't be the like link clicking pointer. It should just be the normal pointer kind of indicating that you're just kind of clicking in empty space because that's intuitively going to tell you that also text is deselected when you click because you're just clicking outside the text selection. Another change on desktop. Now, do I wanna do this? Maybe not.

On desktop, it's already correct that when a genius highlight is unfocused, the selection mirror is cleared, and therefore the genius annotation disappears from the sidebar. This is good. But there's a problem if, you know, the other way we get text in the selection mirror and in the sidebar is by selecting text, and then selected text shows in the selection mirror correctly. Then I click the AI explain button to get the AI explanation. Already works correctly. And then next, when I unselect the text, the selection mirror is correctly cleared, but the AI explanation stays on screen. That's incorrect. When the selection mirror is cleared, in this case, the AI explanation card needs to disappear in the same way that when the genius highlight is unfocused, the genius annotation card disappears. Also, text selection should be completely disabled in the sidebar and the dock and the bottom sheet, which is not the best practice UX, to be honest, but it's because of the current logic that would mess with our selection mirror functionality. And so for now, the easiest is just to disable text selection on anywhere else than the lyrics and the stage. Also in the segment navigation menu, text selection should also be disabled. The only place on the whole website that text selection should be... Also in the main navigation menu, text selection should be disabled. Search field there, everything. The only place in the whole app that text selection should be enabled is on the lyrics in the stage. Also a small note. Since the only real screen is the main view, and if we open the main navigation menu, although it's full screen, it's really just a pop-up over, then there is a case where we might dismiss that menu, and if we do and go back to the song, I want the text to still be selected if it was selected before. So that means two things. If text is selected and we open the main navigation menu, we need to make sure the text is still selected. 
It's not deselected, because the main navigation menu just appears as an overlay above, so we don't see it, but the text should still be natively selected below. That means that if text is selected, then clicking the disk to open the navigation menu should not deselect the text. And likewise, when dismissing that menu by clicking the X button, that click should again not deselect the text, so that when we get back to the main view, the text is still selected in the same way. But, of course, if we navigate to another song through the main navigation menu, then it's a new song, so the text is deselected because there are new lyrics on the stage.
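One way to guarantee the card behavior described above is to drive the whole sidebar from a single reducer, so clearing the mirror can't forget the AI explanation card. This is a sketch with made-up names, not the app's actual state shape.

```typescript
// One source of truth for what the sidebar shows. Clearing the
// selection mirror always clears anything derived from it (the AI
// explanation card), mirroring how unfocusing a genius highlight
// hides the annotation card.
type SidebarState = {
  mirrorText: string | null;    // selection mirror contents
  aiExplanation: string | null; // AI explain card
};

type SidebarEvent =
  | { kind: "select"; text: string }
  | { kind: "explain"; explanation: string }
  | { kind: "deselect" };

function sidebarReducer(state: SidebarState, ev: SidebarEvent): SidebarState {
  switch (ev.kind) {
    case "select":
      // A new selection replaces the mirror; a stale explanation is dropped.
      return { mirrorText: ev.text, aiExplanation: null };
    case "explain":
      return { ...state, aiExplanation: ev.explanation };
    case "deselect":
      // Clearing the mirror must also remove the AI explanation card.
      return { mirrorText: null, aiExplanation: null };
  }
}
```

With this shape, the bug above (mirror cleared, explanation lingering) is unrepresentable.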
0b62a3df72de12b827910cacd8d2d098faaceaac3875419693b9b3cf80fa033e_05de06ee8c57.m4a
Friday, February 6, 2026
11:31 AM · 2:15
Essence

The speaker is impressed by the AI agent's independent decision to include keyboard shortcuts in the navigation menu, a detail they plan to incorporate into their official documentation.

Summary

The speaker recorded this memo to highlight a specific, unrequested feature implemented by the AI agent: the inclusion of keyboard shortcuts (Command K to toggle, Escape to close) within the main navigation menu. They appreciate this detail, noting it's a desktop-only feature, and plan to add it to their official documentation. While generally pleased with the design, fonts, and colors, and how the agent handled various highlighting styles, the keyboard shortcut integration was the only truly novel and unprompted addition worth specifically mentioning.

View full transcript
This voice memo is specifically to point out things the AI agent did which I like and which I don't think I specifically mentioned or asked for in the docs, which means it essentially just chose or invented them itself, but they're something I might want to store in my actual docs as part of the actual implementation plan. So first of all, there's the design and fonts and colors and everything, which I'm happy with, but I'm not gonna verbally spell all that out here. I like that specifically in the navigation menu, the main navigation menu, it added some small info showing the two relevant keyboard shortcuts, which are Command-K to toggle the menu and Escape to close. It shows that in small text at the bottom. I thought that was cool, but only on desktop, of course, because that's where we have the keyboard shortcuts. It also handled correctly how it's going to look with the genius highlights together with lines being highlighted when active or when I click to seek to them, where there are different kinds of highlights: some have colors, some have, like, text size and background changes. So it all works well together. Although I'm again not going to spell out exactly how it did it, so it doesn't really matter. You know what, that might be the only thing that I didn't explicitly list out and that isn't just a kind of general design decision I'm not gonna spell out, actually. Yeah, everything else is essentially covered. So this voice memo is kind of useless then. It's just this one thing.
075f596b37b52ae8b3e8693bf031112f2f2b9656c4676d78d213c09fcd0464a9_9ca40f88e0fc.m4a
Friday, February 6, 2026
9:55 AM · 93:20
Essence

This memo details a series of UI/UX issues and desired improvements across desktop and mobile versions of an application, focusing on navigation, song state management, text display, and seeker functionality.

Summary

The speaker begins by noting a desktop navigation issue where the menu can be dismissed with Command-K even when no song is active, which shouldn't be possible. They then clarify that on the navigation screen, the currently active song's information (cover art, title) in the sidebar should be clickable to dismiss the menu, similar to how song gallery and search results work. They confirm that local storage is successfully saving practiced songs in the gallery across reloads, but markers are not saving and appear to be random, suggesting they should be removed by default. The speaker also wants to adjust text sizes, making the 'big' text larger and 'small' text smaller, and increase the font weight for the 'big' mode to resemble the Spotify app's lyric display. Regarding song state, the speaker decides that while the active song should be stored in local storage to prevent returning to the home screen on reload, the specific active segment within a song should not be saved. Instead, loading a song should always default to the full lyrics segment. They clarify their terminology, explaining that the 'main view' refers to the stage, sidebar, and dock, and that navigation menus and mobile slide-up sheets are considered overlays rather than separate screens. They identify an issue with the auto-segmentation algorithm for lyrics, suggesting it needs more advanced processing, possibly using an LLM, to accurately split songs into verses and choruses. Further UI issues include the stage's bottom scroll on both desktop and mobile, which scrolls too far, leaving empty space below the last line. The speaker estimates this excess scroll is about 30% of the stage height and needs to be corrected. They also address seeker functionality, noting that on desktop, dragging the seeker causes unwanted text selection and feels laggy. They suspect the lag is due to either excessive processing from updating on every mouse movement or an intentional but poorly implemented easing animation. 
They desire instant, precise seeking that teleports to the cursor's horizontal position while dragging, maintaining marker snapping. On mobile, the seeker is too thin and also exhibits similar lag, requiring a thicker design and instant, precise seeking based on relative delta X. Finally, they observe that on mobile, dragging on the dock causes the entire page to slide up or down, which seems to be a native browser behavior they are unsure if they can disable.

View full transcript
Starting on desktop, the navigation page looks good, but there is an issue: you can close it with Command-K even though no song is picked yet, in the initial state when there's no previous or current song, so it shouldn't be dismissible in that case. Like, it's not dismissible by X, but it still is with Command-K, and that shouldn't be possible. Afterwards, also, when there's an active song: I said in my notes that I wanted it to just go back to the song, which it's not doing, but honestly, let's change my instruction. Let's just say on the navigation screen, when there's a currently active song, in addition to the song gallery and search results that are already clickable, the place in the left view and the sidebar top view on mobile where it shows the current song should be clickable as well. Not the whole container, but the song info: the cover art or the song title or metadata. The song info should be clickable, which also just kind of dismisses the menu. Like, it takes you to that song, but since it's active, it just dismisses the menu instead of reloading the song. Otherwise, it's good. Now I'm going to test that one on mobile as well. First of all, I see it is storing with local storage, it seems, so that's good. I wasn't sure if that was going to work, but it does seem to be working. Let me double-check that. Yeah, it does seem to be storing at least, you know, practiced songs in the gallery, even past reloads and stuff, so that's good. I haven't tested storing the actual state within songs. That's really not that important for this version 1. I don't think there's anything to save within the song except for marker sections, so I'm going to test that. It's not saving the markers, and as for the markers it is showing, I'm actually not sure where they're coming from, whether we store that in the mock data. I don't think we did, so I think it just put random markers.
So that should be discarded even for the behavior prototype. I don't want to have any markers there by default, only the ones that the user has set, and then those should be remembered. For the big and small text, now I'm looking on desktop, I think the difference should be bigger. So let's make the big text slightly bigger and the small text slightly smaller than what it is currently. This is kind of a side note, but I remember when I said previously, or in the notes, that it's gonna save the state for the song, like any previously practiced song, I was thinking it should also save which segment is active, but actually, thinking about it now, I don't think it should. Whenever you load a new song or just switch song, you should always get the full lyrics segment initially. Even if there's a song where, the last time you used it, you practiced a specific segment, then you navigate to a new song and later navigate back to that song, it should just go to the full lyrics segment again on that song. Which I think it is already doing correctly, because it's just not storing that kind of state of which segment it was last left on. I think that's only an in-memory thing and not a local storage thing, and that's completely fine. However, I do notice that when I'm in one song and I reload the page, it goes back to the home screen. So I do want to store in local storage the state of which song is currently active. And so if no song is active, which is the initial state when someone uses the app for the first time, then we get taken to the navigation screen, but otherwise it's usually going to load directly into the main view where we have a song active and we see the full lyrics. I guess my terminology is a little bit mixed up. When I say the main view, it's the view with the stage and the sidebar and the dock.
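The persistence decisions a few sentences back (store only which song is active, never which segment, and fall back to the navigation screen when nothing is stored) could be sketched like this; the storage key and function names are assumptions.

```typescript
// Persist only which song is active; never the active segment, so a
// song always reopens on its full-lyrics segment. Key name is assumed.
const ACTIVE_SONG_KEY = "activeSongId";

// No active song (first run) -> navigation menu; otherwise main view.
function initialView(storedSongId: string | null): "main" | "navigation" {
  return storedSongId ? "main" : "navigation";
}

function saveActiveSong(id: string): void {
  if (typeof localStorage !== "undefined") {
    localStorage.setItem(ACTIVE_SONG_KEY, id);
  }
}

function loadActiveSong(): string | null {
  return typeof localStorage !== "undefined"
    ? localStorage.getItem(ACTIVE_SONG_KEY)
    : null;
}
```

On page load, `initialView(loadActiveSong())` picks the screen; `saveActiveSong` is called whenever a song becomes active.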
It's not the navigation menu or the main navigation menu because, well, that one is also full screen, but I've kind of defined that as a pop-up or overlay above. So there's really only one screen, technically; even though the navigation menu is full screen, it's not really a screen in itself, it's just that overlay or pop-up. And then on mobile, the slide-up sheet is also just a pop-up or a slide-up overlay over the single main view, and the navigation menu on mobile, although it's full screen, is also still just a pop-up overlay. So that's confusing terminology, but that's what I mean when I say main view. Maybe I should rename it to something, like main screen; I think main view is fine. And the main view always has an active song, otherwise we can't show it. So only when we don't have an active song do we default to the navigation menu instead. And then, as I said earlier, we don't allow dismissal of the navigation menu if there's no active song. Right. Then, within the segments, I noticed the auto-segmentation algorithm needs to be improved, because I just said to split on empty new lines, splitting on paragraphs, but some songs have that and some songs don't, and some songs have it a lot, at least the way they're formatted in our lyrics source, so it doesn't work like this. I need more advanced processing for that, but that can honestly be a future note. I might actually want to do LLM processing for that. So let's take note of that, but yeah, that's something that needs to be changed or improved: the algorithm for automatically splitting the full lyrics of a song into segments, which is going to represent the verses and the chorus, like one segment per verse or chorus. Then, within the stage view, when I scroll, the scroll at the top is great, and overall the scroll on the page is good, where the main page is not scrollable for the most part. I'll have more comments on this later.
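For reference, the naive fallback mentioned above, splitting full lyrics on blank lines, can be written as a small pure function that the future (possibly LLM-based) segmentation pass would replace.

```typescript
// Current fallback: split full lyrics into segments on blank lines.
// As noted above, this breaks when the source formatting has no (or
// too many) blank lines, so this is only the naive baseline, not the
// final segmentation strategy.
function splitIntoSegments(lyrics: string): string[] {
  return lyrics
    .split(/\n\s*\n+/) // one or more blank lines between segments
    .map((s) => s.trim())
    .filter((s) => s.length > 0);
}
```

A smarter pass would keep this as a last resort when the model-based splitter fails or is unavailable.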
So all the sizing, positioning and scrolling is good, and the stage is scrollable as I wanted. And note, there's now no scrolling indicator in the stage; usually on a page there's a scrolling indicator on the right so you see how far down you are. We don't have that in the stage right now, which is kind of nice because it's clean, but it might also be useful to have it, to show that it is scrollable and to know how far down you are. So I'm not decided on this, but it's definitely a note for something to think about and maybe implement. But there is a problem with the bottom scroll on the stage. The top is fine, but when you scroll to the bottom, it goes too far. It should stop earlier. As I specified in the design documents, it should not allow scrolling further than the bottom line being at the bottom of the stage. Whereas now it allows scrolling a little bit further, so when I scroll down to the bottom, there's some empty white space below the bottom line before the dock. And it's the same on both desktop and mobile. I'm not sure if in the code this was omitted, simply forgotten about, or if it used the wrong calculation or the wrong hitbox or something, or if it's literally a calculation of the height that's wrong. It's hard to be exact, but it looks like this: if we consider the stage height, which is almost the full screen height but not fully, because we have the dock below, then it's supposed to limit the bottom scroll so the last line is revealed from the bottom. But after that, it scrolls about 30% of the stage height more, leaving the bottom 30% just empty if I scroll all the way to the bottom. That should not be possible; it should stop before that. The 30% is approximate, so I don't think that number should be used for the calculation.
I think there is probably some other, better HTML or CSS thing to do based on the hitboxes or content or something, I don't know. That's just to give a reference for how it's appearing. And again, that scrolling goes too far on both devices. And regarding the text size thing, which I mentioned earlier, where I wanted to make the big mode bigger and the small mode smaller: that applies to both, except that on mobile, the big size is already good. But otherwise, it should be changed. And then, for the big mode, let's also increase the font weight, like make it bold or extra bold or something. I want the big mode to be more similar to what it is in the Spotify app when you're playing a song and reading the lyrics. I can find a screenshot for reference; if you know how it looks, then good, and if not, then it's fine. It's not that big when you're watching on a computer, to be honest, but it's meant to be super easily readable. And then the small mode is more for the practical, actually-reading case, when you're focused.
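The scroll cap described above is simple arithmetic: the stage's scrollTop should never exceed content height minus visible stage height. A sketch follows; note that with plain CSS overflow scrolling the browser enforces this cap itself, so the reported ~30% overshoot likely comes from extra bottom padding or a miscomputed content height (that's an assumption, not a confirmed diagnosis).

```typescript
// The stage should never scroll past the point where the last line
// sits at the bottom of the stage: scrollTop is capped at content
// height minus visible stage height, and is never negative.
function maxStageScrollTop(contentHeight: number, stageHeight: number): number {
  return Math.max(0, contentHeight - stageHeight);
}

function clampStageScroll(
  scrollTop: number,
  contentHeight: number,
  stageHeight: number,
): number {
  return Math.min(
    Math.max(0, scrollTop),
    maxStageScrollTop(contentHeight, stageHeight),
  );
}
```

If a custom scroll implementation is in play, every programmatic scroll update would pass through `clampStageScroll`; if native overflow is used, the fix is removing whatever padding inflates the content height.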

Then, there are some annoyances with the seeker, which I'm not sure we can fix. It depends on the technical... how do you say? It depends on what's actually possible in a web app, which I'm not sure about. But starting on desktop, because it's different on desktop and mobile: with the seeker on desktop, click to seek works great. But when I drag, it's a little bit weird. First, while dragging the seeker, if I also drag to the side and up or down, it's selecting other elements and performing hover effects on other parts of the screen, which I don't want. So while I'm dragging the seeker, it seems like every other part of the page also just considers it a normal mouse drag from that position on the screen to any other place, which means it selects text if I drag over it. I was going to say that when I hover buttons it shows the hover effect, but actually it doesn't, so I was incorrect about that. But it does select text, which is annoying. I don't want that. So I'm not sure exactly what to do here, but we need to disable something while dragging. We essentially need to disable whatever other normal mouse drag or mouse interactions would happen, because we specifically started a drag on the seeker, so at that point the focus is on the seeker. Nothing else should be affected by the drag. And I guess the only concrete thing I can see affected is that text gets selected. It doesn't seem to be doing hover effects on other things, I guess because it knows the mouse is already down as I'm dragging, right? So I guess it's only the text selection that makes it look weird. And then the other thing with the seeker on desktop: when I'm dragging, it's just laggy. I don't understand why.
What I mean by laggy is that, in the left-to-right movement, it's always seeking correctly to the x position of the cursor, but it just feels slow, like it's lagging behind the cursor. It seems it doesn't update often enough. I would want it to update every frame where the mouse moves, or at least where the mouse moves horizontally. It's hard to describe exactly what the lag is. It just seems like there's kind of a low refresh rate on the seeker position while dragging, but also it seems like the more I'm actively moving the mouse, the slower it is. So maybe it is already trying to update on every mouse event and that's just too much processing, which makes it lag. Or there's something else, I'm not sure. Because it seems like it's better at seeking if I drag the mouse quickly and then stop, whereas if I drag it but then also keep moving it, the movement itself seems to prevent the seeker from actually seeking there. So for example, if I start all the way on the left and drag all the way to the right, it's going to essentially instantly follow the mouse. It looks smooth enough, to be honest. But if I start at the left, click and hold the seeker, drag to the right, and keep wiggling the mouse over there, then it moves to the right super slowly. As long as I'm wiggling the mouse, it kind of freezes, and only when the mouse is standing still does it keep seeking, and then it tries to move quickly; but if I move the mouse again and wiggle it a lot, it kind of freezes again. So even though the mouse is already all the way on the right, the seeker only jumps over there afterwards, because it seems to get frozen every time the mouse moves too much. So that's weird. I don't know exactly why that's happening. My best guess would be that maybe it's trying to update on every mouse movement and therefore there's increased processing, which actually just makes it lag.
But then it seems that maybe there's actually some built-in easing of the movement, or like a max velocity of the seeker or something, to smooth it out and make it feel nicer, maybe. Especially when moving short distances. I don't know if that's actually built in or if it's just from lag, but I don't want that. Because it seems here, when I drag the mouse from left to right, that it's trying to gradually animate there instead of just teleporting there. Which seems like a small UI detail that tries to be nice, but it actually just doesn't work out well and is annoying. So let's just disregard that if it exists, and instead, when seeking on desktop by dragging, every time the mouse moves horizontally, it should just, you know, teleport, instantly seek to that timestamp. Now on mobile. On mobile we have the slightly different seek. First of all, the seeker on mobile is too thin. It should be double the height it is currently, just to make it easy to hit with a thumb, you know. By the way, side note: on desktop I said it should teleport to the mouse, which is true, but we still keep the snapping behavior if you're close to a marker. And it's good that even if I start a drag on the seeker but then move my mouse a lot up or down while dragging, it still just ignores the vertical movement and keeps the exact horizontal movement, which is perfect. Just gotta remove that text selection on drag. Over to mobile: the seeker needs to be thicker, as I said. And then here we have the relative delta scrubbing, or whatever it was called. It works okay, but again, it seems to be lagging in the same way as on desktop. So I'm assuming the solution there is similar.
We should make sure it's always teleporting, instantly seeking to the right timestamp, but now, instead of it being the direct x position of the finger, it's based on the relative delta X through the drag motion. That's already how it's working; we just want this instant and precise seeking instead of whatever animation or lag is going on. A big problem on mobile. The page itself is not scrollable, so it's the right size and height, and only the stage is scrollable, as it's supposed to be. But there's a problem which might be out of our control. I'm not sure; it might just be an effect of it being a web app. Since the stage is scrollable (if we're at the top or the bottom it's a little bit different, but as long as we're in the middle of the stage), tapping and dragging anywhere there just makes it scroll normally while the dock stays the same. That's how it's supposed to be; that's good. But if I, specifically on mobile, hold on the dock and then drag up or down, the whole page kind of slides up or down. As soon as I let go, it bounces back, and this seems to be a built-in native browser behavior, which is why I'm not sure if we can disable it. Because specifically if you do it downwards, the feature behind that is the swipe-down-to-reload-the-page gesture. And if you do it upwards, there's nothing, but I think the browser just means to indicate that it is actually the bottom of the page. Actually, I'm gonna try it in Chrome as well on my phone right now, because this was Safari. Yeah, it's behaving the same in Chrome. So I'm not sure if we can disable this, but if possible, then we should. When you're at the bottom, or swiping up from the dock, it shouldn't be able to slide up more and just reveal blank space below before it bounces back. It's not a big problem, but it just makes the UI confusing.
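The instant-seek behavior for both devices can be split into pure mapping (delta-x to timestamp, plus marker snapping) and a per-frame apply step. The requestAnimationFrame pattern below addresses both suspected causes of the lag from earlier, per-event processing and easing, by coalescing pointer moves into at most one instant seek per frame. All names here (`player.seekTo`, pixel scale, etc.) are assumptions.

```typescript
// Map a drag to a timestamp. Desktop would feed the cursor's absolute
// x mapped to time; mobile feeds relative delta-x from the drag start.
function timeFromDeltaX(
  startTime: number,
  deltaX: number,
  secondsPerPixel: number,
  duration: number,
): number {
  const t = startTime + deltaX * secondsPerPixel;
  return Math.min(duration, Math.max(0, t));
}

// Magnetic snapping: jump to the closest marker within `threshold`.
function snapToMarker(t: number, markers: number[], threshold: number): number {
  let best = t;
  let bestDist = threshold;
  for (const m of markers) {
    const d = Math.abs(m - t);
    if (d <= bestDist) {
      best = m;
      bestDist = d;
    }
  }
  return best;
}

// Coalesce pointer moves: record the latest target on every move, but
// apply it at most once per frame, as an instant seek with no easing.
if (typeof window !== "undefined" && "requestAnimationFrame" in window) {
  let pending: number | null = null;
  const applySeek = (t: number) => {
    void t; // player.seekTo(t) - assumed player API
  };
  const onPointerMoveTarget = (t: number) => {
    if (pending === null) {
      window.requestAnimationFrame(() => {
        if (pending !== null) applySeek(pending);
        pending = null;
      });
    }
    pending = t;
  };
  void onPointerMoveTarget;
}
```

The same coalescing works for desktop absolute-position seeking; only the mapping function differs.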
And again, if you swipe down from the dock, it tries to reload the page; or if the stage is scrolled all the way up to the top, then it understands the stage can't be scrolled more, so the scroll kind of escapes the stage and instead applies to the whole page, which again triggers the swipe-down-to-reload. And I hope I can disable that. The next point is that this interacts with the seeker on mobile as well, because when I'm dragging to seek left or right, if I also start moving my finger vertically during the drag, it does the same page movement as if I were dragging vertically from the dock. And so, especially while dragging the seeker, it's very important that we can disable this, because it makes the UI bounce around and hard to follow as I'm trying to drag horizontally but accidentally drag a little bit vertically. It's a very confusing layout shift. Hopefully this is possible. It might be a native browser behavior, but it kind of breaks the app.
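On the "can we disable it" question: two standard CSS properties target exactly these behaviors, though support varies by browser (older iOS Safari in particular lacked `overscroll-behavior`), so this is a hedged sketch rather than a guaranteed fix. `overscroll-behavior: none` stops scroll chaining and pull-to-refresh when the stage hits its edge, and `touch-action: none` on the seeker keeps a slightly vertical scrub from panning the page at all. The `.seeker` selector is an assumption.

```typescript
// Styles returned as data so the mapping is testable without a DOM.
function overscrollFixStyles(): Record<string, string> {
  return {
    // Stop scroll chaining and pull-to-refresh at scroll boundaries.
    "overscroll-behavior": "none",
    // On the seeker itself: take over all panning, so a slightly
    // vertical drag can't scroll or bounce the page mid-scrub.
    "touch-action": "none",
  };
}

// DOM wiring sketch: page-level overscroll on <body>, touch-action on
// the seeker element (and plausibly the dock, same reasoning).
if (typeof document !== "undefined") {
  const styles = overscrollFixStyles();
  document.body.style.setProperty(
    "overscroll-behavior",
    styles["overscroll-behavior"],
  );
  document
    .querySelector<HTMLElement>(".seeker")
    ?.style.setProperty("touch-action", styles["touch-action"]);
}
```

If `overscroll-behavior` is unsupported on a target browser, the remaining fallback is calling `preventDefault()` on the relevant touch events from a non-passive listener, which is heavier-handed.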

Honestly, that's how it behaves right now, so gotta investigate that. Then, regarding the seeker snapping to markers: it works fine on desktop. On mobile, it's hard to see if it's working or not. So either it's not working at all and we gotta fix that, or, if it is working, the precision needed is simply too high for the, you know, fidelity of a phone. In that case, we gotta increase the magnetic power, the snapping distance to markers, on mobile. I'm not sure by how much, though. And even past that, there's some weird behavior with the seeker on mobile that I'm not able to identify exactly, but sometimes, when I let go of the dragger, it seems to snap back to where I started the drag instead of staying at the position it ended up at, which would be correct relative to the drag. So I'm not sure. Next point: the toggle button for big or small fonts. I just wanna change its design. The highlighting is already working correctly, with the toggle being on when the text is big and off when the text is small. I wanna change the icon. Currently we have a T as the icon. Instead, I want the icon to be two instances of the letter A next to each other, like AA. But I also want it to change when the toggle is off or on. When it's off and the text is small, I want it to be one capital A and one lowercase a, the left one capital and the right one not. When it's big, I want them both to be capital and bold, with a bigger font size. Then, for the Genius highlights icon: it's now just a normal highlighter icon, which of course makes sense, but I wanna replace that, potentially with the Genius logo. For now, we'll just replace it with a G, and the letter G should be bold as well. Small note on the disk that, you know, spins when it's playing and then freezes when we pause.
Currently, whenever you pause, it snaps back to zero rotation, and when you resume, it starts rotating from that zero rotation again. I don't want it to do that. I want it to preserve the rotation value when it's paused. So initially, when you load the song, it can be at zero, but then if I play, you know, one second, it rotates a little bit and then I pause. While it's paused, it should stay at that rotation, and the next time I play, it continues rotating from that point. For the dock, I wanna reorganize it. We have the seeker, which is good, and below we have the row with buttons. I wanna reorganize that row. From left to right, it should be the disk, then the segment anchor, then the three buttons that are not the play/pause button: the rewind button, the loop toggle and the marker toggle. Should they be in that order? Yeah, it should be marker toggle, then loop toggle, then rewind trigger, and then the play/pause button, from left to right. On mobile, I want all of these, so together it's six elements, just spaced evenly in the row of the dock. More space between, like evenly between. I think just space evenly. Or no, I don't know if the dock should keep its default padding or whatever, with the row putting the outer elements all the way at the ends and even spacing in between them. So it puts the outer elements, the first and the last, all the way at the edge, whatever that's called, within the padding of the surrounding container. On desktop, since we have more space, the play button should be centered horizontally in the dock, and otherwise the controls can keep the same order I just gave. Same order on desktop and mobile. Does that make sense? That means we would only be using half the dock width on desktop for the controls. No, that doesn't really make sense.
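The disk-rotation behavior described above suggests storing rotation as state rather than restarting a CSS animation. A small accumulator sketch, assuming some render loop applies `rotate(currentAngle(now))` to the disk each frame:

```typescript
// Keep the disk's rotation as a persistent value: accumulate degrees
// while playing, freeze the angle on pause, resume from that angle.
class DiskRotation {
  private angle = 0; // degrees, persists across pauses
  private playingSince: number | null = null; // ms timestamp, null = paused

  constructor(private degreesPerSecond: number) {}

  play(now: number): void {
    if (this.playingSince === null) this.playingSince = now;
  }

  pause(now: number): void {
    if (this.playingSince !== null) {
      this.angle = this.currentAngle(now); // freeze at the current angle
      this.playingSince = null;
    }
  }

  currentAngle(now: number): number {
    const extra =
      this.playingSince === null
        ? 0
        : ((now - this.playingSince) / 1000) * this.degreesPerSecond;
    return (this.angle + extra) % 360;
  }
}
```

Each frame, the disk element would get `transform: rotate(\u2026deg)` from `currentAngle(performance.now())`; pausing no longer touches the transform, so nothing snaps back to zero.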
But for now, let's just test it, because I don't know what else to do. So I'll do it like that. Then the icon for the rewind button should be changed. Now it's a circular kind of reset icon, which is okay, but it's confusing because it also looks like a loop toggle, so we'd have two circular things. So let's change it to the usual skip-backwards icon, like skip all the way to the start, leftwards. It should be the triangle with the line, or the two triangles, or the two triangles with a line. I think it should be the leftwards one with the two triangles and the line, which to me signifies going all the way to the end, or to the start, I mean. Then, when text is selected on desktop: as specified in the document, pressing the Escape key should deselect the text, and that is not happening. Either it was forgotten by the coding agent, or it's not possible on the front end in a web app. So I'll have to investigate that.
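Escape-to-deselect is possible in a web front end: the standard Selection API provides `removeAllRanges()`. A sketch, with the key check split out as a pure function:

```typescript
// Pure decision: only Escape with an active selection clears it.
function shouldClearSelection(key: string, hasSelection: boolean): boolean {
  return key === "Escape" && hasSelection;
}

if (typeof document !== "undefined") {
  document.addEventListener("keydown", (e) => {
    const sel = window.getSelection();
    if (shouldClearSelection(e.key, !!sel && !sel.isCollapsed)) {
      sel!.removeAllRanges(); // native deselect
    }
  });
}
```

So if the prototype isn't doing this, it was most likely just omitted rather than impossible.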

But it's not working right now. Also, I notice, testing on desktop (I don't know if it's only desktop or both devices), that we have the click-to-seek on the lines and a line is highlighted when hovered, but it's actually possible to click or hover in between the lines, where nothing is highlighted. So there's a small gap between each line where you're not hitting any line. And I don't like that. When you're hovering the text, there shouldn't be any gaps in between, in terms of clickability and the associated hover effect. And that brings up a related issue. This one, I cannot explain why it's happening. It's a weird bug. Normally, if I select text, I also see it in the selection mirror in the sidebar. And when I select some other text, it updates, which is good. But when I click in the empty space in the stage on the left or right, the text gets deselected, which is good, but it does not update the selection mirror: the text is deselected, but it's still showing in the selection mirror. And it's only on another click in an empty area that the selection mirror is updated. And I don't know why that is happening. Again, that could be more of an issue with what we have access to built in; maybe whatever text selection object we're listening to doesn't actually update. But that would sound weird, because the text is clearly being deselected, so I would assume there's an update event that we can detect. So it definitely seems like the selection mirror is a little bit out of sync with what text is actually selected in the stage. And that's weird; it should be perfectly synced, like it should literally just be checking what's selected right now and showing the same thing. It seems like it's maybe trying to dynamically update in a different way, which is off.
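The out-of-sync mirror described above is consistent with sampling the selection inside click handlers, which run independently of when the browser actually updates the selection. Listening to the document-level `selectionchange` event (a standard DOM event that also fires on deselection) keeps the mirror derived from the live selection; `renderMirror` here is an assumed hook, not the real one.

```typescript
// Derive the mirror content from the live selection text; null means
// the mirror should be cleared.
function mirrorContent(selectedText: string): string | null {
  const t = selectedText.trim();
  return t.length > 0 ? t : null;
}

if (typeof document !== "undefined") {
  // Fires on every selection change, including deselection, so the
  // mirror can never lag a click behind the actual selection state.
  document.addEventListener("selectionchange", () => {
    const text = window.getSelection()?.toString() ?? "";
    const content = mirrorContent(text);
    void content; // renderMirror(content) - assumed render hook
  });
}
```

This replaces any per-click sampling, which is the likely source of the one-click-late behavior.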
Also, I've noticed when I select text and I click somewhere else, it didn't deselect it. But if I click again in the empty area of the stage, then one click deselected it, but the selection mirror does not update. Another click in the empty area of the stage updates the selection mirror, so now it's not selected. But if I deselect the text by clicking not in the empty area in the stage, but rather in the empty area in the dock or the sidebar, the first click deselects the text, but the selection mirror hasn't updated yet, so it still shows the same text that was selected, which is the first issue. But then secondly, even on a second click anywhere in the sidebar or in the dock, it doesn't update the selection mirror, so the selected text just stays there. When really it should have been, I mean, just updated on the first click when it deselected the text. But then also it's behaving differently than when I click in the empty area in the stage where then it updates on the second click in the stage. So I don't know what's going on there. Also, I just realized a kind of new feature that I want that wasn't even defined in the docs. But we have the selection mirror. It's gonna, by default, you know, show mirrored text that is selected using like native text selection. But then added the other thing that we can also click genius annotations, genius highlights, and they're not like selected like natively with the text selection, but we say that like focused in our UI. And so then that line is shown in the text selection mirror instead, even though it's not like natively selected with the text selection feature, it's kind of like UI wise, it kind of looks like it's selected in our app, although it's a different selection effect than, you know, the native text selection. 
And so in order to avoid any edge case with people both selecting text and focusing a genius highlight at the same time, what I specified, which is working correctly, is that if text is selected and a genius highlight is pressed, then it deselects the text, which is good. But now I want to add a new thing. This concept of focusing a genius highlight, I also want to have that on normal lyrics lines, which means that when we click to seek, we already have the highlighting and stuff on the lines, you know, the currently active line. I just want to add the concept that that one is also kind of focused, in the same sense as we focus a genius highlight. And before, I said that you click a genius highlight to focus it, and only max one can be focused at a time. So if you click a different one, it focuses the different one and unfocuses the one you had focused. We're going to broaden this concept so that it can apply to not just genius highlights, but any line or genius highlight. But there's still going to be a max of one unit that is focused at a time, which means a normal line can be focused or a genius highlight can be focused. And then selecting any text is going to unfocus the focused line, because this is controlling what's in the selection mirror. So then we want the selected text to be in the selection mirror instead. And as well, if any text is selected and then a line is focused by, you know, it being clicked, that shouldn't deselect any text. So currently parts of that are working correctly, but we don't have the concept of focusing on normal lines. And as I said, there's some issues with the deselection and updating the selection mirror. And then specifically for the edge case where a line has a genius highlight on it, it's kind of different.
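The syncing problems suggest the mirror may be driven by click handlers rather than by the document's `selectionchange` event, which fires on every change, including deselection. Separately, the broadened focus concept can be modeled as a tiny state machine. This is a hedged sketch of the new rules only, with all names hypothetical, not the app's actual code:

```javascript
// Rules from the memo: at most one focused unit (a lyric line or a
// Genius highlight); selecting text unfocuses whatever was focused;
// focusing a unit never clears an existing text selection.
function reduce(state, action) {
  switch (action.type) {
    case 'focus':    // clicking a line or a Genius highlight
      return { ...state, focused: action.unit }; // selection untouched
    case 'select':   // native text selection was made
      return { selection: action.text, focused: null };
    case 'deselect': // click in empty space clears the selection
      return { ...state, selection: null };
    default:
      return state;
  }
}

// The mirror shows the selected text if any, else the focused unit.
function mirror(state) {
  return state.selection ?? (state.focused ? state.focused.text : '');
}
```

Because the mirror is a pure function of this state, deselecting in the stage, the dock, or the sidebar would all update it on the first click, with no second-click lag.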
If we have genius highlights toggled off, then it just appears as a normal line, which means when it's clicked, the line is focused. But if we turn on genius highlights and click this line, now we're not really clicking the line anymore because the genius highlight is covering it and absorbing the click. So we're clicking the genius highlight instead, even though we're clicking on the same line. And now we're focusing the genius highlight instead of just the line, which is kind of similar, like the selection mirror will show the same thing, but it's different because now in the sidebar or the sidebar sheet, now we can show the genius annotation for that line, which we normally don't have if we don't have the genius highlights on, then it's just like you can kind of choose to save it or get any explanation as we have. But specifically when the genius highlights are on and then you click a genius highlight and you focus it, now you can see the genius explanation. So that's a combination here of some debugging of some feature that's not working correctly and also adding slightly some like new, modified functionality. Okay, now I'm going to test the same text selection stuff on mobile to see if it behaves similarly because I was just testing this on desktop. Also, one more issue on desktop actually. When we're selecting text, like normally we have the click to seek on any line, but when we're selecting text, you know, I don't want it to click to seek because we're not really doing that action. We're like dragging to select text and that's currently working correctly because the app is distinguishing between like clicks on a line, which seeks to them, or the drags, which are selecting text or the... Yeah. But I noticed specifically on a line, if I do text selection on desktop still, if I start and end the drag on the same line, then it's counting it as a click and it's focusing the line or it's like clicking the line. It's seeking there, which I don't want. 
And it's also reflected by the hover effect. So when I'm hovering one line, it shows a hover effect for the kind of like click to seek. And then as I'm dragging, the single line shows the hover effect. If I start dragging multiple lines, the hover effect disappears. So it seems like, you know, it's not trying to click that line anymore. But if while I'm dragging, I actually kind of just drag it up again, so I'm back to just dragging on the same line, then it has the hover effect again. And in that case, if I let go, it seeks there. So it seems it kind of behaves like a click on the line if I end up ending my drag on the same line as I started it. And I don't want that to happen. I want it to only be when I like click a line without dragging or selecting text. That's the only time that clicks to seek. Then I want a new feature. No, disregard that new feature thing. We don't need it for this version. OK, now I'm going to go over to checking the text selection on mobile. But first, one note that I noticed in between on desktop where we have the keyboard shortcuts: we have A which toggles, you know, the big or small fonts. But then I tried doing Command A to select all text, which is my default keyboard shortcut that we need to allow. But somehow that one is being blocked by us absorbing the A key. So the A for toggling the lyric size should not be happening if they do Command A; then we should just do normal Command A, which is, you know, selecting all text. And I think we can just leave that up to the native system. We just need to make sure we're not blocking it.
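One way to stop the same-line drag from counting as a click is to decide on pointerup from what actually happened during the gesture rather than from the start and end line. A rough sketch; the function name and pixel threshold are assumptions, not the app's actual code:

```javascript
// A pointerup counts as a click-to-seek only if no text ended up
// selected and the pointer barely moved; ending a drag on the line it
// started on is still a drag.
function isClickToSeek({ downX, downY, upX, upY, selectionEmpty }, slopPx = 5) {
  if (!selectionEmpty) return false; // a selection was made: never seek
  const moved = Math.hypot(upX - downX, upY - downY);
  return moved <= slopPx;            // a genuine click, not a drag
}
```

The hover effect could key off the same check, so dragging back onto the starting line would no longer re-arm the seek.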

Also, I'm pretty sure I specified a keyboard shortcut that was to rewind all the way. We have the R, of course, but I'm pretty sure I said that command left or shift left or something should also rewind all the way to the start, and also the other way to the end. That's not working right now, so let's make sure that's implemented. It should be command and arrow left goes all the way to the start, and command and arrow right goes all the way to the end. Okay, now let's test the text selection on mobile. So on mobile, first thing is the hover effects when clicking a line. It's kind of different because on mobile, you're sometimes kind of tapping and holding on the stage to drag to scroll. And then it's weird because it shows a hover effect on the line you're holding on to drag, even though you never click there, you just drag. So on mobile, there shouldn't be any hover effects for the click to seek or focus line feature. You know, you see a highlight on the active line, that's good. But when doing any form of drag or scroll, I don't want to see any of those hover effects. It's only when tapping a line that it like seeks there. So we don't even need the hover effect in that sense, because then it just seeks there and highlights the active line, so I clearly see it. Secondly, on mobile, if text is selected, then it gets deselected if I tap outside the selection, which is good. But if I tap somewhere in the stage to deselect outside the selection, it both deselects and clicks to seek on the line I press. I don't want that. I want the text deselection to take precedence when I tap anywhere outside of it, even if I tap on top of other lyrics.
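Pulling together the keyboard notes from above: the A toggle must not swallow Command A, Command plus the arrow keys should jump to the start or end, Escape should deselect, and F focuses the active line. A hedged dispatch sketch; action names are hypothetical, and unmatched keys fall through to the browser:

```javascript
// Map a keydown to an app action, or null to let the browser handle it.
// Plain letter shortcuts only fire with no modifier held, so Command A
// still reaches the native select-all.
function shortcutFor({ key, metaKey = false, ctrlKey = false, altKey = false }) {
  if (metaKey || ctrlKey) {
    if (key === 'ArrowLeft') return 'seek-start';  // rewind to the start
    if (key === 'ArrowRight') return 'seek-end';   // jump to the end
    return null; // e.g. Command A: leave select-all to the native system
  }
  if (altKey) return null;
  switch (key) {
    case 'Escape': return 'deselect-text';
    case 'a': case 'A': return 'toggle-lyric-size';
    case 'f': case 'F': return 'focus-active-line';
    case 'r': case 'R': return 'rewind';
    default: return null;
  }
}
```

Checking the modifier flags before the letter cases is what prevents the A handler from ever seeing Command A.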
And it's only like on another subsequent tap that it actually clicks to seek. So essentially when text is selected in the stage and then I click somewhere else in the stage, it shouldn't do any clicking to seek on that first click, it should just deselect the text. It's kind of like I'm dismissing the selection and then now the stage is back to, you know, being able to click to seek. Thirdly or fourthly, I'm not sure, next point on mobile. It seems like the lyric action buttons never light up, they're just permanently disabled. So they're supposed to be normally disabled, but when text is selected, they're supposed to be activated. And they're not right now, which means I cannot perform any lyric actions. And I also see if I try to click them while text is selected, it just deselects the text, which is not good because they're supposed to perform an action with the text. That could be simply because they're disabled and therefore they work as an empty space, which deselects the text. But like ideally, they should be activated and also when clicking them, the text should not be deselected, it should stay selected. But then if no action is performed, but text is deselected, then they should be disabled again. And I'm probably gonna want to even emphasize more their presence with the text while text is selected by either not even having them on the screen at all until text is selected and then they kind of slide in to very visually show that now there's something new. Or maybe in addition to activating them, I'll also literally make them like pulse or like really light up or something. But for now, that's too much. We just want them to be not disabled, just like normal buttons active while text is selected. Following the problem that text is deselected when I press a button, I do notice that it doesn't only happen on those disabled buttons. It also happens if I tap any of the other buttons. 
For example, if I tap the Genius highlights toggle or the text lyric size toggle, or if I press buttons in the dock, I notice the text is deselected as well, which is... Yeah, that's fine actually. It's only specifically if I'm not only disabled, but when we first interact with it, if I click either of the three lyric action buttons, then that should not dismiss the text selection, it should stay exactly as it is because we don't know the user might want to perform more actions, so just keep it selected a little bit longer and they can always deselect it by tapping outside or any of the other actions, down the buttons or in the dock, which will also deselect the text because none of the other features are related to the selected text. Interestingly, regarding the issue I mentioned earlier on desktop, even while doing text selection, if I did it only on one line, like the drag starting and ending on the same line, it would still kind of click to seek that line. That was an issue, like I didn't want it to be like that. Interestingly, that does not happen on mobile, so there's something desktop specific there, but I don't know exactly what it is. Okay, moving on. I cannot test the slide of sheet right now because the lyric action buttons aren't working, which is unfortunate. We're moving on to the segments picker. Also another note, I want to make a keyboard shortcut for opening the segment navigation on desktop. I don't have a keyboard shortcut for that currently. And regarding the keyboard shortcut up down, it scrolls the stage, which is honestly kind of nice. And it might be interacting, yeah, it seems that it's also interacting with, if I use, I'm on Mac, if I use option up and down, it does like a page up or page down. And if I do command up and down, it does like all the way up and down. That's honestly nice. 
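The mobile tap rules above, where the first tap only dismisses the selection and the three lyric action buttons live only while text is selected and don't clear it, can be sketched as one pure handler. All names are hypothetical:

```javascript
// targets: 'line' (a lyric line), 'lyric-action' (one of the three
// lyric action buttons), 'other' (dock buttons, toggles, empty space).
function handleTap(target, state) {
  const hasSelection = Boolean(state.selection);
  if (target === 'lyric-action') {
    if (!hasSelection) return state;       // disabled without a selection
    return { ...state, performed: true };  // act, but keep the selection
  }
  if (hasSelection) {
    return { ...state, selection: null };  // first tap only deselects
  }
  if (target === 'line') {
    return { ...state, seeked: true };     // normal click-to-seek
  }
  return state;
}
```

With this shape, a second tap on a line seeks as usual, and tapping a lyric action button while text is selected performs the action without dismissing the selection.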
But I was just thinking it might feel intuitive that instead of scrolling up or down, pressing up or down actually seeks to the previous line or the next line. And then we could also combine that with the scrolling, because if we seek to it but also center the line on the screen right after, it's gonna kind of feel like a scroll as well, although more jumpy than what we have currently. And we can even just trigger that seek, kind of, because we have the F shortcut for focusing, which is gonna center the active line if the user has scrolled. So we can just, you know, seek to the previous or next line and then trigger the focus action to center it as well. But honestly, I'm not gonna do that for now, because I'm not sure if I want that or the current behavior. So it's just a note, something I might want to do in the future, but maybe not. But one keyboard shortcut that is not working is F, which is supposed to focus the active line, meaning that no matter what the scroll position of the stage currently is, it should scroll so that the active line is centered, or within the min and max scroll limits. That's not working right now, so that one needs to be implemented. Anyways, moving on to the segment navigation. First of all, the segment anchor looks fine, but we're gonna change the icon we have on the right. Currently it's maybe called an arrow or a chevron or something; it's like an angle bracket. I'm gonna refer to it as an arrow right now, but I do like that the icon is not a normal arrow but more of a greater-than sign or an angle bracket or whatever it's called. The best way to describe it is, I guess, a V or a greater-than sign that's been rotated.
But it's pointing the wrong way, so in the normal view, viewing the lyrics, this arrow should point up indicating that clicking it will open a menu up because the dock is at the bottom. And then...

While the menu is open, currently that arrow is pointing down, which makes sense, so we'll leave that the same. And there was a nice rotating animation; let's keep that one, but it's gonna go between up and down instead of sideways and down. Okay, then in the segment navigation, on desktop it shows a hint at the bottom, like press S or press Escape to cancel. But it also shows on mobile, which doesn't make sense because we don't have keyboard shortcuts there, so that should be hidden on mobile, just like any other keyboard shortcut annotation or hint. It seems like all the other ones are done correctly; it's only this one that somehow managed to slip through. Also, regarding the scrolling issue I was mentioning much earlier, since it's a web app: I was talking about how on mobile, if you're at the top or the bottom of the page, or if you drag from the dock, which is in a fixed position, it will overscroll the page at the top or the bottom, which is maybe a native browser behavior that we can't control, I'm not sure, but it's kind of bad for our user experience. I noticed the same thing actually happens on desktop; I'm in Safari on desktop, it's just more subtle. On desktop, you have more area to do the scrolling because the stage is scrollable, so it's not an issue unless you're all the way at the bottom or all the way at the top, where it works similar to any other area. But we have the whole sidebar and the whole dock; we can't scroll in them, but you can still attempt the scrolling action there. And what happens in that case, since the page is not scrollable, is it tries the same overscrolling at the top or the bottom, but on desktop it's only very slight. If possible, that could be disabled, but honestly, it's not a big issue on desktop, in Safari. I'm gonna test it in Chrome as well.
But if it can be disabled, then let's do it. Yeah, I see it's happening the same in Chrome. Also, interestingly, I mentioned that on desktop there was no scrolling indicator inside the stage, which is maybe nice, maybe not. I'm not sure; just a note to be aware of, I might wanna implement that. But I now realize when testing in Chrome that there actually is one, but only on desktop. Okay, so of the four combinations I tested, Safari and Chrome on both desktop and mobile (I'm on an iPhone and a Mac, and I use both browsers on both devices), the only place I see a scroll bar in the stage is Chrome on desktop. And here it's particularly ugly because it doesn't match the design of the page. So ideally, just for unity right now, if possible, we would remove that one in Chrome on desktop. And then we're gonna note that we might wanna implement a scroll bar in the stage where we just show the default one, but honestly, we'll probably wanna style it a little bit ourselves so it could be much more subtle, kind of matching the minimal design of our thing. I'm not sure I wanna do that, but I think there should be one. Okay, back to the segment picker navigation. In the segment picker, I was testing in Safari on my phone. I would expect the same overscrolling issue that we have in the normal view, where either we're at the top of the stage or we swipe down from the dock and it tries to swipe the whole page down to reload, which is a feature I don't really want; I wanna remove it if possible. I noticed something interesting in the segment picker. If there's a lot of segments, it's scrollable, similar to the lyrics in the stage.
But sometimes, even when I was at the top, it got rid of that issue of the whole page scrolling down to reload, because instead it would just take the segment list, even though it was already scrolled to the top, and kind of overscroll that one instead and bounce it back, just drag a little bit extra down, not too much, kind of subtle, and then bounce it back. That's much better than the whole page going down, because it very clearly shows we're at the top of the list, and it absorbs the scroll within that stage container so that it doesn't try to scroll the whole page down and reload it. It happened a few times, but then kind of stopped happening, and now it's just scrolling the whole page instead; it seems literally kind of random when I do it on my phone in Safari. But maybe this is a hint at a trick that actually works, where we just make sure this stage container where you do the main scrolling, even if it's at the complete top or complete bottom, has a slight overscroll-and-bounce-back effect inside it, and always absorbs the scroll instead of letting it escape out to the page. We would still have the issue on mobile where scrolling from the dock scrolls the whole page, which is a problem. I don't know how we can solve it; I need to investigate it. But that would at least make it better when we're inside the stage, which is the main area we're gonna scroll. It's still gonna be a slight issue. The main thing is when we're dragging to seek on mobile, it's very natural to slide a little bit up or down as well, since we're imprecise, and that triggers this vertical scrolling from the dock, which I wish we didn't have. So I need to find some way to disable that, to be honest.
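The absorb-the-scroll-inside-the-container behavior described here is roughly what the CSS `overscroll-behavior` property controls. A sketch, assuming the scrollable lyrics container has a class like `.stage` (a hypothetical name); browser support varies, and Safari only gained this property in relatively recent versions:

```css
/* Absorb overscroll inside the stage instead of letting it chain to
   the page; suppress page-level rubber-banding and pull-to-refresh. */
.stage {
  overflow-y: auto;
  overscroll-behavior-y: contain;
}
html, body {
  overscroll-behavior-y: none;
}
/* Hide the default scroll bar that shows only in Chrome on desktop. */
.stage::-webkit-scrollbar {
  display: none;
}
```

This wouldn't cover drags that start on the fixed dock; those likely need `touch-action` rules or a `preventDefault` on the dock's touchmove, which is worth the investigation the memo mentions.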
Which this trick specifically on the stage is not going to solve, but it's at least a hint at what we can do. Otherwise, the segment navigation is working great. Although I do notice we're only showing one line and then a cutoff of the lyrics, so let's change that: let's show three lines of the lyrics in the segment navigation for each segment. That means they're going to take more vertical space overall, but that's fine because it's scrollable. Okay, I think that's got to be about everything. There's also another thing which I don't get to test right now, which is that I specified, you know, in the sidebar or the slide-up sheet, it's normally not scrollable, but the annotation might be so long that it's gonna overflow. And in that case, I want it to have a scroll, not on the whole sidebar container or the whole slide-up sheet, but just on the container of the annotation. On the slide-up sheet, I might actually want to have it on the whole sheet, but on desktop at least only on the container of the explanation, because I want the buttons above and the selection mirror above and stuff to still be visible. So on mobile, I guess that's undecided, but it would be fine if it's the same. But I don't get to test it right now because, as I said, on mobile we don't get the slide-up sheet since the lyric action buttons don't work. And on desktop, we just have the Lorem Ipsum explanations; they're not always the same length, but they're never too long, so I don't get to see how it would look if it was overflowing. Actually, I just resized my browser window on my desktop and I do see that it is working, which is cool, but I want to change the scroll behavior because it's a little bit annoying right now. So I'm now looking specifically at the Genius annotation. Okay, so it is scrolling, it's just looking slightly ugly. So we're gonna change it. Yeah, I see.
So now there's a container which either contains an AI explanation card or a Genius explanation card. And then around the card is like an invisible container and then the whole card scrolls inside the container, meaning that the card gets cut off, but it looks off because the card has like rounded edges and background color, which then gets cut off by the wrapping container when it scrolls. Instead, I want to keep the card the same, but scroll the content inside the card. And specifically, yeah, both the title, which is like AI explanation and Genius annotation, and the actual annotation. Yeah, so kind of keep the card there the same. Yeah. Let's scroll inside the card, which also means then for the sizing of the card currently, if it's going out of the screen, the whole card itself goes out of the screen and is cut off. Or it goes like down into the keyboard shortcuts that we have below. It goes out of this container and is cut off. I don't want that. I want the card to go down to the bottom of the space that it has. Like, you know, it shrinks to its contents, but if the contents are too big, it will grow to the maximum space that it's allowed with still, you know, the padding around or whatever. And so that you also see the rounded corners at the bottom. It's not cut off, but then the content inside might be cut off, you know, because you'll see the text hits the bottom of the card and it's still cut off. And then in that case, we should be able to scroll inside the card, scroll the content down. While the surrounding card container with the background color and rounded corners and stuff stay the same size and position. Also on desktop in the sidebar.
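Scrolling the content inside the card while the rounded container stays put is a standard flex pattern. A sketch with assumed class names (`.annotation-card`, `.card-body` are hypothetical, not the app's actual markup):

```css
/* The card shrinks to its content but never exceeds the space the
   sidebar gives it; only the inner body scrolls, so the rounded
   corners and background are never cut off. */
.annotation-card {
  display: flex;
  flex-direction: column;
  max-height: 100%;
  border-radius: 12px;
  overflow: hidden;
}
.annotation-card .card-body {
  overflow-y: auto;
  min-height: 0; /* allow the flex child to actually shrink */
}
```

The `min-height: 0` is the usual gotcha: without it, a flex child refuses to shrink below its content size, and the card overflows its container instead of scrolling internally.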

Since we don't have an overview of all the keyboard shortcuts in the dock and stuff, I chose to have a few hints in the sidebar. It says like space, play pause, G highlights, A size. Let's just remove that, so that the sidebar has, you know, the appearance toggles and the selection mirror and the lyric action buttons, and then comes the annotation card. That annotation card should have all the space down to the bottom of the screen, with reasonable padding around. So we're not gonna have any of the keyboard shortcut hints below there, because it doesn't make sense to show some but not all. We're gonna keep the other ones we have, like in the navigation menu, we have some hints, those are fine. And in the segment navigation on desktop, we have one; that's fine as well. But let's remove those in the sidebar, because if we were going to have those, I'd want them for all the other dock actions too, like rewinding and marking and looping and toggling and stuff. And we don't, so let's remove that. I mean, we still have the tooltips, which is really the most important thing, like the hover tooltips. Speaking of the hover tooltips, I noticed they don't work in the dock. So let's have that. They were great in the sidebar on the appearance toggles and the lyric action buttons. They should also work in the dock on the play pause button, the rewind button, the loop toggle, the marker toggle, the segment anchor and the disk. None of those have it right now and they should have it. Also, the disk should have a hover effect. I don't know exactly how it would look, because normally the buttons get highlighted, but with the disk, I think it should just expand slightly while it's being hovered. And we already have the pointer hover effect, like the pointer changes and that works; it's just that the disk itself looks exactly the same.
Just have it go like slightly showing it's clickable. Yep, that's everything for this note.
d6a7cfe47372a19d91c0d7310ea2fe1f92e13448b7acb2fb4bd1e75466537831_0cfe61402e99.m4a
Thursday, February 5, 2026
5:41 PM ยท 17:00
Essence

This memo details a conversation about fitness routines, preferred gyms, and an invitation to a group fitness class, concluding with a casual invitation to play billiards.

Summary

The speaker discusses their active lifestyle, which includes nearly daily yoga and five weekly weight sessions, supplemented by surfing, snowboarding, paddle, and dancing for cardio. They express a preference for a newly discovered, spacious gym over a less appealing one that feels "low down." The conversation then shifts to group fitness, specifically "high rock sessions," with an invitation extended to a friend, noting the classes book up a week in advance. The speaker also mentions getting a fake ID for a younger friend, Albert, to play billiards, and extends an open invitation to the listener to join them for billiards later that evening.

View full transcript
What kind of training are you doing, man? I do yoga nearly every day. And then weights. I do like five yoga sessions a week. I do like five weight sessions a week, like around five. And you like cardio? You got your surfing and snowboarding and stuff. I do yoga nearly every day. I'm not doing really much cardio. I play a bit of paddle. The yoga is quite good cardio. But you got your active lifestyle. Yeah. Yeah, definitely. I dance. Oh, shit. Yeah. That's actually really good cardio. Well, the best cardio is the one that just feels like play. Exactly. Yeah. Nice. Well, and then also, like, I have the subs membership and like all the gyms are nice, but since I can't choose, I go to the, like, nicest one of the nice ones. So I just discovered this one recently. I didn't think it was so big and nice, actually. This is one of the best ones. Yeah, definitely. Store up. Yeah. That one's not so practical for me to go to. I used to go to Colosseum. Yeah. I don't love that one. What's worse about that one? I just, you just, you feel, you're so low down, you know? I never thought about it. Yeah, yeah. I love, they have the BS gym hall for some of the group sessions. I do the, like, high rock sessions there. Very fun. I would love to try that one day. I should get back into, like... Well, let me invite you to one of them. Yeah, I'd love to come. They just, they're fully booked very early, so you got to do one week into the future. But I'll suggest you one and then you can let me know if it fits your schedule. I'll wait for you out there. Yeah, Julia is the best. I work with her. She's the best. Bro, I was there. I've never been there before. My friend suggested me. I was there two days ago. Yeah, they have a really nice little bar as well. I'll send you the stuff. Well, you gotta send me the photos and stuff. Are you Albertite? Bro, I'm going to the time table. Are you the same age as Albert? Wait, you didn't text me. No, I'm quite a bit older than you. How old are you? 23. I see. Nice. Yeah, because at the moment, Albert can't come out and, like, play billiards and shit with me. I've got him a fake ID. I took him to use Decathlon, which has been funny. But yeah, I'm gonna go and play billiards with a friend later. So if you do get some energy, come by. Well, it was amazing meeting you. Yeah, you too. And we'll see you soon. Sounds good, brother. Have a good one. Have a great night.
63325d277208e5e46dd2fab681a4a53f5191ab3f81e38a91e9d8ca24671b93b6_d20aaa0486eb.m4a
Wednesday, February 4, 2026
12:23 AM ยท 24:30
Essence

The day was a frustrating cycle of perceived progress and regression on an app development project, compounded by AI tool limitations and a desire to balance planning with building, alongside personal reflections on social interactions, fitness, and the surprising return of a past habit influenced by environment.

Summary

The speaker spent the day engrossed in app development, feeling a frustrating mix of progress and regression. They're stuck in the planning phase, using notes and AI chat to define the app without visual designs or code, fearing they're over-planning instead of building. Attempts at behavioral prototypes with Vercel's v0 were underwhelming, missing features and struggling with context management. A major setback involved Gemini's canvas feature, which silently removed existing information when new details were added, turning their carefully built definitions into a "smudged mess." This has forced them back to manual note-taking, with the challenge of synthesizing information from the now-too-long Gemini chat. They're debating whether to continue with detailed text definitions or jump directly into coding with V0, fearing the latter might lead to messy code. The speaker is also grappling with structuring their text definitions, considering single versus multiple documents and managing information repetition. Despite the scattered and voluminous information, they believe they have a clear mental picture of the app and that an AI could synthesize it all if context windows weren't an issue. Later, the speaker met a friend, Albert, noting their vastly different worldviews, particularly Albert's skepticism towards official explanations and governments, which sparked reflections on distinguishing truth from misinformation. They then attended a high-ropes bootcamp session, where they initially felt isolated but managed to connect with an outgoing participant, planning to train together next week to expand their social circle at the gym. The day's meals included oats, bread with butter and eggs, and leftover reindeer stew, supplemented by chocolate after the intense workout. Finally, the speaker reflected on masturbation, a habit they'd largely stopped but resumed twice today. 
They don't feel shame or see it as a problem now, unlike in their youth, but are considering stricter rules to encourage real-life intimacy. They attribute the resurgence to being back home and alone, highlighting how environment significantly influences their thoughts and cravings. They also plan to adjust their sleep schedule to match others in the house, feeling it's "wrong in principle" to wake up later.

View full transcript
Today again went very fast. I was working on the app essentially the whole day, and it feels like a very, like, ever-consuming task in that I, in a sense, made progress, but in a sense also didn't make progress at all, or even regressed, which is annoying because I'm still stuck in working in my notes and AI chat trying to define the app without really having any visual design documents or any more, like, coded app. And so it's frustrating that I'm still in this stage and that I'm not, you know, making something yet. And I'm afraid I'm spending too much time, you know, in my head, just, like, planning instead of really building. At the same time, I feel like I'm constantly, you know, actually working to finish, you know, to move on, but I'm just, like, I just find myself not getting out of this state. Anyways, today I did actually try making some behavioral prototypes. There's some, like, front-end iterations, essentially, and, I mean, it did somewhat work, but much less than what I had expected. A lot of features got missed out and didn't work as well as they were supposed to. And so I essentially, for the tool I used, the context wasn't managed well enough. I used the v0 by Vercel because my understanding is for a project, it's very good to use that, maybe not to develop the full thing, but to develop the front-end to get a nice design. I'm still not sure if I should use Figma or not for, like, even more pure design before moving into, like, web app implementation. I think it does make sense to make a more proper and nice design and thought out, but also it's a step I can probably skip to not, you know, bloat the process too much. I was working a long time in this Gemini chat for my app, and it does handle the context pretty well of me, like, asking follow-up questions, and it can remember stuff. 
But then I was making, like, the app definitions separated by concern in these canvas documents in Gemini, and that's where a big problem started, where after a while, I was asking it to make edits to existing canvases, and it would, it seems to have a very natural, like, max length that it wants to put on the canvases, what I've now learned in hindsight. Like, it really wants to keep them a certain length. And so when I ask it to add stuff, it usually will also go and remove other stuff from the canvas without really mentioning it. And that's horrible because I thought I was building these documents that were growing and I was refining them over time. But really, the quality of the document was just staying kind of the same or it was even slowly deteriorating because new details were added, but important existing information was removed. I didn't catch that until a while later. And so now it's all a mess, all those smudged canvases, and it's all a mess. So now I'm going back to my notes and I'm defining it there. I'm still going to use AI to try to synthesize the information from my previous notes and from the whole Gemini conversation, which is now too long, so it can't really do it, is the thing. So I'm going to have to literally maybe scroll up and copy the, like, yeah, sub-segments of the chat sequence and then paste that into Gemini or another LLM to extract useful information. Hopefully, I can maybe export the whole chat or something. But then we also have the canvases and the different versions of them, and there's some information in the previous versions. So then I have to, on each canvas, I can click back to see previous versions, but it's all just kind of a mess, and it's going to have to be a little bit of a manual job. So it's frustrating. 
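The manual salvage job described here — copying sub-segments of the over-long chat and pasting each into an LLM to extract the useful information — is essentially a chunking step, and could be sketched as follows. This is a hypothetical illustration, not part of the speaker's actual tooling; the rough 4-characters-per-token estimate and the chunk sizes are assumptions.

```python
# Sketch of the "copy sub-segments and summarize each" workflow, assuming
# the chat history can be exported as one plain-text file. The token
# estimate is a crude heuristic, not a real tokenizer.

def chunk_text(text: str, max_tokens: int = 8000, overlap_tokens: int = 500) -> list[str]:
    """Split text into chunks under a rough token budget, overlapping
    slightly so context isn't lost at chunk boundaries."""
    chars_per_token = 4  # crude heuristic (~4 chars per token for English)
    max_chars = max_tokens * chars_per_token
    overlap_chars = overlap_tokens * chars_per_token
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap_chars  # step back so chunks overlap
    return chunks
```

Each chunk would then be pasted (or sent via an API) into a model with a prompt like "extract the app requirements discussed in this excerpt", and the per-chunk extracts merged in a final pass.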
And then the question is, should I even do all this or just try to more, like, build it directly in V0, and then it's going to be wrong and miss a lot of stuff, but just, like, use the chat to keep iterating on it and iterating on it, and then use that code as the source of truth. The reason I'm avoiding that is because I'm afraid the code will be messy very quickly because you keep, like, changing stuff and adding on stuff where some of the, like, structures should be thought out beforehand and therefore, like, known about from when you initially start coding. So I'm afraid it's going to introduce a lot of spaghetti by kind of iterating on the project in a kind of coding platform, and that's why I'm trying this time to do it very much in just text documents before actually coding. Because previous projects I've tried just iterating, you know, building the thing and then iterating it, building it instantly and then iterating on it through cursor or lovable and stuff. And that, you know, brings its own set of problems. So that's why I'm trying this approach now. I'm also having my frustrations with it. So this whole process, I mean, there's a lot of learning in it, but I'm also trying to figure out with, like, trying to define the app at the current state in only, like, text documents, what's a good structure? Should I just have one document or should I split it into multiple documents? And if so, how do I, what should those documents be, like, logically, semantically, what's each of the documents? What's the separation? What's the connection? Should information, how much should information be repeated between the documents or kept to only a single one? Because especially as I'm building it, you know, ideas change and then it's hard to remember to update all references and information. But now I am starting to form more and more of a clear idea in my head of exactly what it is. So things are not going to change as much. 
So it's easy for me to define it, even across documents. But also at the same time, since I know a lot of the details now, then it just leads my thinking into, like, smaller, more subtle details and then suddenly it's the same problem of doing new thinking on those and that, like, changing over time. But I guess, I mean, you do kind of reach a point where you don't need to specify the small detail, even smaller details anymore, where you can just leave that up to the AI coding engine to do whatever. I think I have at this point a very good mental picture of the app in my head. And I think through everything I've written down so far in my AI conversation, so I actually have also gotten everything out of my head. Like, everything has been, all the information has been converted into, like, digital text. Like, the information is available digitally. The problem is just that it's too much and too scattered and has changed over time. And so there's a chronology aspect to when it came out of my brain. But I think theoretically, if I could get, you know, AI to kind of scan all that information and synthesize it, it should be, like, everything sufficient to, like, make really proper definitions and just, like, build the app. But I'm not sure how I could do that automatically right now, given that information is, there's a lot, so, you know, it falls out of the context windows of any of these models. You know, I have this one Gemini conversation that I've literally been working on for hours on working sessions and then over multiple days, so, like, so many fucking hours total. And I cannot just, like, pass the whole thing into any LLM. I mean, I would think I can't, like, I've heard that these LLMs have, like, context windows that can cover entire books and stuff. I haven't really looked into that, but then I would definitely assume, like, it could also handle even the whole conversation in, like, one inside a context window. 
But I feel like in practice, whenever I use them, they absolutely don't do that. So, but maybe that's more of a thing of the implementation of these, like, ChatGPT and Gemini apps. I'm not sure I should look into that. So the information right now is just kind of scattered and, like, too much dumped to where I don't know how to get an LLM to handle all the context correctly. And therefore I need to, like, break it down into chunks or do it more manually. And manually review much more thoroughly. Anyways, that's enough yapping about the project. I'm going to continue working on it tomorrow. I don't know exactly what I'm going to do. I'm just trying to fucking get it, you know, into at least the point where I have design files and make the behavioral prototype, like, the front-end. But I just keep feeling like I just have to keep working on the textual definition. I just, it just, the task kind of just expands and expands and expands in terms of, like, time consumption more than I had initially expected. Anyways, yeah, I worked on that for a while at home. Had breakfast, went to the city to meet my friend Albert again, who I also met in the gym yesterday. And then we worked all just a little bit together so we could talk a little bit and also just work independently. He believes so many, like, all the fucking conspiracy theories. He's so skeptical to just, like, the official explanation of stuff and governments and authority and everything. So it's crazy how we have such different worldviews. But, I mean, we, I find it so interesting because I was more open to conspiracy theories before. And then I kind of went away from it, so now I'm really not anymore. It's really fucking interesting because I don't think either of us is going to convince the other person, like, of anything. It's crazy how, like, humans get so fixed in a worldview and how we can have such

In this day and age, there's just like so much seemingly infinite information available, but it's not all correct information, and like, how do you fucking distinguish between it? How do you like detect bullshit? How do you protect yourself from misinformation? These are fucking hard problems, and it's interesting. Yeah, anyways. Then for my workout, I did a group session at Sots, the high ropes bootcamp session, the same one I did last week. And so that was great. I did actually talk to someone there today. I wanted to go with a guy that I met there two weeks ago, who I like talked with on Instagram and stuff, but he's sick, so he didn't come today. And when I came in initially, I didn't feel the guts this time or the kind of natural occasion to go and talk to him or someone, so I just like warmed up alone, did the whole session alone. But at the end, this one guy who I actually wanted to talk to earlier because he seemed kind of outgoing and I've seen him there before, and he's very sporty and like fit. I saw him during the session. He was like extra loud and kind of like shouting and encouraging people, even though he wasn't a coach. He was just one of the participants. And then at the end, he went around like high-fiving a lot of people or maybe everyone or like a lot of people, including me. So I took that chance to just speak to him a little bit, and that was cool. And I told him, he was like, oh, I wasn't really feeling it today. So I was like, hey, next time, you know, let's run together. So now when I go to the same session next week, which fuck, I forgot to book it, I forgot to book it, so I got to remember that. When I go to the same one next week, now I can say hi to him when I see him there, and then we can kind of get to know each other. So I kind of initiated a new contact there, and he's more like, he's been there many times and I've seen him socializing there, so he's a good lever for me to also get to know a lot of other people there.
It's gonna bring me into like right vibe. So that's cool. And then I went home. Just ate, and yeah. Food today, breakfast, well, I had kind of breakfast and lunch combined cause I ate everything before I went to work in the city, and that was a long time before the gym session because I knew I wanted to like not eat. It was just just practically more simple, and also I knew I don't want to eat like close to the thing since it's a lot of running and stuff. So I had oats with milk for breakfast, and then slash lunch at a similar time, also bread with butter and eggs. Like hard-boiled and sliced eggs. And yeah, after the session, came home, had some chocolate, which was the reasoning I used as that, you know, it's good to fill up with some sugar after a super like intense session like that, super hard session, just refill, give the calories and buy some fuel. And then I also had some proper dinner. It was the, what was left of the reindeer stew that my mom made again, and with like reindeer meat and vegetables and some cream, I think. And then that was not too much food, so I added on some more, just like bread with butter and eggs and cheese. And then I did watch porn and masturbate today, twice actually, which is interesting because I haven't done it in a long time now. I haven't done it for the like month that I was living in a different place. And in general, it's something that I did a lot when I was younger and then quit for a while. Like I quit it for like many months. And then picked it up a little bit again and like comes and goes. But I feel like when I was younger, it was like an addiction, but now I don't feel like it's a problem. And so when I did it now, I don't carry like shame around it in the same way I did when I was younger. It was just kind of like, fine, I don't know, I don't think about it too much. I think it's just fine, like a small treat or whatever, like a fun thing to do. 
I'm still like uncertain if I should feel guilty about it or not, but I don't really do, I don't really feel guilty about it. I might still decide to enforce a stricter rule, like not doing it to force myself to get more into like, you know, real sex with real people instead of just fucking masturbating. But I haven't decided on anything like that for now, so yeah. I just wanted to know that I did it now. It's been a long time since the last time. And the trigger was kind of that I was back home in this house, and also that I was suddenly home alone in the middle of the day. And that just seemed like the perfect opportunity. It was something I haven't even thought about for like weeks, really. And then suddenly it just hit my mind and it was like the perfect thing. And that just proves how much the environment like set an effect because that like thought, like it doesn't really come subconsciously, like it doesn't hit my mind when I'm in a different environment, but now like back home when I was traveling or when I was living in this one place, but now back home here, that's where I grown up, like it kind of hits. So that was the same now when going to bed. So I was in the middle of the day and then now when going to bed after finishing everything, it kind of hit me again, like everyone else was sleeping, like, hey, I could do it again. And so I was like, yeah, fuck it, let me just do it again. And I don't see it as a problem, but it's definitely interesting, like, because it shows how environment, my environment affects my literally, my thought patterns, not like, I would say how it affects my subconscious, my subconscious thinking, but I'm not sure if it's the correct terminology, but essentially it kind of affects how I think and feel and act to a certain extent, like which thoughts are suddenly gonna appear in my conscious mind or when I suddenly feel a craving to do, like what feelings I get a word, cravings I get. 
And also I'm guessing that for also, I haven't noticed as explicitly, but it probably does also like how generally kind of maybe happy or sad or confident or self-conscious I'm feeling. Yeah, anyways, that's the notes for today. Yeah, final thing, I think I'm gonna wanna adjust my sleep schedule. I feel like when I'm living in this house, people usually getting up like around six, seven a.m., so I think I should just start doing the same thing, because it feels lazy to wake up after everyone else left. Even though it's nice and peaceful and stuff, it just feels wrong in principle. I think I should like match them. Also then you can kind of have like the morning coffee together and potentially small talk if you want to. I don't need to. And then it's of course more of a mess, like fighting for the bathroom and stuff, but it just like in principle, it feels wrong to be waking up after everyone else. I should be like waking up at the same time or before. It feels like lazy, even though I know it doesn't matter, like you still have the same hours in the day, you can still be just as productive or whatever. It just feels lazy and I cannot, you know, I need to be like a high, high achieving, is a feel, is a, fuck, now I'm stumbling over my words, you know. I say that obviously to announce how you're gonna say, you know, that's not objective, that's so fucking subjective, but yeah, that's true, like I do feel like I need to be achieving more, so. And like being more proper, and so a small part of that is adjusting the sleep schedule, so I'm like waking up early like the other people in the house, my family, yeah. So I will maybe do that completely. I haven't decided yet, but I might just start setting an alarm earlier and then I know my sleep schedule will adjust pretty quickly after that. But if I don't do that, then it will probably not happen by itself, so we'll see if I'm gonna do that soon.
Wednesday, January 28, 2026
9:25 AM ยท 13:04
Essence

The speaker is using a walk to mentally prepare for a work session on their app, focusing on validating external tool dependencies, refining feature sets, and designing user experience elements like lyric editing and scoring.

Summary

While on a 15-minute walk, the speaker is using voice recording to organize their thoughts for an app development session. A primary focus is on validating external tools and services the app depends on, ensuring they provide the necessary functionality, as this is a critical, un-investigated area. They also want to finalize the app's features, critically evaluating each one to ensure it adds value without unnecessary complexity, using their own past experiences as a user to guide decisions. The speaker delves into specific feature considerations, such as the ability to edit lyrics, including the complexities of editing timestamped lyrics and the potential UI challenges. They also ponder a scoring system for songs, questioning its implementation given different user interaction modes (typing vs. performance) and the potential for a manual score adjustment. They consider ditching a song-scoring feature in favor of a typing test score and user-marked song completion. Finally, they touch upon incorporating AI explanations for words/sentences and scraping lyric explanations from Genius, emphasizing the need for a smooth and intuitive UI for these features, and planning the overall app layout and user flow within a song.

View full transcript
I'm walking now, doing my work session on the app, and it's about a 15-minute walk. So I figured again while walking, I'll just try this approach where I'll just dump thoughts on a voice recording because I think it helps me to focus my mind and my thoughts on this app, what needs to be done when I'm talking compared if I'm just trying to think in my head, thoughts usually drift elsewhere. I'll try to mentally prepare myself for what I'm going to do today. I want to definitely fire off some deep research prompts to criticize my idea or criticize my implementation plan, or maybe not criticize, but to validate it, whether it's actually going to work, whether these services actually provide this functionality because I'm dependent on external tools and companies and I'm not familiar with them, I don't know exactly what they provide or not. And I have asked about it and looked a little bit up the API docs, but I have not investigated it closely. So it's a very important thing I should do. And then it's all the other features of the app because this is like for the desire sets, everything that relies on the external data sources. I've got to decide on what I'm going to use there. I just need to double-check that the plan is actually sound. Then we have all the other stuff in the app. I have decided on exactly which features I want and which I don't, although I could still change my mind in this state and the state because it's not, I'm not completely certain of all of them, that it's a good idea to have them, whether they all provide value or are maybe unnecessary. Every new feature is going to add more complexity to the UI, but I mean, I do think I'm happy with the exact features that I've defined. There is a reason for each of them why I put it there. Then I'm trying to criticize it with the razor, the requirement of have I used or been looking for that feature when I was practicing my reps myself? 
And I realized that, for example, editing the lyrics, well, I've never used that feature because like I didn't have it, but I was kind of looking for it at a few times, which I think would be useful. And then we have the typing test again. I've never really been looking for that, but I've been doing kind of a version in the notes. So this would actually be better. We have the marking whenever you make a mistake without stopping the flow. I actually kind of was looking for that, but I never had an alternative to it. So that one is good. What else? Well, within editing the lyrics, the question is like, should I be able to just edit the full lyrics and the way it's spaced out in the line breaks and the words? Or also the timestamp lyrics. Because editing the timestamp lyrics, that's something I never thought about because I never had timestamp lyrics that much. I guess I had it in Spotify and stuff. Yeah, I would want to be able to edit both. And I got to think a little bit about how that's going to work because I'm not sure if those are the same text block or if they are two different separate ones. I'm hoping that it is just one text block, but it's formatted the way where every new line starts with the brackets with a timestamp. And so in the timestamp view, it shows that source text block, but it puts them at correct time and hides the time square brackets. And then the full text lyrics, it does the same. It hides the square text brackets and it just ignores them, but it just keeps the new lines and shows all the lyrics as a block. I'm guessing that's what's happening, which means I could edit them in kind of either view and the edits would be synchronized as a single source of truth. Then I got to think about how I want my editing UI to look. Should it be edited kind of in the UI of the time sync lyrics or in the UI of the full ones or in the UI that's even more raw where you see like the raw text, which is the lyrics and also the time brackets.
Because then you get more power with your editing, then the format can be confusing and then you're able to mess up the format while editing to create an incorrect file. So it seems like maybe not a good idea. So I got to do some thinking about that. But I still want the feature though. Is there any feature I would be still considering that I'll put on the list, but I actually would consider maybe ditching. The other thing where it gives you like a score on the song, how well you know it, that's something I never used or even looked forward to the list. So that one isn't important, but at the same time, it would be quite easy to implement and I think it would just be a little bit of a fun gamifying feature. There would of course then be thought into exactly how it should be implemented. But again, like the exact mechanics of how it does the calculation can be changed later without having to worry about it affecting the UI, so it's fine. But yeah, I guess there's a question, you know, it's easiest when they're gonna do the typing test type out the lyrics and it's easy to calculate a score based on how many correct or incorrect they get. But then what if they just try to perform it and they do the thing where they're gonna click space if they make a mistake? Then there's like a question, if they play the song and they never click space to mark a mistake, does that mean they did it perfectly? Well, it could mean that, or it could mean that they just didn't bother. So I wouldn't be sure if I could calculate the score from that unless I like very clearly in the UI distinguished like just casual playback versus I like this as a performance test playback and you have to promise to press it if you make a mistake. I could only evaluate the score from that mode where they're like, they promise to mark it if they make a mistake. That just seems kind of like, like a lot, you know? Like unnecessary enforcement of things. 
But then if I don't do that, then is then the only way to improve your score is going to be to do the typing test? Because that's gonna be kind of annoying. If I know the song that I perform here, then that's the thing I like and then I'll feel like I have to do another typing test just to prove it for the score, get my score up, but then it's annoying because I don't care about that. I already know the song. So I guess then I could have a manual way to set your score or increase your score, but then that's gonna hacky. I guess you can mark it yourself if you think you know it. I guess, yeah. Instead of showing a score on the song overall, we could just show a score on the typing test. That we can do for sure. And we could let the user just mark when they, in their opinion, have like completed a song or learned a song. That could make sense. Make it more simple, more free for them. Then there was a question of like, you know, space repetition, whether the app should kind of encourage or enforce them to repeat the song then after, you know, one day or three days or something. But I'm not doing that in version one, so don't need to worry about that. That's something I think I need to do, but I can do that when I sit down to work. Now I'm just walking, so I'm just trying to scout the landscape, try to get an overview. So I need to think about the score functionality if I want it and if so, how I want it to work. Then we have the lyrics lore, lyrics explanations. I guess that was a little bit of a question. But I think I decided on, yes, if possible, I want to scrape it from Genius and show it in that. I got to think about how I want that to interact with the UI. I was thinking to let the user also define their own explanations, but I decided that we don't need that for version one because I've never wanted to do that and if I do it, I just do it in my head. I don't need to write it down. 
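The simplified scoring the speaker lands on — score only the typing test, and let the user manually mark a song as learned — could look something like this minimal sketch. The position-by-position word comparison is an assumption about how "correct or incorrect" would be counted; a real implementation might prefer an edit-distance alignment so one skipped word doesn't cascade into marking everything after it wrong.

```python
def typing_test_score(expected: str, typed: str) -> float:
    """Return a 0-100 score: fraction of lyric words typed correctly,
    compared position by position (deliberately simple; hypothetical)."""
    expected_words = expected.split()
    typed_words = typed.split()
    if not expected_words:
        return 100.0  # nothing to type counts as a perfect run
    correct = sum(
        1 for e, t in zip(expected_words, typed_words) if e.lower() == t.lower()
    )
    return round(100 * correct / len(expected_words), 1)
```

The song-level "do I know it" flag then stays a plain boolean the user toggles themselves, decoupled from this number.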
Then we have the feature of getting a word or a sentence or something explained by AI as well. I definitely want that. So I got to think about how to implement that. I think the implementation is actually pretty easy. I just got to think about how I want it to work in the UI. It should be smooth, intuitive, not confusing, responsive. Okay, what else, what else? Is there any more like deep research I want to do? I really don't think so. I got to do some more thinking about the app layout overall and specifically when you're within a song. What's the layout? What's kind of the modes or the tabs or the views? Given the features, like how much are they together on one screen or kind of separated out on multiple screens? Kind of conceptually. Technically encoded doesn't matter, but like how does it feel to the user? Uh-huh.
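The single-source-of-truth guess in this memo — one text block where every line starts with a bracketed timestamp, from which both the time-synced view and the full-lyrics view are derived — matches the common LRC lyrics format. A minimal sketch, assuming that format holds (the speaker has not yet verified it):

```python
import re

# Assumes lyrics are stored as one LRC-style block where each line starts
# with a [mm:ss.xx] timestamp. Both views derive from this single block,
# so edits in either view can be written back to the same source of truth.

LRC_LINE = re.compile(r"^\[(\d+):(\d+(?:\.\d+)?)\](.*)$")

def parse_lrc(block: str) -> list[tuple[float, str]]:
    """Parse an LRC block into (seconds, text) pairs for the synced view."""
    lines = []
    for raw in block.splitlines():
        m = LRC_LINE.match(raw.strip())
        if m:
            minutes, seconds, text = m.groups()
            lines.append((int(minutes) * 60 + float(seconds), text.strip()))
    return lines

def full_lyrics(block: str) -> str:
    """Strip the timestamp brackets, keeping line breaks, for the full view."""
    return "\n".join(text for _, text in parse_lrc(block))
```

Under this assumption the "raw" editing view is just the block itself, and the two friendlier views are read-only projections plus a write-back step.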
Tuesday, January 27, 2026
8:55 PM ยท 21:59
Essence

This memo formalizes the speaker's thoughts on key technical decisions for their app, particularly regarding music streaming, lyric sourcing, and legal considerations, after a day of intensive research and AI-assisted exploration.

Summary

The speaker used this voice memo to formalize thoughts after a five-hour working session, primarily in an Apple Note and with Gemini AI, to make hands-free decisions about their app's development. A major discovery was the feasibility of using Spotify's SDK for music playback, offering a smoother, UI-controlled experience, as an alternative to YouTube. While Apple Music and YouTube Music were considered, they presented issues with subscriptions or lack of SDKs, narrowing the choice to Spotify or YouTube. The Spotify SDK offers a premium, legal approach but requires users to have a premium account and the app to be approved by Spotify, which could be a lengthy process. The speaker decided to implement both Spotify and YouTube options, with YouTube serving as a free, basic mode and Spotify as a premium one, acknowledging the trade-offs of each. Further research led to the decision to use the Spotify API for song searches, replacing a messier existing API. For lyrics, Musixmatch was identified as the best source for synced lyrics, with LRCLIB as a strong open-source, free alternative that might even be sufficient on its own. For lyric explanations, Genius was deemed the superior and primary source. The speaker grappled with the legality of scraping Genius data, given the app's increasingly legal framework with Spotify and LRCLIB, but ultimately decided to include it due to its value and the non-profit nature of the app. Looking ahead, the speaker plans to conduct deep research prompts to fact-check all the decisions made and assumptions about APIs and data sources, especially concerning information provided by Gemini, which lacked explicit source attribution. They want to ensure the Spotify streaming and search functionalities work as intended without requiring user logins for basic search. 
The speaker also noted that most remaining tasks involve product thinking, design, and user experience, with less need for further technical research, relying on intuition for UI/UX decisions.

View full transcript
You know, the purpose of this voice memo is just to help me formalize my thoughts a little bit more because today, my main working session for, I don't know, maybe five hours or something, I was mostly just working in this one Apple note the entire time and other times in the Gemini chat, just discussing ideas and having it do a little bit of research for me. And so I've already done quite a lot of textual thinking and informed decision-making around the app, but now while walking, it'll just be a little bit more of a, you know, hands-free, um, approach to it. I'm just seeing like what comes to mind now while not focusing so much on it. And I think it's going to help me just by thinking out loud, help me remember a little bit what I learned today and what I decided today and what big questions are still left and like what needs to be more researched. So I think a main consideration that I discovered today was that first I discovered that actually YouTube playing all the music is not the only option. I can actually do it with Spotify. As long as the user has a Spotify account, I don't need to have like a Spotify embed in the app. They actually provide this SDK, which gives me a much smoother playback experience where I don't need to use any of their UI. I can control the player myself and that can stream any song. And that was cool because that leaves the question of whether I should use that one or use the YouTube one and they each have their pros and cons. I could potentially also use the Apple Music one, but the main issue there is just that I don't have an Apple Music subscription, so it wouldn't work for me, but, you know, potentially for other users if they have that instead of Spotify. But I would need Apple developer subscription to even set it up to get like the access tokens or the keys or whatever. And so I would need to pay $100 for that, which I don't want to. And then it could be like YouTube Music, but they do not provide an SDK for this. 
So really the question is just between like YouTube videos and that, or using the Spotify SDK. And the Spotify approach is a little bit more premium and it's actually made for this. It's more like completely legal. But then the big con is just that it requires the user to have a Spotify Premium account, like a Spotify subscription. So that's fine for me because I have it. And I think, you know, Spotify is the standard. Most people have it, but it will limit the app from some users that don't have it. The second limitation is that as of now, while it's in development status, it will only work for me and potentially people whose Spotify account I add to a whitelist, an allow list. I'll add like their email to the list. So it doesn't work for just anyone. And in order for Spotify to allow me to make the app so that anyone can connect their Spotify to it, they would need to approve my app. So I would need to send an application, and they would consider whether they want to approve it or not. And Gemini did say that they came down kind of harder on this in 2025 over 2026, becoming stricter with which apps they approve, requiring them to have more unique features instead of just being a wrapper for the Spotify player. I'm not sure about it, but I do think it would be fine for my app, given that the playback is a big feature, but it's not the main feature. I mean, the value of the app is really that combined with the other features: the lyrics and the practice mode and the feedback, the like error heat map and everything. So I would think it would get approved. I don't know how long it takes, given that I want to kind of get it developed quickly, like this week. And I don't know if they would approve it. And I need to start the application, and I don't know how many features I need to build out before I can send the application, because I guess they need to look at it and see that it actually has these other features. 
So I think I need to build that out before I can send the application. I'm not sure. So I should probably look into that, figure out whether I should send it right now with the prototype, which does already have a song and lyrics, so perhaps good enough. I'm not sure. Or if I need to build it up more. So I've kind of landed on the decision, actually, for this version 1 that I'm making, to do both, both Spotify and YouTube, which is a weird decision. It doesn't really make sense, but also in a sense it does make sense, because each of them has compromises. So I think I'll just, yeah, I'll do that. As long as I've decided it beforehand and planned it, I think it will be fine and I'll be able to implement them both without any issues. I'm just walking to the bus right now. It's going to be a little bit weird recording this voice memo while on the bus, but it will be fine. And so I guess that was a nice thing that I learned today, that you can actually stream the music directly from Spotify. That's pretty cool. Okay, so I decided to implement both the YouTube embed and the Spotify Web SDK for streaming music. And I'm going to let the user just switch: have it on like the home screen or an admin menu, choose which mode the app is going to be in. So the YouTube mode will be like, you know, the free, easy, simple version of the app. And then the Spotify mode will be like the more premium version. The catch here with the free version with the YouTube videos is that YouTube might play ads on the videos, but still, in testing, I've never encountered it. So I'm not sure. And so that's just a big uncertainty. I can't really know until I encounter one, although my sources say it's likely that it will do it. I don't know if maybe the ads don't appear because YouTube has made sure not to have ads on the songs and music videos, because that would be like stealing royalties from the artists. I don't think that's it, though. So I'm not sure. 
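The dual-mode switch just described (free YouTube mode vs. premium Spotify mode) could be sketched as a tiny resolver, say in TypeScript. All names here are hypothetical, not taken from the actual app:

```typescript
// Hypothetical sketch of the "both backends, user-switchable" decision.
type PlaybackMode = "youtube" | "spotify";

interface UserCapabilities {
  hasSpotifyPremium: boolean; // the Spotify Web Playback SDK requires Premium
  isAllowlisted: boolean;     // apps in development mode only work for allowlisted accounts
}

// Pick the requested backend when the user can actually use it,
// otherwise fall back to the free YouTube embed.
function resolvePlaybackMode(
  prefs: { preferred: PlaybackMode },
  caps: UserCapabilities
): PlaybackMode {
  if (
    prefs.preferred === "spotify" &&
    caps.hasSpotifyPremium &&
    caps.isAllowlisted
  ) {
    return "spotify";
  }
  return "youtube";
}
```

The point of a resolver like this is that the rest of the player code never has to know why Spotify was unavailable, only which mode ended up active.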
Maybe it's just been random, or the algorithm has known enough info about me to know that there was no point in putting ads in those videos, but a random user might get them. So yeah, I decided to use the Spotify API also to do song searches, the initial, you know, starting song search. That's going to be much better than the current search with the LRCLIB search API. That one is so messy. So I'm looking forward to that change. So we get the song's unique identifier, which would be submitted from one of those two, and then we get the lyrics from a source, and to unify them and make sure we connect the right ones, we use, I forget what it was called, the song.link thing, something starting with an O, the service that helps you connect songs between services. Odesli or something. Use that one to find the correct YouTube video; that should work. And then for the lyrics, there's many sources. I learned about some new ones today that I hadn't thought about. I haven't investigated it deeper, but based on the Gemini convo, it seems like Musixmatch is the best library overall, especially for synced lyrics. That's just the best one. So if you're going to scrape anywhere, just scrape that one; it's the best, and you don't need to worry about the other ones and have like multiple backup options. And then if I don't want to scrape, then I should just use LRCLIB, because it's open source and it's free and it offers an API exactly for that use case. It's made for that. And the data set is impressively good, in my opinion, for what I've tested. And honestly, maybe even good enough for the app, so I might not even need any other source. So I can continue just using that one. And if I want to upgrade it, I should go with scraping Musixmatch using a library that someone has created for that. So it would hopefully also require minimal work on my part. So then we have the song search with the album art and the song metadata. 
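For reference, synced lyrics of the kind LRCLIB serves are typically in the common LRC format, lines like `[01:23.45] some lyric`. A minimal parser sketch for that format (assumed shape, not the app's actual code) might look like:

```typescript
// Hypothetical minimal parser for LRC-style synced lyrics.
interface SyncedLine {
  time: number; // seconds from the start of the song
  text: string;
}

function parseLrc(lrc: string): SyncedLine[] {
  const lines: SyncedLine[] = [];
  // Matches "[mm:ss.xx] text"; metadata tags like [ar:Artist] won't match.
  const re = /^\[(\d{2}):(\d{2}\.\d{2})\]\s?(.*)$/;
  for (const raw of lrc.split("\n")) {
    const m = raw.match(re);
    if (!m) continue; // skip metadata tags and blank lines
    lines.push({ time: Number(m[1]) * 60 + Number(m[2]), text: m[3] });
  }
  // Sort by timestamp so display code can binary-search or scan forward.
  return lines.sort((a, b) => a.time - b.time);
}
```

With the lines sorted by time, highlighting the "currently sung" line reduces to finding the last entry whose timestamp is at or before the player position, plus whatever sync offset the user has set.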
We get to play the song and we get the timestamped lyrics. Okay, so we have all the data. What else do we need from the web? Only one other thing: the lyrics explanation. So the lyrics lore. Again, there's many sources, and I should probably research it closer, but I learned that the main one and best one is the Genius site, and it's so superior that all the other ones are kind of irrelevant compared to it. It's good enough to just pick that one and not have any backup options. And there's a question, given the current legality status of the app, whether I want to scrape the Genius results and put them into the app or not, like the lyrics explanations. Because I was initially thinking definitely yes, because the app was already so illegal. But now it's actually gotten kind of more legal, with the official Spotify connection and the open source lyrics library. So it's not fully legal still, but for a public free project, it's fine. And so before, either the song playback or the lyrics would have been my biggest concern, but given the state that we have now, if I choose to go and scrape Genius for the explanations of the lyrics, then I think that would be the biggest kind of legality question mark in the app. But honestly, I think it's a cool feature. I would like to have it in my app. I don't want to go to a separate site for it. And for the legality, again, it's fine. I'm not trying to profit from the app. So I think it's fine. So I think I should do it. I need to check feasibility. Hopefully there's a nice and easy way to scrape that data or some library for

I think I'll decide on doing that as long as it's feasible, which I think it probably is, but you know, pending investigation. And that's nice; I think that's all the data the app needs to get from the web, and therefore also most of the research I needed to do. And then the rest is more left to just my decision making and like product thinking, design, user experience thinking, and I don't need to do all this research anymore and make decisions where I don't know what's available. Because that has been kind of annoying, you know: you don't know which data is available, so you gotta research that. And then you never know if you've researched everything, so I used like an AI tool to do the research for me; again, you don't know if it checks everything, how much you can trust it, whether it's hallucinating anything or just omitting some sources, skipping over something. It's just hard to know for sure if the research has been thorough enough. That's cool. Now I'm wondering, within those parts: I was planning on running some deep research prompts in one of the AI engines today, but I ended up not doing it because I didn't get around to it; I was more just formalizing my app idea. So I could and should, I think, definitely run some deep research prompts tomorrow. But what should those be? Because I've kind of made the decisions now already. Well, I think some of them should just be like verifying each of the decisions that I've made, and the data sources and the APIs and how I think this is gonna work, which is based on info from the AI, but it could be hallucinated or misinterpreted from the source info. So I should probably do deep research on all of that and make sure it actually works like that and is actually implementable like that, and that all my assumptions are actually correct, and all my evaluations of which data source is better than others, all those evaluations are actually correct. 
So kind of fact checking and logic checking. I guess the deep research is mostly for fact checking everything I've written. I guess I could also run like a deep thinking model to criticize whether what I've written makes sense cohesively, but I think that's not as valuable, because I think that's more important when I write even more thorough technical details. As of the current state, that type of analysis would be kind of pointless. Okay, but yeah, that's definitely some deep research prompts I wanna make, just like fact checking whatever I've written as my current understanding of the data sources and APIs and capabilities: that, you know, it actually works like that, and that these things would actually work for my app, for the features that I want, the way I intend to use them. For example, the Spotify music streaming: that it would actually work and I can design it the way I want, control playback the way I want, loop it and segment it and stuff, without having to embed like an ugly Spotify widget. And, you know, the fact that, like I've written, the users, even without being logged in, can still use the Spotify API to search any song, because that feature should only require me as a developer to set up a token. It shouldn't require the user to connect with Spotify. Just verify stuff like that, that it is actually correct. Because I've asked and gotten it verified from the Gemini convo, but it doesn't really tell me if it's searching stuff online or not. It doesn't really give me sources. It says that it has verified it, but it never gives me a source. So the UI doesn't really prove whether it's searched the internet or not, or whether this stuff is in its knowledge base or not, or whether it could potentially be hallucinated. 
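The claim above, that song search only needs a developer-level token rather than a user login, corresponds to Spotify's Client Credentials flow. A hedged sketch of building that token request (the endpoint and grant type match Spotify's public docs; the helper name and return shape are hypothetical):

```typescript
// Hypothetical helper: describes the app-only token request a server
// would send to Spotify before calling the search endpoint on behalf
// of anonymous users.
interface TokenRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// basicAuth must be base64("clientId:clientSecret"); encoding is left to
// the caller so this sketch stays runtime-agnostic.
function buildSpotifyTokenRequest(basicAuth: string): TokenRequest {
  return {
    url: "https://accounts.spotify.com/api/token",
    method: "POST",
    headers: {
      Authorization: `Basic ${basicAuth}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: "grant_type=client_credentials",
  };
}
```

The important property for the memo's question is that only the developer's client ID and secret appear here: no user account is involved, which is why search can work for visitors who never connect Spotify.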
There was only like one time I noticed it actually put like a link reference in the chat. But I did ask it to search online quite a lot of times and it did claim to be searching online quite a lot of times. So I really don't know. That's something I maybe miss from using ChatGPT as my daily driver, like AI conversationalist before. It would show more explicitly what was the sources for the answer and for each like part of the answer. So it just gives me some peace of mind that is actually backed up by online sources, by the API docs or something. It might be fine with Gemini if it is trustable, but I would have to test it more because I don't know as of now if it is trustable or not. But if I learn in the future that it is, then this user experience would be fine. But I don't know how it's implemented as of now. Okay, but yeah, that leaves us then with kind of thinking about the rest of the app, which is fine. The other features. And I don't know if there's any deep research to do for any of that or if it's just thinking and like decision making. I think it is actually only thinking and decision making. No more research needed. It could be research in the UI and UX, like what is the best, but I don't think I need to do that. I think I, you know, I've used enough apps in my life and had enough experience with it that I can figure it out from intuition.
Tuesday, January 27, 2026
7:55 AM ยท 14:44
Essence

The user successfully tested their lyrics learning app while driving, confirming its core functionality, but identified several UI and feature improvements needed for a better user experience.

Summary

The user successfully tested their lyrics learning app while driving, finding it worked exactly as intended by displaying large-font lyrics synced with music, similar to Spotify. They noted several issues and potential improvements. First, the player controls are inconsistent between synced and full lyrics modes, which is confusing; they suggest a consistent player with a toggle for lyric view. Second, they considered adding voice search for hands-free song selection while driving. Third, the app currently auto-restarts songs, but the user believes this should be an optional loop toggle, which would also need to interact with the existing segment looping feature. Fourth, the home screen's song cards have a play icon that suggests automatic playback upon clicking, but it only opens the song. The user is unsure whether to make it auto-play or simply remove the misleading icon, leaning towards removing the icon to avoid annoyance when only text practice is desired. They also suggest removing the generic header/logo for more screen space, perhaps replacing it with a descriptive text that disappears after the first use. Finally, they recommend moving the 'change audio' button to an advanced settings menu as it's rarely used, and removing the 'reset offset' button for lyric syncing, as it's redundant and can be confused with other reset functions.

View full transcript
Just tested the current version of my lyrics learning app while driving to the university, and it was working, it was super cool. It was exactly what I wanted, exactly what I've been doing with the Spotify app: being able to play the song and also read the lyrics live, you know, in quite large fonts so I can read it while driving. Still, I cannot do it perfectly because I have to focus on my driving, but quite a lot of the time I can glance at my phone and read like the line. And so that was fucking cool. I'll note a few issues that I encountered or just things that I thought of. Player controls are bad on the phone, as we know from before, so I'm just noting it now again. First of all, now the player controls are different depending on whether we're watching the synced lyrics or the full lyrics. And it's because when we're watching the synced lyrics, we have the extra buttons for setting an offset for the sync, which we don't have when watching full lyrics. But it's very confusing that the player controls shift around between the two modes. And furthermore, yeah, I'm thinking that even when watching the full lyrics, I'm going to have the line highlighted of the one that's currently being sung. Even though it's not like the font is changing or anything, it's still going to be just like a slightly different color or something, so you can see it at a glance. So it makes sense to have the same controls also there when watching the full lyrics. And regardless of whether you put them there or somewhere else, it's just that the part of the screen that's like the music player, the play and pause button and the controls, should be exactly the same between watching the full lyrics and the synced lyrics. It shouldn't be shifting like that. It should be kind of like, yeah, those toggles shouldn't affect each other. 
It's more like, you know, you have that screen, and then the player controls at the bottom or somewhere, and then there's one part of the screen which is the view for the lyrics. And within there, there's like a toggle to see the full lyrics or the synced lyrics. Or perhaps it's even in the player, actually; that could also make sense, that there's a button to toggle between full lyrics and synced lyrics in the player controls. I think that will make more sense, actually. Yeah, that kind of makes sense. I'm excited to work more properly on like the UI and visualizations at some point, but I'm not doing that right now. Secondly, I realized, I don't know if this is a feature I want to add or not, but since I was in the car when I wanted to search a new song on the app, it's a little bit tricky, you know: I have to press the search field and then try to type it with the keyboard while driving. Would be cool if I could do a voice search somehow, more like hands-free search. I guess essentially what I'm getting to is, ideally the whole app, or parts of it, should be able to be used by voice. That's more of a maybe; specifically, just searching a new song. When playing it back now, when you finish the song, it's automatically restarting. I think that should be an option. Like, sometimes you want it to just stop, so you kind of have like a natural break, or sometimes you want it to restart if you just want to keep practicing. You're already sometimes doing that if you're doing a segment, so perhaps in the player controls there should be like a toggle for whether it should loop or not. But then I'm guessing if we have that for the whole song, we've got to think about how that interacts with the choosing-a-segment feature that we already have, because we have one where you can choose a segment, then you play it. 
And since we already have the functionality where, when it finishes the segment, it just plays it from the beginning, this loop toggle should probably interact with that as well. So potentially we have the segment controls: when you play it, either you play the whole song or you play the segment you have defined. And then separately, there's like a toggle, on or off, for whether it should loop. And that works regardless of whether your segments are on or not. So if you're playing the whole song and no loop, it plays the whole song and stops. If you play a segment and the loop thing is off, then it plays the segment and then stops. But if you have it on and play the whole song, and then it gets to the end, then it restarts. And if you have it on and play a segment, then it plays the segment and automatically restarts. Perhaps something like that. Could make sense. I'm not sure if the looping control is more relevant for when playing the full song or for when playing the segment. And I think that makes sense, the way I just said it. From the home screen currently with the UI (the home screen is kind of the gallery of the songs you practiced before), when you hover one of them on the computer, or kind of hold over them on the phone, then it shows as clickable, but in the middle it shows like a play icon. So you would kind of assume when you click that, that it's going to open the song but also automatically start playback, but it doesn't. It just opens the song. So I find it confusing. I'm not sure if I want it to be one click to open it and also start playback, or if one click is just opening it and then you choose within there whether you want to go to like practice or playback. 
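The whole-song/segment and loop-on/off combinations spelled out above can be captured in one pure function. This is only a sketch of the described behavior, with hypothetical names:

```typescript
// Hypothetical sketch of the loop/segment decision table:
// at each tick, decide whether playback continues, stops, or seeks back.
interface Segment {
  start: number; // seconds
  end: number;   // seconds
}

function onPlaybackTick(
  position: number,
  duration: number,
  loop: boolean,
  segment: Segment | null
): { action: "continue" | "stop" | "seek"; seekTo?: number } {
  // The effective end point is the segment end if one is active,
  // otherwise the end of the whole song.
  const end = segment ? segment.end : duration;
  if (position < end) return { action: "continue" };
  if (!loop) return { action: "stop" };
  // Loop is on: restart the segment if one is active, else the whole song.
  return { action: "seek", seekTo: segment ? segment.start : 0 };
}
```

Because the loop flag and the segment are independent inputs, all four cases in the memo (whole song or segment, loop on or off) fall out of the same three lines of logic.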
Perhaps we could even add, from the home screen, from the gallery, like different parts of the song card you can click on if you want to go direct into playback and watching the lyrics, or direct into like a textual practice mode, or whatever different modes we have. I'm not sure. And therefore I think for now it probably makes sense that clicking in doesn't automatically trigger anything. It just goes into the song in one of the pages, whatever I deem to be the default page, and doesn't automatically start playback. Also, though, I think most of the times when I open the song, I do want to play it back. So maybe that would make more sense as default. But I just imagine then the few times that I don't want it, when I actually want to just practice on the text, that's kind of annoying: I click it and it starts playing and I have to go and pause it. And even though the majority scenario is that I do want to play it, it might be so annoying in the cases that I don't that it's better to have the default of it not playing. In that case, I mean, we're gonna rework the whole UI anyways, but for the current UI, we should just change it on the home screen so it doesn't look like it does now, where you think when you click it, it's going to automatically start playback because it has this like play icon. We should just remove that one. So it's just like the card is highlighted; you understand it's clickable, but it shouldn't have the play icon. It should rather have just nothing, or it should say just like open or practice or something. I think just open is good, but maybe it doesn't even need any text. It just has the hover effect and then you click it. I was also thinking, again for the UI, perhaps you don't need to have like a logo or a header at all, because you know, the name is not something I'll put a lot of thought into. There's no logo. And so like we have the domain, but there's no branding around it. 
The header is just kind of a waste of space. Like, let's just remove it. Probably makes more sense, right? Or instead of having like the brand header, we could just have a textual header which is more descriptive. It's like, quickly learn the lyrics of your favorite songs, or something. That would make more sense. Yeah. Or perhaps it's like that when you use it for the first time and you haven't added any songs yet, then you see that header, like quickly learn the lyrics. But then once you've added at least one, that disappears, just to give you more space for the actual app functionality, for your song gallery and stuff. And it's just a search field and your songs and no more unnecessary text anymore. Perhaps that. Also, the change audio button, I think, should not be in the music playback controls, because although it's connected, it's not supposed to be used often, right? It's like a backup solution. It should be in some hidden menu, or tucked in the corner, like an advanced menu for the song, because it's like an admin thing. Like, ideally it should automatically be the correct one. And so we don't need to have it there, because it takes unnecessary space. It should be somewhere, but you got to think of a place that makes more sense. Perhaps it's like, within each song, you have an advanced settings option or something. For now I don't know what else would be in there, but potentially in the future there might be other things I want to put there. I don't know, but somewhere else would make more sense. And then for the offset syncing, we have the plus and minus, like 0.5 seconds, but then to the left of it we also have a reset button, which resets the offset. But we don't need that one, because it's very easy to just use the plus and minus buttons to reset it to zero. It's an extra unnecessary button taking unnecessary space. 
And it can also be confused with the button to reset the whole song from the beginning. And also, I don't know if we need that button either. I don't think
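The plus/minus 0.5-second offset buttons described above, with the redundant reset button dropped, could reduce to a single helper. The clamping bound here is an assumption, not something stated in the memo:

```typescript
// Hypothetical sketch of the sync-offset control: one stepper function
// replaces separate plus, minus, and reset buttons.
const STEP = 0.5;        // seconds per button press, as described
const MAX_OFFSET = 10;   // assumed sanity bound, seconds

function adjustOffset(current: number, direction: 1 | -1): number {
  // Round to one decimal to avoid floating-point drift after many presses.
  const next = Math.round((current + direction * STEP) * 10) / 10;
  return Math.max(-MAX_OFFSET, Math.min(MAX_OFFSET, next));
}
```

Since every reachable offset is a multiple of 0.5, stepping back to zero is always possible, which is the memo's argument for dropping the dedicated reset button.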
Thursday, January 8, 2026
1:07 PM ยท 93:57
View full transcript
And then it all kind of depends on whatever I have to do, like I have to fix some things for my music and everything else which is going on right now. But most of the time when I have to work, I work around half past three, so I leave the house at two. And then I just work till the end of the evening and then I go back home. That's most of my routine. Sometimes my shifts are a bit later, so I start five or I have a night shift which starts at like 11 till seven in the morning. Oh shit. Yeah, but it's like, it's okay because I'm always busy with something, you know. Sometimes I'm making music, other times I'm like investing money and other times I'm reading a book. Other, I don't know, then I'm cleaning my whole room or making a closet. I don't know, I'm just always doing something, you know. So for me, if I go back to work, it's like sometimes it's like quite chill as well for me to work. You work full time? Yeah, I work 36 hours. So it's like full, like in the Dutch it's full time. Isn't it 37 and a half? I'm not sure, but the thing with my job is I cannot work 40 hours because of the times I have to work. It's like if I work 40 hours, then sometimes I have one day off and then I have to work another seven days on a row, you know, and I care about my health as well. And I think if I do work 40 hours, it's gonna like really fuck me up. So I'm not planning to do that anytime soon. Bro, have you fucked someone up in the streets recently while like guarding something important? Just like, you know? Yeah, I mean, I actually have the right to actually punch somebody, like legally as well. Yeah. And talking about punches as well, I'm planning to do a small boxing competition in like three months from now. Yeah. So yeah, I'm training now with one of my colleagues who is like a personal trainer for boxing. So he's giving me all the handouts and then I probably need to go in the ring. Okay, do you know your opponent? No, I have no idea. 
Bro, don't you wanna like, you could be put against anyone, man. Like, don't you wanna know who you're gonna fight? It's like, you know, it's like a competition mostly built on technique. So you actually get like a face mask on as well. So it's like not really, it's like not really high level because I don't, I don't think that's like good for me as well because I'm not sure if I even wanna do that. But the colleague of mine who was giving me like personal training, he was like, yeah, maybe you can try to do this and that. And I was like, you know what, fuck it, let's just do it. So it's like, it's like a competition mostly built on technique. And if you want, you can knock somebody out, you know, if that makes sense. So I'm curious, man. But it's kind of scary, but if I'm scary, if I think it's scary, then I have to do it, man. Sure, it's good. You're gonna grow from it. You know, last place I traveled to, the Fiji Islands, there's this like bar where people fight each other. It's like a fight club bar. You know about it? Yeah, I've heard about it. I wanted to go there, but yeah, I had like a five star, I couldn't. If you were there, would you have just like fought like a stranger from the bar that day? Hell yeah, bro. Yeah? Hell yeah, why not? Why the fuck not? You never know who you're gonna end up against though. Like, could be... Do you wanna say hi? Yeah. Hello! This is mom. OK, how are you? This is Martin. OK. I met him in Bangkok. Hi Martin, where are you from? I'm from the Netherlands. Ah, cool. Yeah. We have close friends who are from the Netherlands, but they're living here in Norway. Yeah, Norway is a beautiful country. Yeah, as you see, I'm now out, on my way out skiing, cross-country skiing. It's super cold and snowy here now, so it's a very nice winter day. It's really good. It's actually snowing here as well. Yeah, it has. Wow. Which city is this? It's like Amersfoort, so it's like close to Utrecht. Yeah, yeah, I know on the map. Yeah, exactly. 
It's like in the center. Yeah, right. Cool. So, yeah, that's where I live. Yeah. OK, have a nice day. Ha det [bye]. You too, bye-bye. Check the view out from my room, bro. Here is actually the train station. A lot of snow. And then here it's just like down into the valley. That's nice, man. I don't have fucking mountains in the Netherlands, bro. If I wanna go skiing, I have to go like all the way to another country to actually do it. We have a lot of skiing here. We don't live centrally in the city, we live on the outskirts. We're like right where all the skiing starts, kind of. So we can just go from our house, essentially. Pretty cool if you wanna be active. Like my parents are always skiing and in the summer they're always like running in the woods. Like I got two, Switzerland and Italy. Oh shit, more traveling, bro. Yeah, more traveling. Yeah. Like most of my trips are like a bit smaller, but that's for me, that's like totally fine. So I think around March, not March, like around June, my parents invited me to go to Ireland, so I'll probably go to Dublin as well. And then at the end of the year, around October, I'm probably gonna go to Mexico as well. Nice. Well, that's so long into the future, bro. It's like I cannot plan anything that far into the future. I don't know where my life is at that point, man. I mean, it's true, but it's like I got some calls from like colleagues who were going there, so if I didn't went with them, I'd probably have to pay like double the money. So I think it's like a good experience to just go with everyone else and see whatever happens. Because if I can do it a bit cheaper, then of course I would, because, you know, everybody wants some cheap stuff, you know. It's a good opportunity. Yeah, I think it is. You told me something about that girl, right? Like, you got rejected? Yeah, it was a very small thing, bro. It was just, like, no, the main point is not really that I got rejected, like, whatever. 
The main point is I did a fucking call approach, like, in Oslo, or Norway, which I never did in my life, bro. And like, in the gym, and that's like, you know, something to be proud of. And she was very beautiful, but yeah, she just said, like, she had a boyfriend, so it was whatever. But it's like, I was not really embarrassed or felt bad about the rejection at all. I was just like, glad I did it, because I knew if I didn't do it, then I would beat myself up over it. So it's not more true than that. It's taking opportunities, man. That's all what life is about, right? Because instead of, like, going into a grave and thinking, what if, instead of, I did. You know, that's like, you know, my whole life, bro. It's like, my biggest fucking issue in life is all these, like, small opportunities that you should seize, but it's just, like, a little bit scary, and I never do that shit, bro. And that's, like, that's all my regrets. Like, I never regret something I did or try, like, even if it doesn't work out, like, whatever. It's a choice I made. But it's, like, all the things that I wanted to do, but didn't, because I was just a little bit scared. That's all the shit I regret, man. That's not, like, always. Because you have, like, so little to lose by just trying, but it's, I get, like, stuck in my head. I mean, you're actually working on it, so I think that's the first thing to do. I mean, you're, like, you're working on it, and then you're actually acting on it as well. If you do it more confidently and you do it, like, a lot of times, then most of the things will fall into place anyway. I mean, that's what I think. But I think, like, if you still have some things where you think about, yeah, maybe I have that opportunity, then just, like, now you know, right? So now you can actually do something about it instead of being like, oh, I should have. Because everything what's in the past has already happened. No, I've been aware of this for years, bro. 
Like, it's my, it's my biggest issue in life. Like, I cannot get over this shit. And I wanna, it's so simple in practice, like, just don't fucking worry about it and just, like, try the thing, whatever it is. But then in actuality, I've, like, never been able to make, like, a huge leap. It's just always these, like, small improvements. It's like, progressive overload in the gym. It's like, I traveled, I got, like, gradually more confident, gradually I took these, like, new chances here and there. It's like a very gradual thing. Even though I wish I could just, like, snap my fingers and

I feel like I need to do these things which normal people don't feel the need to do. I feel like I need to go to a fucking hotel and ask them if I can just stay a night for free, which is like a normal person would not really feel the need to do that. But I know it's like rejection therapy. It's good for me because I would be so scared to ask for that because it's like such a weird thing. And that's why I should just go and do it to like get over it. I feel like I need to do like street interviews and also just, you know, taking a camera and just interviewing people on the streets about whatever, whereas most people don't feel the need to do that. And I think like it would be interesting in itself, but also I feel like I need to do it because I'm scared to do it, just like you said. But I haven't so far gone ahead and actually done it. And so therefore I feel like even more that I need to do it. What do you say when you're overthinking, just like practice thinking about nothing instead? Whatever, bro. What about the girl you told me about from the rave? Tell me about that. So you met her once before in the gym or something. You approached her. You said nothing happened, then you met her again now randomly. What the fuck? That's not a real friend. What kind of drugs do you take for the rave, bro? Eight? Bro. Is it the same as Molly or is it a different thing? What? MDMA. Okay. What did you do for the first date? No. Okay. So, of course, I said to her, like, be surprised a little bit. So I haven't told her anything. I just said, I wanna go on a date with you. And this Sunday we will do everything great. So we went to Prison Island. It's like a, I would say, a sort of like an escape room, but a bit more fun, I would say. So you have like 40 to 50 different rooms. You got like a tag and you can go inside. And sometimes you can, you have to do some puzzles. Otherwise you have to like climb left to right and don't hit like certain lasers or whatsoever. 
Sometimes you have like a football game or a basketball game or just games where you have to think a little bit before you actually try to do something. Because I think it's like, like, like low key, you know, it's like, just really simple. And then afterwards it was like one and a half hours. We went to a bistro, which is like a place where you can just eat unlimited, like different types of food, like salmon and beef and whatever. So you can just choose between like 50 different types of, what is it called, like, like a food pieces and whatsoever. And you can just choose whatever you want. So that's how we did it. Sounds good. Fun activity. It's a good balance. Interesting story. Sounds like it's going good now. It's interesting. People have like different preferences, though. Like, I don't know, I never really did dates and this is not something I need to learn. But I mean, some people love movie dates, first dates. Some people love just having a meal. Some people love like doing some fun activity.

because it takes the pressure off of just the talking. So... It depends, like, also my other part about this whole date was this, I've been talking to her for like, let's say, 4 to 5 months, you know, so it's because I've met her in the gym, and I'm just talking to like a whole lot of people in the gym, just to see what they're up to, because it's always fun to just have a chit-chat with people around there. So I think that also helped out a little bit, because I already knew some about her past and about her, like her job and everything she stands for, so I think that works a little bit better, maybe. But, you know, who knows. When you've kind of known each other for that long, isn't it kind of weird to transition from, like, friendship into dating? I mean, the thing is, is, I actually never... We, let's, yeah, we never actually did something out of, outside of the gym, you know. So it was most of the time, like, I have like a lot of people in the gym, I just meet them at the gym, I just say hi to them for like five minutes, then I'll hit my workout, and then I'll see them probably the next day again. You know, so I'm not sure if you can actually call it a friendship based on that. But you could. I mean, it is like Yeah, it's... I think for me, I'm not sure if it was like a friendship, yes or no. It's like, yeah, whatever. It worked out so far, so... Yeah. Cool. Mhm. Good shit. Type shit. Type shit. Zakelijk ("businesslike" in Dutch). Zakelijk, man. Zakelijk. Just business, bro. It's just business. Alright, you got any other plans for today? Bro, I don't have plans these days. I'm very lost in life, man. I was traveling, I came home, now I need to pick a new direction. I'm still undecided. I came back with my family. I really fucking hate living here. It's like my old bedroom where I've like grown up. I just need a different environment. So I looked for a place to live now. I just found something short-term. I'm gonna go and move out. I found it today. I called with a guy. It was cool. 
I'm gonna move out tomorrow. It's like a different part of Oslo. And it's just like one month. I'm gonna stay there to see. Stay with like some random people there. It's probably gonna be cool. I just also get my own space. I need that. I need to make some friends in the city, bro. Like it's weird. This is my city where I grew up. I like lived there most of my life. And still I'm kind of like, I kind of have no friends there. Which is weird. Like it's lonely right now. I have some friends from university, but we're kind of different because I kind of changed a lot recently and I was never that passionate about the studies. And also I was much less social when I was in university. Like I didn't meet that many people. I was very like locked into my studies. I have childhood friends that are mostly studying other places. I met up with some of them now in the holidays, but honestly, it feels kind of off as well. Like, you know, we're kind of growing apart. So it's like a big thing for me now. It's like, I need to meet new people. And it's weird because usually you do it through like your studies or your job, but I'm not doing either of those. So I need to actually like do something to meet new people because I've gone here like before in my own routine. I'll like, you know, work on my personal projects and I'll like go to the gym. Like I never meet anyone. So I need to do something new to meet new people. And then I need to decide if I want to do this like content creation thing. I need to like go all in because now I'm like half-assing it. I'm like filming a lot of stuff here and there. Like it's... If I'm gonna do something, I need to do it properly. So I need to make a decision there and also find, just pick up my balls a little bit, find some courage. So right now it's like a weird situation. Like everything is very undecided. But there's a lot of potential, you know. Like a lot of great things can happen. 
But I need to make some friends first of all because I cannot like go... That's why I'm doing more FaceTime calls these days because I need this like social input because I'm just fucking lonely, bro. So I need to fix that. I think you have a gym, right? Around your place? Yeah, well, I've been... I don't have a... Like one gym that I go to. I always go to these like different gyms. I mean, there's like a chain in Oslo, so I can go to like any one of those. And I've always just been going to like random ones sometimes, the same one. But I never really met people in the gym. Maybe it's a good idea to start there because you actually share a hobby, right? Yeah. It's like you actually said you wanted to like move a little bit out of your comfort zone, right? Maybe just don't talk, you don't have to talk to like girls or whatsoever. Maybe if you go to a gym consistently, you can see the same people and you can actually hang out with them. Just ask like, hey, maybe you wanna train together? Because then the bar is a little lower if you go for girls already, if that makes sense. Yeah, it's true. I mean, that works for me. Like I've actually made some friends in the gym as well. Sometimes I see them and I just work out with them as well. But that works for me. Like I'm not sure what works for you. I think best thing for you to do is just do something, you know. It doesn't matter if it's good or bad. Just try it. As long as you don't try drugs. That's all. Well, but I think I should try like a little bit of drugs though. Like I think it would be good for me, bro. But I don't feel a rush to do drugs right now. But I think at some point I'm gonna try some psychedelics and stuff. But it's gonna be like a future chapter in life. I think you should have like the basis first. Yeah. Like the underneath layer should be correct and then you can try whatever. Yeah. I think that's a good idea. I agree with that. Maybe, I don't know, maybe if there's like some parties around here. 
I'm not sure if you're like really from partying or whatsoever. But if there's like places where you can actually go and meet new people. You can always just... That's also stepping out of your comfort zone, right? Just asking like, hey, can I join for the night, for the evening? I don't have that, bro. But I mean, there's a city. There's a nightlife scene. And before I was like, I never got into it. I was kind of uncomfortable in the nightlife scene. But through this trip, I learned like how to navigate it, how to feel comfortable. Even if I'm sober, I can still also have a fun time and fuck around. Or I can drink if I want to. So now, tomorrow is Friday and then Saturday. I'm gonna go out and just try to like talk to a lot of people. It's gonna be my plan. Yeah, you should, man. I think that's a good idea. And then maybe in the also... Yeah, I'm like thinking a little bit with you as well. Like... But yeah, I think that's a good place to start. You said you had like some time to think because you have to choose a new path. Yeah. Yeah, I think... Do you have like anything in your mind which you actually wanna try now? I'm wondering if I should sign up for some type of like activity in the city. For example, like... I really wanna learn to improve like my presentation technique. And so I figured if there's like a kind of like a course or maybe like a uni class somewhere where you go there and you're like... You're there maybe like once a week or something and you have to do a lot of like practice presentations or whatever. You're learning techniques and you're also just like doing it and there's also a random group of people there. It's like a place where you have, you know, a common interest and you also just meet some people who do the same thing. Something like that or I could go to... I really never looked into it, but I mean, I'm sure there's a lot of things happening in the city that you can go to if you want to. 
Like there's some like tour, like some people go to this like... mountain top nearby that you can join if you want to or... Go to like dancing class or something, I don't know. I'm wondering if I should sign up for something like this. Well, just do it, man. Just do it, guy. Like, why wait? If you're still wondering, you should just like try to search it up, look it up and then, you know, if it's like not too expensive or if it's maybe free, you can wipe it off your list and say like, hey, I've already tried that. It's not for me. Okay, next, you know. I think just try, man. That's all you have, right? This is the part where you have to try. Okay, okay, I'll try something. I'll try something. You should send me some messages about that as well, man. I'll tell you when I try something. But I mean, it's gonna be interesting because like tomorrow I'm literally moving into this

I didn't know him because he was just living here. Then I have Jordan, and I have two other roommates, which are sometimes home, but most of the time they're gone. So, it's not really too bad, to be honest. It's like, yeah, it's like all good. Actually, I'm gonna change this. Sorry, man. I needed to do the dishes. How was your piss? What? Taking a piss. Oh, bro, you know, you saw me. Actually, I was also taking a piss. Wait, when, bro? Bro, at the orange light, like, when I put my phone up. See, I was actually taking a piss. I didn't know this, bro. Yeah, I mean, I'm a good hider, you know? I can hide the shit like that. I'm gonna make myself another cup of tea. As you should, man. Bro, I don't know if you saw on my Instagram, but I did this one cross-country skiing race, like, a day before New Year's. I saw something coming up, I think. Was it hard? Well, you know, I wasn't thinking too much about it. Like, it's a race, it's a competition, so yeah, it's gonna be hard, but, you know, it's more, you just do it for fun. It's like a fun challenge to do. The race would be, like, one and a half hours, approximately, for my speed. And it would just be fun. Like, the interesting thing is, I haven't gone that much skiing. Like, last winter, I went one trip the whole winter. Like, I didn't do anything. This winter, I was in Thailand, right? So I just came home for this, for Christmas. We just went, like, some easy skiing trips, like, with my family and friends the days before. And then there was a race. And it fucked me up so bad, bro. I did not expect it at all. I thought next day I was gonna be, you know, quite sore, but it would be all right. But I was, like, tired to, like, the next dimension tired, bro. Like, my whole body was feeling... So, like, weak. Like, my whole soul was weak. Like, I could barely walk anywhere. I felt, like, slightly sick, just, like, broken down. My lungs were hurting, bro. Like, everything was fucked. 
From, like, just one and a half hours of, like, exertion. Yeah, shit's good. Yeah, absolutely. It was the only time I felt similar. I've done many races and stuff, but it's only when I, like, ran the marathon, bro. Like, the fucking four-hour running. I was the same amount of tired after this short little skiing race. Yeah, it was just, like, unusual for my body. But, uh... Now I'm fine again. I didn't train for a week after. What? It took me seven days after that until I could do my first workout. I mean, it's good, man. I think, uh... Things like that are always, like, what's making you actually a stronger kind of person. Yeah, for sure. Let me see how it's looking outside. Are you, uh... You're... This is my bike, bro. Hell yeah. Nice bike, bro. Super nice. And this is where I live up there. Uh-huh. Got a little bit of a view. Yeah. I have to, uh... I have to get a package. It's, like... I bought some, like, acrylic cases for booster boxes of Pokemon cards. Because I think they're gonna be, like, really, really worth it in the long term. Yeah. So, we will see what happens. You're such a nerd, bro. Pokemon booster boxes. Bro, it's gonna be worth it. You have to trust me, man. Did you play the Pokemon games when you were growing up? Yeah, on the Game Boy. The Game Boy. Oh, I never had the Game Boy, bro. Oh. That's a shame, bro. That's OG type shit, you know? I had the Nintendo DS. I was playing on that one. And we got... You could actually play those games. Yeah. Those games are expensive now, you know? Like Pokemon Leaf Green is, like, $120 now. Really? Damn. Yeah. It's really worth it, man. Pokemon Diamond was the main one I played. Oh. Like, I think I got, like, Black and Pearl. Yeah, maybe I had a black one, actually, also. That was, like, a newer... That was a newer one. It's more modern. No. No, that's the same. Wait. Diamond and Pearl was the same, like, generation. Black was later, bro. It was, like, Black and White. That was later. Yeah. 
I had one of those. So, you're on the bike right now? Hell yeah, bro. Yeah, it works out pretty well, man. You're biking. Yeah, the audio is fine. There's no noise, actually. Oh, shit. I'm gonna have to be red light because fuck this shit, man. It's cold outside. Oh, you're a gangster like that, huh? Yeah. So, when you work, you're, like, enforcing the rules and then outside of work, you're breaking all the rules. Yeah, of course, man. They're meant to be broken, right? Yeah, yeah, yeah. The rules are actually, like, white chalk. Do you have, like, main locations you work or is it, like, always changing? Most of the time I work around Amersfoort, which is the place where I live. Sometimes there's, like, other places as well. Like, it kind of depends, but most of the places... How do you call it? I can just go to every station if needed, you know? So, it's like the whole railway station in the Netherlands, right? Yeah. There's, like, a lot of options. Let's just put it that way. Most of my shifts are on the same place, which is Amersfoort. What's, like, the word you would use to, like, describe the job? Would you just say, like, security or is there, like, a better word? I'm not sure if it's better. I think the best thing I would suggest to, like, describe it, I would say it's like railway police, if that makes sense. Yeah, yeah. So, I'm legally allowed to use violence as well. I can arrest people if I need to. So, that's why I'm also taking, like, those boxing classes right now, right? I think it's fairly important to have some kind of self-defense, but also to just go all in, if needed, of course. Do you ever do a similar type of job just for, like, a different company or at a different venue or a different event or something? No, I don't. It's just here at railway stations. It's also, if you, like, sign a contract, you are, like, legally not allowed to because you signed a contract there. Okay. 
Or you have to do it, like, when they don't see it or whatever, but I don't want that. Like, I'm busy already, you know, with a lot of stuff. Is there, like, a handbook of the railway rules that you know inside and out? I mean, there are some rules, but, like, some things are, like, not written out for others because they're not really necessary, but I can use it to take action if needed. If that makes any sense. Yeah. Wait, uh... Did you get to your location? Yeah, I have to double-check. All right, wait. One second, mate. I have to get my package. Yeah. No worries. Let me give you... Or can you talk now or are you just waiting for the package? Yeah, I'm waiting, but I'll let you know. I'll give you a room tour. All right, so... Here's the office. You already saw it. This is the table here. Now it's actually passing the train. But I wish I had, you know, someone working as a railway police right here to keep me safe. That would have been something, bro. Close the house, yeah. Then, yeah, we got some clothes. Got my full-body mirror here. Let's see, what's this? Yeah. Yes. Hey, Tom. Yeah, I focused now. Oh, yeah, yeah, yeah. This is, just a reminder, it's the full marathon medal, bro. You see 42,000. Here's the trail around Oslo. Then here's the bed. It's, like, really high up, you see? Because I need the storage underneath. So it's like a bunch of clothes under. Then we got... Do you know this machine? Yeah, it's like a ski erg, right? Yeah, yeah, yeah. We got the ski erg in the room. And then just some more closets. Yeah. That's about it, bro. Good, man. I think you got some good space to work with. Yeah, but I'm trying to get the fuck out of this place, man. So I'm leaving tomorrow. Yeah, for a month. Yeah, and then we'll see if I want to stay longer in Oslo. I'll hopefully again find somewhere I can live, not with my family. But I also might, like, travel again to Bali or something. I'll have to see. I'm giving the city a chance. 
So I'm staying for, like, a month, and then I'm going to reevaluate. That's a good idea. It's also a step, right? But if you've been there for a month, right, can you, like, extend as well or do you have to leave anyways?

Yeah, I should check that out actually, but when I was traveling, I saw recommendations of a bunch of apps like that, but I haven't really considered it, like in my home city, but there's probably similar things here. Yeah, I think it is. Maybe those things work internationally as well. Where's Jordan right now? He's somewhere in Thailand. Hello. I have no idea where he is, but one of his friends is coming over, already flew over to Thailand now. So he's doing a... He's doing a long trip, huh? Yeah, he's there for like three months. I'm not sure if it was like the first of February or the end of February, but at least he's like staying there for quite some time. But again, it's... Yeah, I don't know, it's... He... We need to do like a full round trip around, I think, Vietnam as well. I was actually quite happy I could like already go home within like the two weeks, because for me it's like... I don't go on holidays that often, or I haven't like for the last eight years. I haven't had like the money for it as well, and also the time. And I was there for like two weeks, and it was like my first holiday where I actually had three weeks of spare time in like the last eight to nine years, I would say. So for me, this was already quite, quite long, you know? Damn, bro. You've been grinding. Yeah, but I was working, man. I'm always working, grinding, doing shit, and... Like a study, and I quit after a year. I did get like some sort of degree, but I had to pay like off a whole lot of student debt as well. So I just paid everything off as quickly as I could, so I was just like living like a goblin for like two years. But now I don't have any debt anymore, so that's really, really nice. You dropped out? No, it's like... I have one degree in graphic design, because I've just done that because I thought it was cool, but then I came across the issue that I didn't want to work 40 hours a week behind an office. That fucking sucks, man. I have to do something active. 
So I did that, I finished that one, then I applied for a new, like a higher education. I did that for a year, and in the same year I applied to join the Dutch Army. So I tried that for 10 months. And then, uh... Basically, I kind of finished that first year, and if you finish the first year with everything correct and good and whatsoever... You get like a... Sort of like, what is it called? Let's just say that like the first degree I have was like around level 4, if that makes any sense. If you complete the whole study, if you... If I completed another four years, I would be like level 6. And since I've completed the first full year, and I did everything correct, I got level 5. Let's just keep it really simple. But then, when I completed like level 5, I got like those... I got like that degree. Then I went to the Dutch Army. And from that year, I had like... I think it was like 12 to 13,000 euros of debt. Which is quite some money, but there's like actually people who are like 70,000 euros in debt after four years of studying. And that fucking sucks. I got 40 right now. 40,000? Yeah. Are you being sarcastic now, yes or no? Because I can see you smirk now. No, actually. And I'm collecting student loans for now one more semester until the summer, then it runs out. Why? Well, you're not like studying right now, right? Yeah, it's kind of a... It's just how the system is. Like, you can study, and even if you fall a little bit behind, they'll still like keep supporting you. Unless you fall too much behind. And I have... And I like want to keep collecting it as long as possible because it gives me more freedom. And I think I'll be much easier able to pay back later as long as I have the freedom now to pursue something. And so I'm like... I'm like technically registered to study this semester as well. I just went and did it in the system, even though I'm not gonna show up, I'm gonna end up failing my classes. And so right now, this semester, I dropped out, right? 
So I failed all my classes, so I'm like one semester behind. And that's like just on the edge where they'll still keep supporting you. But if you fall more behind, which I will when I fail when the summer comes, then I'm like too much. Yeah, exactly. So for you, it's like a... I would say a decent choice to keep some extra and just start doing that first and then move on. Yeah, to me it makes sense. It's like the best loan you can ever get. The terms are super good. Even once it runs out, you're supposed to start paying it back, but you can actually procrastinate it like three years before you have to start paying it back. And I mean, my perspective on life is that by that point, I'll have made a ton of money through something I'm very passionate about so that I can just kind of pay it back like that. And so I think I should use it to buy myself as much time as possible. I think that makes some sort of sense, yeah. Oh my fucking God. Shit, bro. So you got a bunch of Pokemon stuff here, bro. But what's the small heads there? You have like a Deadpool figure and then some other stuff. Which ones? These ones? That's Pokemon. Yeah. And then below... And then this? To the left of the way. This? The other side. Yeah, these like small heads. Isn't this the left? Oh. It's like pop figures, man. It's like you can collect these. Oh, it's a Marvel. Spiderman. Yeah. It's like these are like collectibles as well. You can like save like thousands of these things. But I actually came across a small market. Somebody sold these for 2 euros a piece. Oh shit. I bought everything. I bought everything. I sold them for 160 to someone else. And I got these ones. I'm always... I'm always worried like making more... Trying to make more money, right? Yeah. So now, look. Now I got these cases. Like I bought these cases as well. Like these are just like empty cases. Like acrylic cases. It's like just hollow. Yeah. And I try to buy one of these motherfuckers every month. 
So this is like a booster box of Pokemon cards. And basically long term, these will eventually go up in value because these won't get printed anymore. It's like I bought this one for like 250 euros. It's kind of down now to around 200. So it has lost some money, but if I just don't open it for like another 10 years, this could probably be like 3 to 4,000 euros in 10 years. So I think it's worth an investment. Or at least to try. So I bought like 10 of these cases. So for the next 10 months, I'll just buy one of these boxes every month. And then just leave them. Interesting. Yeah. I'm always trying to make more money, man. It's like, I don't know. I think it's fun, you know. Just... I mean, yeah, it's investments, but these ones are unique. Like it's not like traditional investments. You need to know the game a little bit in order to... Yeah. That's cool. And then you get to have the collectibles as decorations meanwhile. That's also true, yeah. But the only thing is I actually bought like these cases now, so I put them in the bag here. Only issue right now is that one of these cases is like barely fitting inside, so that fucking sucks, to be honest. It's like I'm not even sure if you can actually see this, but I have this one to the right, which I have to flip. Because otherwise it won't fit, and that sucks. It doesn't fit perfectly. They're like crammed in there. Well, it looks like it fits perfectly, but is it too tight? No, it's like these boxes are a little too big. So... No, these won't fit. But I can try to fit them and I'll just put them sideways, but then I have another issue, and I'm not sure if it's gonna... if the other one's gonna fit, because I have one more. Look, I actually got these in Thailand as well, man. These ones. Oh shit, did you? Yeah, I did, actually. I saw them on a market. I was like, fuck it, I'm gonna buy some. Aren't you worried these things are like fake? No, because I opened them already. I opened a few. Okay. 
So I have some proof, like on every product where I know most of the time which is fake or real. And the small figurines you bought super cheap, they're also like, you know, the real or...? Which ones? The small figurines in the bottom left. Yeah, those are real. It's like, but it's more like an extra thing. I just like to... These are... I got them from like official stores as well. And some I found on markets as well. So I found this one for like 5 bucks, which is worth around, which I think it's worth around 35 euros now. So it's all... Everything's improving, you know. You just got a shelf full of money right there. Yeah, basically. I actually have... Look, I have this card as well, this one. Oh shit, what

I'm gonna be playing some like Minecraft or League of Legends or something. That's gonna be fucking amazing. Bro, you should, man. Minecraft is good, man. And Pokemon, and play some Pokemon, bro. Yeah, you should, of course. Bye, dude. Bye, dude, man. Bye, dude. We should actually get some things for these as well. Fuck, I actually graded this card as well. It's worth like absolutely nothing, but it's one of the best cards I have from my youth as well. Yeah. I've had this card when I was, I think, around 12. Oh, shit. What's it graded? It's like a 2009, so a buy, bro. It's worthless, but I don't give a shit. Yeah, it's cool. Did you have, when I was younger, I had this, kind of like a book, but it's like empty, but it has these slots that you can put your Pokemon cards into, so I have like a full book. I filled it up with Pokemon cards. Did you have that? Like books of cards? Yeah, it's like a book, or it's more like, I guess, like a folder. I don't know the word. It's like a book, bro, but instead of there being like pages with words, the pages are these like small pockets that you can slide cards into. So each page has the space for like 12 cards or something, and then you're like... One, two, three, four, five, six, seven, eight, nine, ten. Look, wait. Yo, I had one, bro. Let's just get this one out for a second, okay? So, look, you're talking about these shit, right? Yeah, yeah, yeah, yeah, yeah, yeah, yeah. Oh, shit. Yo. It's all organized as well. Yeah, man, of course. It's all got this big row, so I have to, you know. So I have more. So like I also have some gold cards here as well, and I have to keep it sideways a little bit, because otherwise you cannot see everything. These are like the full arts. Yeah. Then I got like a whole ton of, like I got like billions of Pikachu. Bro. Yeah. I got more Pikachu also. How'd you get all these, man? Like where do all these come from? I already have like four that make an image together on the previous page. 
Wait, which one? Here. Yeah, no, like, wait, here's one, I guess. Yeah, you got like four cards who make a picture together. I've never seen that. Yeah, it's quite new. Actually, you have like these days, you have like these cards as well, like jumbo cards. It's like you have these huge ones that like double the size as well if you compare them to each other. Yeah, oh, so that's the same card. Yeah. It's like maybe if you actually save some cards like from when you were younger, maybe these ones, like most of them are beaten up like quite badly, but like these are like old ones as well. Yeah, that looks exactly like the thing I had, bro. This type of... How'd you get so many, bro? Bro, I'm just grinding out markets and get them for like fairly cheap. And I just scam kids, bro. Really? Scam kids? I got actually, yeah, but it's like not really on purpose because some kids actually like the things they buy. Like when I go, we have like a King's Day here in the Netherlands, right? It's a tradition on the day of King's Day, you go outside and you take all the stuff you don't want and you can sell them on like a market, right? A lot of kids actually sell Pokemon cards on there as well. And actually one time I bought a card for like 10 cents, which was worth around five euros. And another kid was like really like, yeah, I'm searching for this card, this and that. I traded it for another card. He did and won, which was around 25 euros. So I actually told him like, hey, this one is worth more. He was like, no, I want to have it. So I just traded it for it and I just sold the other one. 10 cents, 10 cents made like 25 euros profit. I think that's a good margin. Nice, bro. Fuck the game. Fuck the game, bro. Shit, man. It's like, I got, I got like a whole lot of things. Like this is worth like a lot of money as well, but I'm saving it because I think it's, I love it, you know? It's like, these ones are not filled up yet, so I still have some things to collect. 
But I just take my time. Like I don't want to spend like billions of dollars or like euros on this as well because I don't think that's worth it. Yeah, it's like an interesting combination of a hobby and also kind of an investor project. Yeah, sometimes it's like, there's like a lot of things. Like usually a market sometimes actually sell these bags for like four euros a piece. And I just buy like a whole lot of them and I sell a few. And if I have made enough profit, then I just open the rest of it. You know, that's how I make money. If that makes, maybe it could make some sense, you know? It makes sense, bro. I'll have to think about like filling this up too. So. I have one more question for you, bro. It's more, more of like a serious question. This Pokemon stuff is super fun though. I'm just curious how you're like, what's your perspective is, how you're thinking about, you know, career and what you want to do like long-term for like life, you know? Like where do you see your life going? To be honest, I, um, I do have like a plan for the future, like a little bit. But, um, that's kind of also because I've been dating now, right? Because, um, this is for the first time in my life, I'm actually thinking about like, what if I maybe get kids or whatsoever, right? Because since I haven't had like a partner for a long time, I haven't really thought about that as well because it never came across my path. But, uh, for long-term, like, um, I think I'm at a point of life right now where I actually enjoy everything I'm doing right now. Uh, I have a, like, I have a nice job. Um, I have a studio so I can make my own music. I have a really good place where I can actually stay and just take care of my health as well. So I think that's like a good foundation. 
The thing I want to improve on for like the next couple of years, I want to really dig in like investing and index funds and everything like that because in the Netherlands it's really hard to actually be able to buy a house. So if I don't invest my money in something, I will never be able to buy a house. Like a house here costs like, let's say 400,000 euros, right? For a normal fucking house. Like the prices are like out of, out of, out of control. And so that's something I'm looking forward to. Like, um, I don't really care about like being famous or whatsoever, but I do like to make content. So I'm just making content and I'm hoping like long-term that I can actually have like a small community where I can just share my thoughts and my ins and outs, you know, which I think would be quite lovely. And yeah, I mean, the thing for me personally is, um, since I have paid off my student debt, I'm more focused on building like wealth now because, um, I've built like wealth for the last year now and I feel way more confident in my spendings and my everything, right? So if I want to go and take that girl out on a date, which maybe costs like 200 euros, fuck it. I'll just, I'll just, I just do it, you know? And I want to be able to do that more frequently, but also like for my parents and for every, everybody that's like, uh, which I love and care about. But I have to have, um, a plan for that. That's why I want to maybe invest like, uh, let's say uh 1500 euros in a course where I can actually learn how to do everything, uh, according right with like taxes and everything else. That's, that's kind of my plan. But I'm not, I'm not like, uh, yeah, I want to be, uh, be there like five years. I don't really like planning to look forward to in that, in that case, but I do want to improve everything I am doing right now. 
And I'm just doing every day or like I'm trying to do something every day to actually make that happen for long-term because I already see like a lot of improvement in like singing, but also like investing and also like taking care of myself for like the last year already. That's my plan. How's the content creation going? Sorry? How's it going with the content creation? It's like, it's like a little slow, but it's like, um, I'm not planning to rush. For me, most importantly is if I make something which I think is like funny or cool or helpful or whatsoever, then I'd rather think about it a little bit more than instead of just pumping content out and out and out, you know? Because that's something I'm not really willing to do. It's the same for my music. I make like three clips for, for my music. I make that and then I just post it and then, uh, that's for me, that's like enough, right? Because otherwise what I'm,

Welcome home. Just hanging out in my room, calling some friends, then I'll make something. What's going on with you? No, not now. No, when I'm done calling. It wasn't a meeting, it was calling friends. You sent me a lot of messages. I didn't have time to reply.
5a268cfd6021f469845ea69259215df0d014d2e423b9b0c6523f8d5a437ac299_3dc0f21753dd.m4a
Thursday, January 8, 2026
12:37 PM ยท 14:40
View full transcript
And then there are two guys; one of them I know works at Carlings, and the other I don't know much about, he's pretty new, but they're also moving out together after a while, that is, they're moving out at the end of January. It's been up and down, I think, how tidy it's been. That's really the only thing I'd mention when it comes to the common areas and such. But the last time I was there, we sort of talked about it and brought it up, and then it was like, we agreed it was an important thing to keep up, so it shouldn't be that relevant for you. You'll only be living there for a short month, after all. So that's that, and the room, well, there's a bed at least, and yeah, otherwise I think it's pretty straightforward. Yes, that sounds fine. Yes. I'll just, I can double-check whether there's anything else I have that might be relevant. I just don't think so. For one month I don't think there's anything else that would be relevant. But in a way, it's really a house meant for more people than live there now, so in that sense it's good. Plenty of space, that is, relatively speaking. Yes, there are several rooms that aren't in use, so yeah. Yes, I think that sounds fine. I don't know what you want from me right now, exactly. No, well, what I need is basically, I just need you to be on board with moving in with the people who live there, and that it's a bit like, it becomes a bit of a mystery BNB, in a way, or whatever you want to call it. Yes, but I can handle that. I've lived with plenty of strangers along the way.
Yes, but there's really no one there who is, well, I know all of them, so I know that, as I said, I think the only thing, if there's anything, is that it's guys living there, and it's been up and down how tidy it's been, but it's never been unbearable in any way, so that's really the only thing. And I'm trying to be as realistic as I can in standing behind the recommendation, so to speak. But I wouldn't have been surprised either if it was. Oh, one thing, and another thing: there's a dog living there. A dog? Yes. That's fantastic. I love dogs, but I can't get one of my own at this stage anyway. Great. What I really need then is, well, for me it's 28 days, 20 days or whatever it ends up being, yes, it's, what should I say, for me the important thing is just that there's no hassle, so the most important thing for me is really that you understand that you're moving in with people you don't know from before, but also, with all the caveats I've mentioned, that they're decent, ordinary people who are easy to deal with. I've basically never had anyone move in who really caused trouble. And then send me your full name and email, and then address and date of birth. Sorry, national ID number, and then I'll draw up a contract, and if you're reasonably technical, we'll manage to sign it online; if not, we can sign it at our office at Svesta. I'm very technical, so we'll manage that. Yes, and as for moving in, the starting point is basically that as soon as the rent is in, you can just move in. Cool, and yes, how will that work when it's only a month, or under a month? It works such that there's a minimum price of 8000. And anything on top of that?
No. No, simple and straightforward. Yes. So that's really all there is to it. Okay, can I see pictures of the apartment, or see the place myself, before I sign the contract? Sure, well, I can't show you the house itself, it's like, let's see, but I bet I have the old listing somewhere. In that case, I think that would be great. Because the one I saw on Finn, that was more like, it wasn't always a specific place. No, you just have to look past the price and all that, and the pictures are basically, everything is the same as in the pictures, except that in the living room a partition wall has been put up. But that's what it actually looks like. Let's see, what's your phone number? 469-474-188. How's the heating there now in winter? What did you say? How's the heating in the house, in the room, now in winter? Well, there's enough heat at least, enough to keep you warm, I was about to say. How it's heated I don't know, but we basically make sure there's enough for the house to stay warm. Also, the electricity bill is split between everyone living there. So you won't pay for any more days than you actually live there; Gabriel will sort that out and arrange it. Does that come as an extra charge for me at the end of the rental period? Yes. But it's split five ways, and I haven't seen the latest bill, but I believe it's geothermal heating there, as far as I can tell. So it's far from the most energy-intensive; what should I say, it's a modern house, as you can see in the pictures, with good heating systems and everything you'd expect. No, well, that sounds fine to me, so you can send me the contract whenever; just send the contract right away. Yes, I was actually going to wait until next week, but I'll do it before then, so just give me a ring. But when are you thinking of moving in, actually?
I don't think I'll manage it today, since there are only three hours left of the day. But tomorrow, if you get it sent today, then we can manage it tomorrow. Then we're on it. Yes, perfect. I'll send you a message right away. Goodbye. Goodbye. I'll go and fix you something. I'll go and fix you something. But I don't think I've ever had scrambled eggs with smoked salmon before. It's exciting to try.
de7d99d133f8123b88127abef71e31ae23de5a2f1d73d0f147c61470ba22251b_3f671cd12a8d.m4a
Thursday, January 8, 2026
10:44 AM ยท 42:26
Essence

The speaker reflects on past trauma, self-harm, and current struggles with connection and dating, ultimately prioritizing self-growth and authentic relationships.

Summary

The speaker recounts a pivotal moment in their youth where a traumatic experience led to a rebellious phase and self-harm, stemming from a deep self-hatred and a lack of emotional intelligence from their parents. They describe the addictive nature of self-inflicted pain as a secret coping mechanism, contrasting it with the pain of tattoos. Now, the speaker is in Oslo, feeling a strong need to make new, genuine friends, as current social interactions feel unfulfilling. They discuss strategies for meeting people, like joining clubs or approaching strangers, acknowledging the cultural differences in Norway compared to their travel experiences. The conversation then shifts to their current detachment from sexuality, attributing it to a past relationship that left them feeling used and unhappy. They express a strong disinterest in romantic or sexual relationships, prioritizing personal growth and career over dating. The speaker also voices frustration with superficial interactions, particularly with men who are only interested in sex and women who are overly focused on men, desiring deeper connections with people who share similar life goals and ambitions.

View full transcript
I just felt like, like I got hit, and I was just like, dope. I don't know. I just remember that was like the turning point for me where I was like, you know what, fuck them. I'm fucking done. Like, I'm done trying to be like, this like, perfect kid. Like, I'm literally fucking done and I'm just gonna do what I want to do. I'm gonna sneak out, I'm gonna smoke weed, I'm gonna have fun. But then you somewhat doing stupid shit just because they told you not to, right? Like, not even if you necessarily wanted, but like, just to say fuck them. It's funny how kids work like that. Yeah. Just so many other options. I'm like, bro, you guys are like, didn't have like any emotional control. Or like emotional intelligence. Well, yeah, they were probably never taught it. And it seems like all the parents that do like, hit their kids, they're not aware of like, how much it's gonna affect them, like, right then and there, but also like in the future, like how much they're gonna remember that feeling. Thank you. Like, it really makes a strong impression on kids. Yeah. Yeah, but I just know how I would be different. But, I don't know, it's just so annoying because it's like, I think that led to me taking it out on myself too. Like, hurting myself. Like, isn't that so fucked up when you think about it? It's really fucked up, yeah. Like, how fucked, like, it's really fucked up. Like, some of these scars are deep. Like, really fucked up. Yeah. Like, I think that, if they were hurting me, I was like, okay, screw it, I guess I deserved this, or this is like my only way to like, deal with my fucking feelings. You know? Mm. And I think part of that too was because like, I hated myself. Like, I wasn't just uncomfortable in my body. I, like, hated myself. I hated everything about myself. And so like, I didn't really even think twice about like, oh, in the future I'm gonna like, have scars and shit. Like, I just hated myself. I didn't care. Like, I literally just didn't care. 
Um, I don't feel that way anymore. I love myself. For the most part. But, yeah, it's just so crazy, like, how much I didn't, disliked myself. Like, that has, I think like, the way your parents validate you makes a difference on like, your self, like, how you feel about yourself and like, how you treat yourself and your body. You know? Are you afraid that you're gonna at some point fall into, like, fall back into a state like that where you hate yourself so much? No. I don't think I could ever fall back into that state. I really don't. Because like, I don't, yeah, I really don't think I could ever fall back into that. Unless I did something really bad. But like, yeah. I mean, there are moments where I... Actually, I take that back. I can fall into that state, but I just, I deal with it better. I just like, recognize that it's like, temporary. And like, I just wouldn't deal with it the same way. Like, I don't think I could even cut myself like that anymore. You know what I mean? Like, I'm so scared of pain now. Like, I'm literally so scared of pain. I'm like... Yeah, but before, it was just like, so addicting. How much does it hurt when you do it? How would you rate it? Like, it hurts, like, it hurts bad, dude. Like, I don't remember because it's been like, almost 10 years. It's like, stings. And then it constantly stings afterwards because it's like, healing. Yeah. Um, but then I kind of liked that back then because I was like, okay, it's like, reminded, like... It was like, my own secret. Like, I didn't, like, I would just like, wear like, long sleeves and shit and it would just be, and like, it would sting throughout the day and it would just be like, my own secret. And I could be like, kind of like, focus on the pain always. You know what I mean? Mm. So weird. Like, compared to getting a tattoo, for example. It lasts much longer. Similar, but worse because tattoos don't go that deep. They don't go deep enough to cut your skin. 
But, but it feels like your skin's being cut, but it doesn't go deep enough to actually cut your skin. So when you're like, doing this, like, you're, it feels like you're being cut and you're actually cutting your skin. Yeah. Yeah, I need to make some friends in, uh, Oslo. Like, I need someone who I can have like, a real conversation with, also in real life. And someone I can just have like, a good vibe with. I met up with some old friends, uh, yesterday. And it's like, it's cool, but uh, you're also different. Like, it's not, what I, it's not fulfilling enough. I need to meet some new people. Uh, and, but I'm not exactly sure why. There's like a bunch of things I can try. Like, usually you meet people through your studies, like your work is the main thing. And I'm, I'm not really doing either. So it's kind of like, it's just a weird situation right now. I'm thinking I could, um... Oh yeah, let me just ask you like, how would you, where, where should I meet people? How should I make friends? You know, I still need like a good, just like, basic, like headshot, like the first picture. And now, like all of my pictures are like Bali, Thailand pictures. Yeah. I mean, it makes you interesting. Yeah, but I should mix in some Norway pictures probably, don't you think? So it's not a hundred percent Thailand. Yeah, definitely. Yeah. But, yeah. Um, yeah, so I would take dating apps to like, meet girls and you don't have to, like, you can just like, sometimes you can just be friends and then like, get introduced to their friends. You know what I mean? Um, what else? Um... I know it's gonna be so much easier once I just make like, some friends. Because then you give me like, friends of your friends. Like, right now, I'm like at ground zero. Yeah. Maybe you could go, go to the same, like, like do some hobbies somewhere. What kind of hobbies? Like, I saw you were bouldering. Yeah. You could like, meet people there. Coffee shops. 
Like, just start, start conversations like how you used to here. Just like, walk to a random person and be like, hey. You know? Try it. Like, literally just go up to a random person. Like, I know it's more different, it's different because like, people are less receptive because like, they're not on vacation. But yeah, just go up to a random person and be like, hey, what's up? Okay. Are they shut up? You're good at it too. Yeah, it's just, I never really did it in Norway. Because there's a culture difference and also just, I definitely still can do it, but it's, it's like, in my head, it's so much harder here or more scary because like, the version of myself that's here usually like, never does it. Like, the travel version of myself was that guy, right? So I gotta find the courage to do the same thing here. It's like a weird mental battle. Because like, in principle, it's so simple. Yeah. Do it. Just like, push yourself. Yeah. And like, get over that mental block. Because like, probably it was still you. You know, not a different person. It's still you. So I would say, yeah, I would say that's my main advice. Like, in those like, spaces, like, in like, um, like a coffee shop or like a bouldering or like the library or wherever, um, just like, go approach random people. Like how you would when you were traveling. And just like, approach them and talk to them. Like how you would when you're traveling. You know what I mean? I think that's probably, right now, your best bet. Because like, it's just like, and don't be afraid of like, the rejections. Like, you said it's like, more difficult in Norway because like, there's a culture difference. So just expect that, okay, it's gonna be maybe a bit different and maybe a little bit harder. But just do it. Just do it regardless. Okay. What about like, uh, activities? Like, you could do, I don't know, when traveling, people go do these like cooking classes or do this like tour or whatever. It's kind of like touristy. 
I'm sure there's similar things here. I never really looked into it. Would you have like concrete recommendations within that type of thing? That you think would make sense? Mostly no. Like, you could join a book club. You could join some clubs or something. Like, some sports club. Like a sport. You could join a sport for a guy. Or you go to drop in, pick up basketball or something. Drop in basketball. I don't know if Norway has that. They have like any drop in sports. It's the capital, they have everything. Then do, do some drop in sports. I'm really bad though. Why? At sports? Who cares? At basketball. Really bad. Yeah, it could be fun. You could just be like, you know what

Buddy, get to work. It's gonna be interesting to hear after me. I don't, I don't think nobody knows. I think, yeah, he's probably interested in doing something, but if you don't want to, maybe you'll also just have a good friendship. Yeah. It's gonna be interesting. Yeah, I just don't feel interested in, like, having a physical relationship with anyone. You know what I mean? Yeah, you're asexual. Yeah, I think so. I don't know what's happened. My ex-boyfriend fucked me up. Fucker, I hate him. No, I don't hate him, but like, it was just too much. Like, fucking relax, dude. Have some self-control. Have you, uh, talked anymore with your ex? No, not at all. Because I remember, like, before we parted ways, so now maybe like three weeks ago or a month ago, you were like, you wanted to text him and say that it's just like over, that like you're not gonna get back together even when you come back home or whatever. I told you I would recommend you to just wait a little bit longer. How do you feel about it now? I still feel the same. I still feel the same. Like, I don't wanna be with him. I don't wanna be with him at all. Like, I just feel like I was forcing it when I was with him, and I just like was so unhappy when I was with him and just like angry all the time. And just like, that didn't feel like my fault. And I just, like, he was a bestie. Like, honestly, I felt like we were, we did get along, like it, like, I feel like we'd be better friends, but he just, I don't know. It's just like, people are just too thick with their dicks way too much, dude. I don't think we could ever be friends, because he would always want more. But like, I just don't want it. I don't want it. Get away from me, you know? Like, I literally just don't want it. I don't want a guy. I don't want a girl. I don't want anybody. I feel like I'm like really detached from my sexuality right now. Okay. 
I mean, that's good if you wanna focus on career, for example, try and lock in your Instagram or something. Yeah, I have other, more important things to focus on, and it's just not even a top three thing for me. It was a top three thing for him. I just find that disgusting. Why is it disgusting? It's not disgusting. It is. He literally, like, I don't know. I just find it, I find anyone that makes that a priority for themselves is like, no, I don't want, like, yeah. Like, I just don't like it. See, there's a train station, but here is like the beautiful view of the, it's like the valley. Yeah. Gorge. Gorgina. 
But maybe it was just because I was not into him. I don't know. It seems to me you reached the conclusion like a month ago that, like, you don't wanna get back together with him no matter what. And then, if you still have had the same conclusion, feel the same way all the way until this point, like, do you still wanna, like, send that text and, you know, communicate that or just keep, like, not communicating until later? I just feel like I can't reach out to him right now. Like, I'm not ready to have a conversation with him. You know? Like, that's just way too much right now. So... And he doesn't expect it. Like, he just said, hit him up when I'm coming back. But, like, he probably thought I was coming back way earlier. What'd you tell him about how long you were gonna travel? I said I didn't know. I told him I didn't know. What do you think right now about how long you're gonna travel? I think maybe a month or two. My sister and my mom are going to India in March, so I think I'll meet them in India. Shouldn't you be meeting more new people? Well, hopefully in Cambodia. I'm just sick of people's shit, man. I'm sick of people's shit. What does that mean? I just wanna focus on, like... I just wanna... I should be... Maybe I should make a goal of, like, this many people. I wanna have this many conversations a day or a week. But, like... 
I wanna focus more on myself and getting my, like... Getting better at filming and, like, putting out cool shit. Like, I just wanna focus on that, and then maybe I should have a goal for how many conversations I want. But I'm just sick of people's shit. Like, I just mean, like... I'm just sick of, like, people's stuff. Like, stuff, like, surface-level shit. Like, the guys that... Especially, like, the guys that I've met in, like, Pai and Tamer and Chiang Mai, like... Or, like, the guys that you meet at a hostel. Like, all they wanna do is, like, get in your fucking pants. And it's like, I'm sick of your shit. Like, get the fuck away from me. Like, talk to me like a person. And not just somebody that you're trying to get with. You know? So I'm sick of that. And just not gonna entertain it. And then with girls, I'm just sick of, like... Like, I love Del, but, like, she was too obsessed with, like... Guys. I'm just sick of that shit. Like, I don't wanna talk to girls that are obsessed with guys. Or, like... You know what I mean? I love Del, though. I'm gonna meet her in Vietnam, hopefully. But, I'm just like... I don't wanna talk to... Like, I'm just... If someone's not on the same, like, level of, like... What they're trying to do in life as me, I just don't really care to talk to them. You know what I mean? That's fair. That's usually, like, younger people. Or, like, people that... Yeah, younger people that just, like... Are traveling. I don't know. That's... I don't know. What do you think there are some people in these hostels that you're living in that would be more on your wavelength? And you just kind of randomly haven't met them? I think I've made a mistake by, like, choosing party hostels too much. Like, I really have. In Pai, I could have chose a way better hostel. But I think maybe I'll just have a goal of, like... Okay, I'm gonna try to have this many conversations. Like, when I'm in Cambodia. And that... Maybe I'll try, like... 
My goal will be, like, one interesting person to talk to a day. At least. Yep. I'm just sick of people's shit. I never got that feeling that I was sick of people's shit. But obviously, it's different for a girl. But yes, I was very sick of, like, surface-level conversations. But I wasn't, like, sick of people's shit in that sense. Yeah, I'm just, like... I mean, like, I'm 25. So I... And I feel like I've been through some shit that, like, you haven't. So it's just different. Um... But I just feel like I'm at the

And so when we do call, they're always like, oh my God, like, I wanna hear the stories, like, what's been going on? And I'm like, honestly, I don't even know what's been going on. So like, how am I supposed to tell you? Like, I actually have no idea what's going on. So, it's just hard, you know? I think I should let go and start my day and stuff. Okay. Yeah, I have to go too. But it's always nice to talk to you, because, you know, I need to have some, like, social interactions. So these, like, FaceTime calls are my, like, replacement for real-life friends. Yeah. I'm your friend, like, you can always talk to me whenever, like, you can call me whenever, like, we are friends, like, we're for life. Yeah, I appreciate that. Like, yeah. I need it, actually. I actually need to have this. I'm saying, like, sometimes I be feeling like, I don't know, sometimes I'm like, nobody can understand. I feel like you kind of understand a little bit better than other people. We are kind of similar in ways, even though we're, like, from different countries and, like, different backgrounds and stuff, we are, we do share a lot of similarities. Do you feel like I understand you well, or there's, like, a lot of you that I don't understand? I think you understand me well, but maybe there, I mean, what do you think? Like, do you feel like you understand me well, or is there a lot, because, like, Is there a lot where you're like, what is going on in her head right now? So I'm probably the most open with you right now. I'm probably the most open with you out of everybody. Like, yeah. I don't have a feeling like I'm very confused, so, like, I don't understand you, but at the same time, I wouldn't feel, like, comfortable saying, like, I understand you super well. Like, I don't know. Yeah. Yeah. I don't think anyone, like, I mean, my one side does understand me pretty well. I mean, I think my friends do understand me, but at the same time, like, they don't. 
Like, no, I don't, yeah, I feel like there's not really anyone in my life that, like, like, really understands me inside and out. You know? But, whatever. Do you feel like you understand me very well, or no? I feel like I understand you, like, maybe, like, 60, yeah, I feel like I understand you pretty well. Well. Do you wanna put a number on it, like 60%? Yeah. Yeah, like, I feel like you've been pretty open about everything, so I think I know you pretty well. Yeah, like 60, 70%. And that's just probably the time, too. Yeah, I think it takes time, for sure, because I feel like when I do the openness thing, it's like, people can understand, but at the same time, if, I think it's just a human thing, like, I can talk about the things, but, like, still, you know, there's a part that, like, only I can, like, really understand. And you probably feel the same way. Yeah, like, I was thinking about it the other day, and it's like, every person kind of has their own perception of you, but, like, the only person that really understands, like, you is you, because, like, you're, like, inside your brain. Like, everyone has a different perception. Like, some people might think I'm, like, wise, some people might think I'm dumb, some people might think I'm, like, energetic, some people might think I'm, like, you know what I mean? Like, people just have, like, a different perception based on, like, what you show them, or, like, how much time they spend with you, or just, like, their own, like, projection. So it's, like, nobody really, like, It's, like, impossible for someone to understand you 100%, like, the way that you would understand yourself, unless you're spending every single day with them, every single minute. What about your relationships? I'm guessing at the time, that was, like, the person you felt understood you the best in the world. Uh, the first, my first relationship, I felt like he understood me the most, because, like, I was so open with him, too. 
Like, I was just, like, so open, like, so myself, like, like, all my silly parts and, like, all my, like, deep parts, like, I just, like, was completely open. Like, it was so weird. Like, I was just, like, Like, now I feel like sometimes, you know, with, like, my more sillier parts, like, I'm not really, not, like, I don't really let it out that much, but, like, with him, it was like, I let it out completely. Like, I was a fucking weirdo with him. I think because of that relationship ending, it, I was, I was less of a weirdo after, because, like, I just felt like, oh my God, I showed every single part of myself to him, and he still left. You know what I mean? Like, it kind of, like, damaged me in that way. But, um, uh, yeah, so maybe then my first boyfriend at the time, like, I felt most understood by. My second one, the reason why it ended is because I felt misunderstood by him. Like, I felt understood by him humor-wise, but, like, I just didn't understand him on an emotional level. Like, he didn't know anything about me. Like, I never really shared. So, like... Did you feel like that was your fault for holding it in, or his fault for not really asking? Or, like, both of yours' fault together? Both of them. Yeah. And then, like, my third boyfriend, I think I felt pretty understood by him, actually. Like, he asked, like, he actually, like, dug, like, asked me a lot of questions, and, like, I did feel pretty open with him. Like, he just wasn't my person. Like, our humors were very different, so I felt emotionally understood, but I felt, like, humor-wise, I didn't feel understood. I was like, dude, you're supposed to laugh. You're supposed to take it up. But you know what I mean? Yeah. So, yeah. I feel like I still haven't had, like... like, I was, like, reflecting on this and, like, thinking about all my relationships, and I know I haven't had that relationship where I felt like, okay, yeah, this is my person. You know? 
Like, maybe my first relationship, but I was like, oh, so young and naive, and it was the first relationship. I don't think I really felt, like... Yeah, I don't know. All right. I think I should get going. Okay. Good luck with your, uh, bus trip. Thanks. Pretty good, though. Yeah, I'm, I mean, I'm fine. I think my mood right now a little bit, uh, in, like, an overthinking mood. You know? Why? Because it's just a weird situation. Like, I came home, okay, it was weird with the family, I, like, I, you know, now I did the ship. I did the Christmas thing. Now it's, like, the start of the next chapter. There's, like, a lot of things I need to do. I haven't done any of them yet. So it's, like, a lot of uncertainty. There's a lot of, like, possibilities, but I need to pick something and try for it. If I wanna do the content creation thing, I, like, I need to go in harder, because I'm very much, like, half-assing it. So if I wanna, like, say that I'm doing that, I need to, like, fully commit. Uh, but I also feel like I have to make friends in the city. That was, like, the main thing, because, like, now it's just, like, unbearable to live there. Like, I'm just gonna be too lonely. So that, like, kind of precedes everything. It's just a weird situation. And, uh, I feel like I need to do something a little bit risky, crazy, out of my comfort zone. And I just, like, need to do it. And then you get kind of stuck, like, overthinking. Don't be so hard on yourself. You're gonna figure it out. And, like, it sucks that your family is being annoying right now. And I'm sorry. I understand how that feels. And, like, but try not to be so hard on yourself. You're gonna figure it out with time. And, like, just do the things that you can control. So, like, with friends and stuff, like, just go start up random conversations with people and see where it takes you. Ask that girl out on a date. Like, don't be so hard on yourself. And nothing, like, nothing good comes easy, you know? 
It can take time, but you got it. I believe in you. Do you think I'll be able to make a friend in the city? Oh, yeah. 100%. You just have to, like, try. Because, like, don't just, like, be like, oh, I should do this. I should do this. How do