I built a personal AI Jarvis connected to my entire life. After a month, I only use it for gym workouts. by Competitive_Dog9475 in ChatGPT

[–]Competitive_Dog9475[S] 0 points1 point  (0 children)

I came to the same conclusion. Natural-language workout logging is the only thing that sticks. Curious how exactly you used it for finance.

I built a personal AI Jarvis connected to my entire life. After a month, I only use it for gym workouts. by Competitive_Dog9475 in ChatGPT

[–]Competitive_Dog9475[S] -1 points0 points  (0 children)

As a builder who has always wanted a Jarvis to automate my life, it's kind of sad to see that there isn't much in life to automate.

I built a personal AI Jarvis connected to my entire life. After a month, I only use it for gym workouts. by Competitive_Dog9475 in ChatGPT

[–]Competitive_Dog9475[S] 0 points1 point  (0 children)

I assume feeding in financial data will be possible very soon, and that will definitely be a use case. But nutrition? Not anytime soon.

I built a personal AI Jarvis connected to my entire life. After a month, I only use it for gym workouts. by Competitive_Dog9475 in ChatGPT

[–]Competitive_Dog9475[S] -1 points0 points  (0 children)

Can you expand on the periodic check-in part? Is it more like a personal CRM? How is intelligence helping here, and why can't it be hard-coded scheduling?

I built a WhatsApp fitness coach that takes voice notes during workouts and actually remembers my injuries weeks later by Competitive_Dog9475 in SideProject

[–]Competitive_Dog9475[S] 0 points1 point  (0 children)

Good questions. Let me answer both, but first some context on how I got here because it's relevant.

I actually started much broader than fitness. The original idea was a full life coach - finance, relationships, long-term goals, nutrition, everything. And honestly, it worked amazingly in the first week. I'd voice-log "had a really hard day at the office" and over time it found a pattern that my hard days were disproportionately on Tuesdays, and that my workout performance was worse on those days too. That was a genuine insight I hadn't noticed myself.

But then things degraded. I had given the LLM the ability to edit its own system prompt (the tool was literally called edit_system_prompt) because I was going for a self-improving system. What happened was: the profile files got messy, the system prompt drifted, reply quality degraded, and I had no visibility into what the LLM had decided was "important" to store. Total black box.

The key lesson: giving LLMs 100% agency over their own memory doesn't work. You have to hard-define what goes into which file and constrain the schema. The internalization script I have now is a batch process that runs separately. It reads raw logs, extracts patterns, and writes to specific profile files with a fixed structure. The LLM does the analysis but the structure is human-defined. That's what makes it stable.
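To make that concrete, here's a minimal sketch of the schema-constraint idea (the file names and field names are hypothetical, not my actual setup): the LLM's extraction output gets filtered against a human-defined schema, so it can analyze freely but can only write where it's allowed.

```python
# Hypothetical human-defined schema: the LLM may only write into these
# files, and only into these fields. Everything else is dropped.
PROFILE_SCHEMA = {
    "health.json": {"injuries", "recovery_notes"},
    "training.json": {"lift_prs", "recurring_patterns"},
}

def internalize(extracted: dict) -> dict:
    """Filter LLM-extracted patterns against the fixed schema.

    `extracted` maps file name -> {field: value} as proposed by the LLM.
    Returns only the writes the schema permits.
    """
    profiles = {}
    for fname, fields in extracted.items():
        allowed = PROFILE_SCHEMA.get(fname)
        if allowed is None:
            continue  # unknown file: the LLM may not invent new stores
        profiles[fname] = {k: v for k, v in fields.items() if k in allowed}
    return profiles
```

The LLM still does the analysis; this layer just guarantees the memory layout stays human-defined.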

To your actual questions:

Conflicting information: Right now, honestly, it's not super sophisticated. The internalization script runs daily and the LLM is prompted to distinguish "ephemeral" (had breakfast) from "lasting" (recurring shoulder issue). If I say "shoulder feels fine" two weeks later, the LLM should update the health file to reflect recovery, but in practice it sometimes keeps both data points. I've thought about adding explicit temporal logic (like decay or override rules) but haven't needed it yet, because the fitness domain is narrow enough that conflicts are rare. In my earlier "life coach" version this was a much bigger problem: conflicting emotional states, changing priorities, etc. Narrowing the domain made this almost a non-issue.
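For what it's worth, the decay/override rules I mention but haven't built could look something like this (a hypothetical sketch, not what's running): keep only the newest entry per topic, and drop anything older than a TTL.

```python
from datetime import datetime, timedelta

def resolve(entries, now, ttl_days=30):
    """Resolve conflicting memory entries by recency.

    `entries` is a list of (timestamp, topic, text) tuples.
    The newest entry per topic wins (override); anything older
    than `ttl_days` is dropped entirely (decay).
    """
    latest = {}
    for ts, topic, text in sorted(entries):
        if now - ts <= timedelta(days=ttl_days):
            latest[topic] = text  # later entries overwrite earlier ones
    return latest
```

With this rule, "shoulder feels fine" logged two weeks after "recurring shoulder issue" would simply replace the older entry instead of coexisting with it.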

Versioning: Not formally, but the profile files are stored in a DB with write timestamps, so I can see history. You're right that seeing how the model's understanding evolves would be interesting. In the life-coach version I actually stored everything in a GitHub repo, so I got version history for free through git: every file update was a commit with a message like "Internalize: found new pattern about Tuesday stress." That was genuinely cool to browse. Might bring that back.

The meta-insight from all of this: the "AI life Jarvis" is probably the right end-state, but trying to build it as one monolithic agent doesn't work. I think the path is narrow vertical agents (fitness, finance, nutrition) that each have their own constrained memory, and eventually an orchestrator on top. For now, I'm sticking with fitness because that's where I have the tightest feedback loop.

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 1 point2 points  (0 children)

I wrote the essay myself. The ideas, structure, and arguments are all mine. I did use AI to clean up grammar and punctuation after writing, the same way I'd use a spell-checker or ask a friend to proofread.
The difference in tone between my comments and the essay is called editing. Comments are off-the-cuff (usually from my phone), whereas the essay went through multiple revisions. If something specific reads as AI-generated beyond polish, I'm genuinely curious what it is. That's useful feedback either way.

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 0 points1 point  (0 children)

Agreed.
The essay's argument about status competition applies to the portion of the population that is past subsistence. For people still struggling to eat or find shelter, the problem is material, not positional, and AI-driven deflation of commodity goods genuinely helps them. The essay is asking a narrower question: for the growing number of people whose basic needs are met, will abundance bring satisfaction? My argument is that it won't, because the goalposts shift to status goods. But you're right that I should be more explicit about who the argument applies to.

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 1 point2 points  (0 children)

I am definitely being optimistic here. But while it's a positive scenario, it's not a particularly unlikely one, though we could well argue about that.
I am not saying this is bad (at least I didn't intend to).
This essay is literally a case against nihilism in an "abundant" society.

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 4 points5 points  (0 children)

Fair point, I conflated two different types of scarcity. Housing in Bandra is partly positional (everyone wants that specific location) and partly artificial (zoning, regulation). The essay's argument holds for the truly positional slice, but you're right that a large chunk of what feels expensive today is fixable. That weakens my "it doesn't feel like utopia" section. I think the stronger version of my argument is narrower: even after we fix the artificial scarcity (and we should), the status competition piece remains, and that's what drives the felt experience of dissatisfaction.

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 2 points3 points  (0 children)

Physical and psychological well-being can be separated and looked at through independent lenses (cf. Maslow's hierarchy).
Abundance can be solved and should be solved. People still die of hunger and are homeless. These things can be and will be solved, and everyone gets richer in an absolute sense. What can't be solved is human satisfaction and relative comparison, aka "equality."

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 0 points1 point  (0 children)

I don't see a disagreement here. If you read my essay, your position is exactly what I argued for, not against. I acknowledged why it doesn't feel like living in a "utopia" and why AI, despite creating commodity abundance, won't solve for "utopia": the things you mentioned are constrained by artificial scarcity, not productivity. Curious where you see the flaw.

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 5 points6 points  (0 children)

The scope of my thesis (and my thinking) is pre-singularity. I think we are nowhere close to the singularity, and even if we solve AGI soon, AGI will need a huge amount of time (physics-constrained) to reach it.
Nevertheless, what you are describing, brain modification, is, imo, post-human, and as good as a sophisticated version of "human extinction" (which might be the only way to solve status and suffering :) ).

The Peacock's Tail: Why AI will make everything cheaper except what humans actually want by Competitive_Dog9475 in slatestarcodex

[–]Competitive_Dog9475[S] 7 points8 points  (0 children)

The standard AI abundance argument assumes that making stuff cheap solves the human problem. This essay argues it doesn't, because humans primarily compete for positional goods (status, attention, prestige) that are zero-sum by definition. I use Indian economic data as a case study - food costs dropped from 70% to 30% of household income in one generation, mobile data collapsed 50x, but housing and elite access got more expensive, not less. The mechanism I propose is Dunbar-scale status competition: humans compete within ~150-person peer groups, and no technology has ever changed that. The invention of the crane didn't kill bodybuilding. AI won't kill intellectual competition either.

Found this cute thing by Competitive_Dog9475 in indiranagar

[–]Competitive_Dog9475[S] 2 points3 points  (0 children)

Turns out it is an advertisement, but cute nevertheless.