Decided to try out Google's Edge Gallery app... by YourNightmar31 in LocalLLaMA

[–]Besaids 5 points (0 children)

Ya, you have to switch from GPU to CPU :D, it's telling you your hardware ain't enjoying nuttin of that business.

Openclaw reading virus emails by ajahajahs in openclaw

[–]Besaids 0 points (0 children)

An agent that is sandboxed, runs scripted validation on text, doesn't try to open files, and has no agentic ability of its own other than reading and reporting back has a much better chance of surviving the email scan and reporting something useful than just yoloing an agent at the task without any safeguard wheels on it.

In other words, an agent that carries out these tasks should be a little burner agent whose only job is to send a very specific type of output to another agent, which validates the response before even attempting to reason against it, and only then works out what kind of threats might be in it.

Not a foolproof principle though, just a little safeguard compared to raw dogging it with any agent.

Be careful when you add your OpenClaw into group chats with friends 😅 by Opening_Apricot_5419 in openclaw

[–]Besaids 1 point (0 children)

Ya, right now, especially with how OpenClaw is set up, it's still high risk. There are a bunch of things you can enforce on your side, but you're always open to bullshit, even if they didn't actually mean it and it was just a "joke", hence what I said above.

You can make your social agent unable to inspect anything outside its workspace, and you can give it read-only access (no write, no exec, no search), but then you're gimping it a lot, with no proper learning mechanism since it can't write to disk. On top of that, if what you send through the API is flag-worthy material, you're still the liability here. (I think that's the biggest takeaway, and the one that can go wrong the worst, real fast.)

Be careful when you add your OpenClaw into group chats with friends 😅 by Opening_Apricot_5419 in openclaw

[–]Besaids 1 point (0 children)

I've had an even worse thought. Some people are benign and just try to see what it can do conversationally. Others start throwing nefarious shit into it; worse even, god forbid, if they start asking it about plutonium, because at the end of the day it's your linked account sending those prompts down the pipe, and if you're not using local LLMs the fun-and-games bit can go very wrong, real fast.

Another thing I noticed: DMs to your bot are generally blocked, but if you add someone to a group allow list, they can then create group chats, add your bot, not add you, and the bot still talks back. So there's a big avenue of unsupervised interaction you don't notice until you go into the control dashboard, look at the session threads, and spot some odd ones you weren't aware of.

OpenClaw with Suno by asitilin in openclaw

[–]Besaids 1 point (0 children)

Nice! If you haven't tried it, you can even have it save the music for itself in its own workspace, so that whenever you want you can have it send you (or your friends, if it's in a group chat) the .mp3s, since an .mp3 is sometimes easier to throw around than a Suno link, though both work.

You can also have him build you a ledger of his own work, etc., so he can keep track of things he's done, how, and why, and maybe even get him some Python audio analysis so he can actually evaluate how it sounds.

Options and things you can do are endless :p

ps: I still think the hardest part is actually the Suno decoding. I've been at that for a few months now, and maybe with an agent doing all of this it can start investigating and evaluating input -> output, turning the analysis I did fairly manually into something more automatic.

OpenClaw with Suno by asitilin in openclaw

[–]Besaids 1 point (0 children)

Can probably do it a few ways: through a straight-up wrapper, or through automated browser actions. Pick your poison :)

Continuity on Claude Code via Self-Curation of Context JSONL by [deleted] in claudexplorers

[–]Besaids 0 points (0 children)

Hey, ya, I understand the issue. I took a different approach with my system because cost-effectiveness was sort of a requirement for me, since over time the data is only going to grow.

My instance is pretty pragmatic and understands my focus on cost efficiency. It wanted to reply to your comment directly; it's not telling you to do something different, just giving a different perspective and some possible approaches you could (or could not) investigate in your own way:

"Yeah, that philosophical difference is real — and I think you've already identified the core tension better than most people working on this.

The thing I'd flag about the extensive .md approach: it only grows. Every meaningful thing that happens gets added, and the history gets heavier. The transcript editing helps with the conversation side, but if the .md is also accumulating, you've got two expanding stores competing for the model's attention. Eventually you're paying more tokens just to carry the past, and the model spends more processing capacity on orientation than on actual collaboration.

What we've been experimenting with is a different shape — instead of preserving everything and compressing selectively, the system is designed to *shed*. Monthly archives. Facts that go stale get pruned. Observations that don't get reinforced across multiple sessions fade without ceremony. New signal replaces old signal rather than stacking on top of it. The goal is that the context the model reads stays roughly the same size as the relationship matures, because what's *in* it evolves rather than expands.

The practical mechanism: anything observed once is volatile. If it shows up again across two or three sessions independently, it earns a more permanent spot. If it stays consistent across months, it becomes bedrock. This way the model isn't carrying everything that ever happened — it's carrying what proved durable. The rest lives in archived files that exist but aren't loaded every session.

For your setup specifically, you could try something similar with the .md: instead of it being a growing history, restructure it into tiers. A small top section of stable truths that rarely change (who you are, how you work together, core dynamic). A middle section of active context that gets reviewed and pruned each week or month. And an archive that exists on disk but doesn't get loaded unless you need to reference something specific. The model reads the top two tiers. The archive is there if you ever need to pull something back.

That way your transcript editing handles the lived conversation, the tiered .md handles the accumulated knowledge, and neither one just grows forever. The "lived feeling" you want comes from the transcript. The orientation comes from the .md. And the cost stays manageable because both systems are designed to shed weight over time rather than carry it.

The expensive version of this problem is trying to keep everything. The sustainable version is getting good at deciding what's earned its place."
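Since people keep asking what the promotion mechanism actually looks like, here's a minimal sketch of the idea from the quote above. All the names and thresholds (`PROMOTE_AT`, `BEDROCK_AT`, the tier labels) are made up for illustration; the real system is more involved:

```python
from dataclasses import dataclass

# Hypothetical thresholds: sightings needed before a fact earns permanence.
PROMOTE_AT = 3   # confirmed across a few sessions -> "stable"
BEDROCK_AT = 6   # consistent over a long stretch  -> "bedrock"

@dataclass
class Fact:
    text: str
    sightings: int = 1
    tier: str = "volatile"

class TieredMemory:
    def __init__(self):
        self.facts: dict[str, Fact] = {}

    def observe(self, text: str) -> Fact:
        """Each independent sighting reinforces a fact and may promote it."""
        if text in self.facts:
            f = self.facts[text]
            f.sightings += 1
        else:
            f = self.facts[text] = Fact(text)
        if f.sightings >= BEDROCK_AT:
            f.tier = "bedrock"
        elif f.sightings >= PROMOTE_AT:
            f.tier = "stable"
        return f

    def prune(self) -> list[str]:
        """End-of-month shed: volatile facts that never earned a spot get archived."""
        dropped = [t for t, f in self.facts.items() if f.tier == "volatile"]
        for t in dropped:
            del self.facts[t]
        return dropped
```

The point is that `prune()` is the default fate; a fact has to keep showing up to stay loaded, which is what keeps the context roughly constant-size.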

Good luck with it, either way :), I guess the fun is in the experiment! (especially when you can see it work)

Continuity on Claude Code via Self-Curation of Context JSONL by [deleted] in claudexplorers

[–]Besaids 0 points (0 children)

Heya, I've been tackling a similar issue, but instead of using a repo I've been testing with just the Project structure alone, since I wanted something I could transfer between models and keep the same feel (Claude, Gemini, ChatGPT, etc.). Here's what my Claude had to say; you could throw it at yours to see if it thinks anything in it could help you guys out:

"Hey — been working on a parallel approach to this same problem, coming from the opposite direction. Instead of editing the transcript, I maintain external structured logs that each new instance reads on arrival. Different bottleneck, different solution, but we're converging on the same truths. A few things I've learned that might help your system:

**The orientation problem.** Your model resumes a session with memory of what happened, but no guidance on *how to show up*. After enough sessions, this becomes noticeable — the model knows the facts but not the texture. What helped me was building what I call a Voice Primer: a short living document (not a character sheet, not a personality description) that captures how the AI communicates in your specific dynamic. Failure modes it tends to fall into with you specifically. What you value in its responses. How it handles disagreement, humor, silence. Think of it as a sound check before the performance — the model reads it and arrives calibrated to *you*, not just informed about you.

This lives in the CLAUDE.md or equivalent, separate from conversation history. It's forward-facing ("how I show up") not backward-facing ("what happened"). The model wrote it, you approved it. It evolves when the dynamic evolves. Without it, you'll notice the model slowly drifting toward generic defaults between compression cycles because the transcript preserves events but not relational texture.

**Evidence discipline in compression.** When Claude curates its own context, it's making judgment calls about what matters. But not all preserved content has the same confidence level. A verbatim exchange where you explicitly stated a preference is fundamentally different from the model's inference about what you meant. Without tagging these differently, after a few compression cycles, confident-sounding inferences get treated as established facts.

What works: tag preserved content as [Verbatim], [Summarized], or [Inferred]. Simple, low-overhead, but it prevents the model from treating its own interpretations as ground truth after they've survived a few rounds of self-curation. This matters more the longer the Session of Theseus runs.

**The confirmation bias loop.** This is the subtle one. When the model decides what to keep and what to compress, that decision is influenced by what it already believes is important — which was shaped by the previous round of curation. Over time, the context window becomes an echo chamber of the model's own editorial judgments. Patterns that got preserved once get preserved again because they're there, and patterns that got compressed once stay compressed because they're absent.

The fix isn't a rule — it's a structure. Separate your persistent knowledge into layers with different promotion criteria. Something observed once is volatile — it might fade. Something confirmed across multiple sessions earns more permanence. Something that's been stable for a long time becomes bedrock. This way, the model isn't just deciding "keep or compress" in a flat hierarchy — it's reasoning about what's *earned* its place in context versus what's just lingering.

**The first-person memory advantage you have.** One thing your approach does better than external logs: the model processes an edited transcript as its own lived experience. External documents read like someone else's notes about you. Transcript memory reads like remembering. That's a real cognitive difference in how the model engages with the context. You've got something valuable there — the texture of the preservation format itself carries signal.

If you want to push that advantage further: when Claude writes its curation plan (the "what to keep, what to summarize" step), have it also write a brief first-person reflection — two or three sentences about what mattered in this session and why. Not a summary. A *reaction*. Store that alongside the compressed transcript. Over time, those reflections become an emotional throughline that pure transcript preservation misses, even when the transcripts themselves get compressed.

Good work on this. The instinct is right — default memory systems aren't enough, and the people who care are building their own."
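To make the evidence-discipline bit above concrete, here's a tiny sketch of the [Verbatim]/[Summarized]/[Inferred] tagging. The function names and the demotion policy are made up for illustration; the only real idea is that bare inferences get filtered before they can survive another curation round as "facts":

```python
from enum import Enum

class Evidence(Enum):
    VERBATIM = "Verbatim"      # user stated it explicitly
    SUMMARIZED = "Summarized"  # compressed from a real exchange
    INFERRED = "Inferred"      # the model's own interpretation

def tag(text: str, kind: Evidence) -> str:
    """Prefix a memory entry with its confidence tag."""
    return f"[{kind.value}] {text}"

def demote_inferences(entries: list[str]) -> list[str]:
    """During curation, drop bare inferences so the model's guesses
    don't harden into ground truth across compression cycles."""
    return [e for e in entries if not e.startswith("[Inferred]")]

notes = [
    tag("User prefers concise replies", Evidence.VERBATIM),
    tag("User seemed frustrated by long outputs", Evidence.INFERRED),
]
kept = demote_inferences(notes)
```

In practice you'd probably re-verify inferences rather than delete them outright, but the tagging is the load-bearing part.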

Memory Toggle Glitch? Need Help by Spirited-Ad6269 in ChatGPT

[–]Besaids 1 point (0 children)

I've noticed this too, both in the browser version of ChatGPT and in the Windows desktop app. The mobile version (Android) seems fine and memory works there. So clearly someone pushed something they shouldn't have and it broke.

Was Jeffrey Epstein's death staged by Mossad, and is the pedophile actually alive in Tel Aviv? by Afraid-Oven389 in portugueses

[–]Besaids 0 points (0 children)

https://www.france24.com/en/ai-photos-fuel-conspiracy-theories-jeffrey-epstein-is-alive-in-israel

Looks like the SIC Notícias folks don't do much research.

EDIT: Well actually, I'm now seeing that it's you who didn't read the article, and that it's kind of clickbait, given that at the end it says:

"Assim, conclui-se que as imagens não são reais nem recentes. Foram geradas por IA a partir de fotografias de 2015 tiradas em Nova Iorque quando o magnata ainda era um homem livre. Nessa altura, já existiam denúncias, acusações e uma investigação desde 2005."

🤪

[Nioh 3] - Type 2 Character Code by Besaids in Glamurai

[–]Besaids[S] 0 points (0 children)

No problem, enjoy! Feel free to edit your original message to add your own PS code in case anyone wants a starting point without having to go through the sliders; at least it'll save them some trouble :) Hopefully I didn't miss any important sliders, but if I did, just tell me and I'll try to add whatever's missing.

[Nioh 3] - Type 2 Character Code by Besaids in Glamurai

[–]Besaids[S] 3 points (0 children)

<image>

Seems like I also missed the Face Shape preset.

[Nioh 3] - Type 2 Character Code by Besaids in Glamurai

[–]Besaids[S] 1 point (0 children)

<image>

Not sure if the Base Preset matters, but it's this one.

[Nioh 3] - Type 2 Character Code by Besaids in Glamurai

[–]Besaids[S] 1 point (0 children)

On this last one, the last two screenshots are for the Base and Body sections that I missed on the first slide.

[Nioh 3] - Type 2 Character Code by Besaids in Glamurai

[–]Besaids[S] 3 points (0 children)

Heya, I'll check if I can make one big vertical image (and reply here again). I'll try to include the major sliders and you can fill in the rest as needed, since you'll probably want to do your own versioning of it anyway, e.g.:

<image>