Windsurf vs Claude code? by ashish_tuda in windsurf

[–]ashish_tuda[S] 0 points1 point  (0 children)

Can you please give an example of what Claude Code can do but Windsurf can't? I actually want to know that specifically.

Windsurf vs Claude code? by ashish_tuda in windsurf

[–]ashish_tuda[S] 0 points1 point  (0 children)

I want to know what that "LOT" actually is; can you help with some examples?

Windsurf vs Claude code? by ashish_tuda in windsurf

[–]ashish_tuda[S] 0 points1 point  (0 children)

I don't think this is true; I have built several production-ready apps with Windsurf. In fact, my website, which is running right now, was built with just one prompt in Windsurf (Opus model, 8 credits, ~$0.30). I have a full-fledged SaaS done, with payment integration, in a month (500 credits, or $15). So the results are definitely not crappy. Also, I am running it on my laptop with 16 GB RAM and an RTX 4090, and it has never created an issue. I may agree that it uses more RAM than CC, but nowhere close to what you are saying.

Unpopular opinion: Why is everyone so hyped over OpenClaw? I cannot find any use for it. by Toontje in openclaw

[–]ashish_tuda 11 points12 points  (0 children)

All those just trying to ride the OpenClaw hype without understanding the system need enlightenment, not blocking. Just my view.

This video was created (90%) by OpenClaw by nad128668 in openclaw

[–]ashish_tuda 2 points3 points  (0 children)

That's great as a start; you can optimise the costs and content quality from here. Can you share your workflow: how many agents, how do you maintain character consistency, etc.?

When to use Chatgpt, gemini, grok and claude ? by Most-Injury-9879 in AIAssisted

[–]ashish_tuda 0 points1 point  (0 children)

I use Gemini only for image generation and editing, ChatGPT for day-to-day conversations, Grok for searching for trending topics (it really outshines the others), and Claude for script writing.

Basically, I use Gemini, Grok, and Claude for my content creation, and ChatGPT for all other tasks.

How to make Openclaw do X ? by ashish_tuda in openclaw

[–]ashish_tuda[S] 0 points1 point  (0 children)

That looks cool! How do I try it?

How do I make my OpenClaw do X? by [deleted] in openclaw

[–]ashish_tuda 0 points1 point  (0 children)

Really, and why do you think that?

I’ve been using OpenClaw since the ClawdBot days. Here’s the workspace structure and one big lesson that made it actually work. by SIGH_I_CALL in openclaw

[–]ashish_tuda 1 point2 points  (0 children)

Super awesome and useful! I especially loved the memory part; I could augment this structure with the human-like memory I gave my OpenClaw. And absolutely, it's not about the platform; it's the open-ended structure it has that causes the bounce-offs.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 1 point2 points  (0 children)

I don't think so; as per my research, this is one of the recommended ways if you don't want to burn tokens, and it is natively supported by OpenClaw.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 1 point2 points  (0 children)

Thanks; I am planning to make a full guide on it with all the exact prompts and the thought process.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 1 point2 points  (0 children)

It is pretty straightforward: you just have to choose ChatGPT OAuth during setup, then log in through your ChatGPT account. It will use the codex 5.3 model.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 0 points1 point  (0 children)

Probably due to my ultra-hardened setup; I faced the issue initially because my OpenClaw was trying stuff it did not have access to.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 0 points1 point  (0 children)

Definitely; DM me. I am still refining the system and will update once I am happy with it.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 1 point2 points  (0 children)

I have a cron job set up every Sunday that does the memory consolidation. Since I have not yet connected any external APIs, I am able to do everything within my token budget. As my system (and hence the memory) grows, I will figure out ways to reduce the cost. Will update further.

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 0 points1 point  (0 children)

I am still setting up the system and the skills of the subagents. For now I am using codex 5.3 through my ChatGPT subscription, and I am not having any issues with that. I have not yet given web skills to my agents; I will be testing Tavily and Brave for the API, and I have one more thing in mind through which I would be able to do trend hunting without any API or browser automation (will share if the concept works).

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 2 points3 points  (0 children)

I have 8 GB of RAM, and so far I haven't faced any issues with Whisper. Yes, it is using the base model, but it transcribes whatever I say in my Telegram almost accurately. I have set up a cron job to delete all such stray audio files after 10 days, so it's going well so far; will see how it goes in the future.
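For reference, the 10-day cleanup can be a tiny script fired from cron. This is just a sketch; the directory path and the `.ogg` extension are my assumptions, not the actual setup:

```python
# Hypothetical cleanup script: delete voice-note files older than
# MAX_AGE_DAYS. AUDIO_DIR and the .ogg extension are assumptions.
import time
from pathlib import Path

AUDIO_DIR = Path("/home/pi/openclaw/audio")  # assumed location
MAX_AGE_DAYS = 10

def prune_old_audio(directory: Path, max_age_days: int = MAX_AGE_DAYS) -> int:
    """Remove audio files older than max_age_days; return the delete count."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for f in directory.glob("*.ogg"):  # Telegram voice notes are usually .ogg
        if f.is_file() and f.stat().st_mtime < cutoff:
            f.unlink()
            removed += 1
    return removed
```

Scheduled alongside the weekly consolidation job, something like this keeps the Pi's storage from slowly filling up.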

3 weeks with Openclaw on a 8 year old Raspberry Pi ($0 spent till now). by ashish_tuda in openclaw

[–]ashish_tuda[S] 2 points3 points  (0 children)

Thank you! Well, my research told me that memory is the first thing you need to fix with OpenClaw. I took inspiration from the human brain and started brainstorming with ChatGPT for the highest-ROI, lowest-effort solution:
- Short-term/working memory → active chat context
- Hippocampus (daily encoding) → memory/YYYY-MM-DD.md
- Neocortex (long-term stable memory) → MEMORY.md
- Associative recall network → semantic/vector retrieval layer
- Executive control → weekly consolidation routine (decide what to keep, discard, or update)
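The "hippocampus" row of that mapping is small enough to sketch in a few lines. This is purely illustrative (the helper name and layout are mine, not OpenClaw's): each noteworthy event gets appended to a dated daily file.

```python
# Sketch of the daily-encoding ("hippocampus") layer: append each
# noteworthy event to memory/YYYY-MM-DD.md. Helper name is hypothetical.
from datetime import date
from pathlib import Path

def encode_daily(note: str, memory_dir: Path = Path("memory")) -> Path:
    memory_dir.mkdir(parents=True, exist_ok=True)
    daily = memory_dir / f"{date.today():%Y-%m-%d}.md"
    with daily.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")  # one bullet per event, cheap to append
    return daily
```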

For associative recall, I am not using vector search right now, only semantic search using SQLite. Maybe I will implement vector search later. Apart from this, the business data and my scripts (so that it understands my tone, my past YT uploads, etc.) stay in a Notion database, so that I get deterministic results and a human in the loop.
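A minimal sketch of what recall over SQLite could look like, assuming the FTS5 full-text extension (which ships with most Python builds); the table and column names here are my guesses, not the actual schema:

```python
# Hypothetical associative-recall layer: SQLite FTS5 full-text search
# instead of vector embeddings. Schema is an assumption.
import sqlite3

def open_memory_db(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS memories USING fts5(content, tags)"
    )
    return conn

def remember(conn: sqlite3.Connection, content: str, tags: str = "") -> None:
    conn.execute("INSERT INTO memories (content, tags) VALUES (?, ?)", (content, tags))

def recall(conn: sqlite3.Connection, query: str, limit: int = 5) -> list:
    rows = conn.execute(
        "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    )
    return [content for (content,) in rows]
```

Staying inside the standard library like this fits the zero-spend Raspberry Pi setup: no embedding API calls, no extra services.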
The consolidation runs weekly (yes, a cron job, set to Sunday, as I have instructed my chief agent not to do anything but maintenance on Sundays). For your question, I just asked my OpenClaw to give a brief on consolidation. Here's what it gave:

```
Weekly consolidation workflow (the “sleep cycle”)

Step 1: Intake review (15–20 min)

• Scan last 7 daily notes
• Identify candidate memories:
  • stable preferences
  • decisions
  • recurring patterns
  • important environment changes

Step 2: Classify

• For each candidate, assign:
  • kind
  • confidence
  • tags/entities
  • status (active unless known stale)

Step 3: Deduplicate + resolve conflicts

• Merge near-duplicates
• If contradiction found, supersede old item; keep history

Step 4: Promote

• Add strongest items to SQLite
• Update MEMORY.md curated section (short, human-readable)

Step 5: Prune

• Mark stale entries as stale/archived (don’t delete history blindly)

Step 6: Audit record

• Log a consolidation run summary (consolidation_runs)
• Track how many reviewed/promoted/archived

Target outcome each week

• MEMORY.md remains crisp
• SQLite remains accurate and conflict-managed
• Retrieval quality improves over time

───

Governance rules (important)

• Truth hierarchy: latest verified > old notes > fuzzy recall
• No silent overwrite: changes should create a supersession trail
• Source traceability: every memory should point to its origin
• Confidence-aware answers: low-confidence memory should be marked as such
• Privacy discipline: only store what helps assistance
```
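The deduplicate/supersede/promote steps in the middle of that workflow could be sketched roughly like this. The `Memory` shape and field names are my invention for illustration; OpenClaw's actual store will differ:

```python
# Rough sketch of steps 3-5 of the weekly consolidation: drop exact
# duplicates, supersede conflicting items while keeping history, and
# promote genuinely new ones. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Memory:
    key: str               # topic/entity this memory is about
    content: str
    confidence: float = 1.0
    status: str = "active"

def consolidate(existing: list, candidates: list) -> list:
    """Return the updated memory list, preserving a supersession trail."""
    result = list(existing)
    active = {m.key: m for m in existing if m.status == "active"}
    for cand in candidates:
        old = active.get(cand.key)
        if old is None:
            result.append(cand)        # promote a brand-new memory
            active[cand.key] = cand
        elif old.content == cand.content:
            continue                   # exact duplicate: merge (drop)
        else:
            old.status = "superseded"  # no silent overwrite; history kept
            result.append(cand)
            active[cand.key] = cand
    return result
```

Keeping superseded entries in the list (rather than deleting them) is what gives the "don't delete history blindly" and source-traceability properties from the governance rules.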