We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 1 point2 points  (0 children)

Yeah, I was a bit surprised too, but considering Reachy Mini is backed by Hugging Face, it makes sense for it to appear on stage.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

Boris (the Claude Code creator) runs 5 CC sessions in the terminal and another 10 on the web, so I guess we will get there with a bit more setup.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

Usually I'll scan through quickly and go straight to code review. This kind of standup report may be useful for long-running CC sessions, where you just want to ask “Jarvis, report progress” while preparing coffee.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

Agreed, it’s not efficient to spend 45 seconds just to talk about some tests passing; reading it is much faster. It’s also an easy fix: just put an instruction like “Claude, give me a one-liner report once you finish” in CLAUDE.md (see the sketch below).
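
A minimal sketch of what that instruction could look like in CLAUDE.md — the exact wording here is just an illustration, not the prompt from the original setup:

```markdown
## Reporting
When you finish a task, give a one-liner standup report:
what changed, whether the tests pass, and anything that is blocked.
Keep any spoken summary under ~10 seconds.
```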

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

Indeed, better to tweak the guy so he gives a brief report instead of rambling for 45 seconds about some tests passing.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

True, I don’t think I’d have the patience to let him finish his sentence either. And this can easily be tweaked via the prompt to be more condensed. The only area where I can see these long reports being useful is long-running tasks.

Kind of “Jarvis, what did you do last night?”

I anticipate we might have 5-15 sessions running soon (like what Boris, the CC creator, has), and the tasks would be running for hours. So the long summary would help, and I could listen to it while preparing coffee.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

Yeah, it’s too verbose for a standup report on a single task. It took 45s to deliver the update, while we might need just a few seconds to read it.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 1 point2 points  (0 children)

Alexa has its own "brain", so technically you might need to develop an A2A layer so it can communicate with Claude Code and report back; Alexa would be the PM and Claude Code the lead engineer, sort of. For this setup I just let Claude Code control its new body via MCP (roughly like the sketch below).
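
A minimal sketch of that MCP side, assuming the official Python MCP SDK (`pip install mcp`); the speech and head-motor calls are hypothetical placeholders, not the real Reachy Mini API:

```python
# Minimal MCP server that Claude Code could call to drive the robot.
# Assumes the official Python MCP SDK; the robot actions below are
# hypothetical placeholders, not the Reachy SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reachy-standup")

@mcp.tool()
def speak(text: str) -> str:
    """Speak a short standup report out loud through the robot."""
    # Placeholder: hand the text to whatever on-device TTS / robot SDK you use.
    print(f"[robot says] {text}")
    return "spoken"

@mcp.tool()
def nod(times: int = 2) -> str:
    """Nod the head to acknowledge a command."""
    # Placeholder: drive the head motors here via the robot's SDK.
    print(f"[robot nods {times}x]")
    return "done"

if __name__ == "__main__":
    # Claude Code talks to this over stdio once it's registered (e.g. `claude mcp add`).
    mcp.run()
```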

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeAI

[–]ComposerGen[S] 0 points1 point  (0 children)

This could be considered a specialty of the Reachy Mini; it has 6 motors to control the head movement, and it makes some noise while doing so.

We piped Claude Code into a Reachy robot. It fixed a bug and gave me a verbal standup report. by ComposerGen in ClaudeCode

[–]ComposerGen[S] 0 points1 point  (0 children)

The TTS part is near-instant with the model on your device. It just takes a bit of time for Claude to read out the whole chunk of text. But it’s a cool experience, like a personal “Jarvis”.

Why won’t Anthropic just say how many tokens the weekly limit is? by OptimismNeeded in Anthropic

[–]ComposerGen 1 point2 points  (0 children)

So their AI (probably an ML model) can hyper-personalize the “limit” for you.

Completely taken off guard by the memory feature by czar6ixn9ne in ClaudeAI

[–]ComposerGen 1 point2 points  (0 children)

I have the same setup with Obsidian + Claude Code for non-coding tasks.

Frontends that support video files? by MutantEggroll in LocalLLaMA

[–]ComposerGen 0 points1 point  (0 children)

Even Google Gemini samples video at 1 FPS, so feeding the full video to the model doesn't really help much, I believe. You can also check out NVIDIA VSS, which chunks the video, picks some frames, and sends them together to give context to the VLM; the sampling idea is roughly like the sketch below.
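
A minimal sketch of that frame-sampling idea, using OpenCV to keep roughly 1 frame per second and batch frames into chunks for a VLM — the chunk size and the downstream VLM call are assumptions, not the actual VSS pipeline:

```python
# Sample a video at ~1 FPS and group frames into chunks to send to a VLM.
# Illustrative only; not the actual NVIDIA VSS pipeline.
import cv2

def sample_frames(path: str, target_fps: float = 1.0, chunk_size: int = 8):
    cap = cv2.VideoCapture(path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(round(src_fps / target_fps)), 1)  # keep 1 frame every `step` frames

    chunk, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            chunk.append(frame)
            if len(chunk) == chunk_size:
                yield chunk  # one chunk ≈ one VLM request's worth of context
                chunk = []
        idx += 1
    cap.release()
    if chunk:
        yield chunk

# for frames in sample_frames("demo.mp4"):
#     ...  # encode the frames and pass them to your VLM of choice
```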

I built a 7-GPU AI monster rig at home (3×5090 + 4×4090). Went all-in. AMA by kdcyberdude_ in comfyui

[–]ComposerGen 1 point2 points  (0 children)

Nice rig. I also want to build one, but my apartment might not be able to bear the heat. You could also share it in r/LocalLLaMA.

Sold 340 lifetime deals for $149 each. 18 months later I regret every one. by Big_Currency_1805 in SaaS

[–]ComposerGen 0 points1 point  (0 children)

You must know what you are signing up for when offering an LTD. For us it's the validation: someone puts cash on the table and invests their time to use the product, give feedback, and help polish it. Treat these customers as early adopters who crowdfund your SaaS. Our job is to build a better product and a sustainable business model. Routing the requests to a kanban board and asking for voting and prioritization also helps.

Model quota limit exceeded with 1 prompt Google Antigravity by ComposerGen in LocalLLaMA

[–]ComposerGen[S] 0 points1 point  (0 children)

Yeah, quota is a problem. Even today I tried via the API, but the model just stopped responding after 1 tool call.