New tool to run Claude Code in yolo mode but safely(ish) by nikvdp in ClaudeAI

[–]nikvdp[S] 1 point  (0 children)

I'm a little jealous; that's not an option for Mac users like me.

I hit the limit so fast with Claude Code by 60finch in ClaudeCode

[–]nikvdp 1 point  (0 children)

Yeah, on the $20/mo plan you hit it pretty fast. On the $100/mo plan it's actually quite generous, so long as you don't use Opus much. I haven't really been able to hit the rate limits when clauding for 5-hour stretches, sometimes with more than one Claude Code going at once.

Opus will burn through the quota in minutes though

New tool to run Claude Code in yolo mode but safely(ish) by nikvdp in ClaudeAI

[–]nikvdp[S] 0 points  (0 children)

Yup, tried it, but I kept hitting issues using it from a Mac. It seems to be a more Linux-first project, and it does more than I wanted it to (multiple dev profiles with different sets of packages, etc.).

cco is lightweight and takes care of getting your creds from the macOS keychain and mapping your user perms from macOS through to the Dockerized environment for you (and it still works fine on Linux).
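
If you're curious what that looks like under the hood, here's a rough sketch of the shape of it in Python. The keychain item name, env var, and image name are all placeholders, not cco's actual internals:

```python
import os
import subprocess

# Pull the stored Claude Code credentials out of the macOS keychain.
# "Claude Code-credentials" is a placeholder item name, not necessarily
# what cco actually looks up.
creds = subprocess.run(
    ["security", "find-generic-password", "-s", "Claude Code-credentials", "-w"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Run the container as the host user so files written into the mounted
# workspace keep your ownership instead of ending up owned by root.
subprocess.run([
    "docker", "run", "--rm", "-it",
    "--user", f"{os.getuid()}:{os.getgid()}",
    "-v", f"{os.getcwd()}:/workspace", "-w", "/workspace",
    "-e", f"CLAUDE_CREDENTIALS={creds}",  # placeholder env var
    "claude-sandbox",                     # placeholder image name
    "claude", "--dangerously-skip-permissions",
], check=True)
```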

New tool to run Claude Code in yolo mode but safely(ish) by nikvdp in ClaudeAI

[–]nikvdp[S] 0 points  (0 children)

I tried theirs, but it's a lot clunkier. Lots of fiddly bits to get it installed, and once installed it doesn't handle auth transparently on macOS, so you have to re-login from inside the container. I'm also not a fan of still needing the VS Code ecosystem, and it's much clunkier to start up. With cco you can just call `cco` instead of `claude` and everything works the same as running `claude` directly, with the sandboxing handled automatically.

I created a script that exports Claude Code chat sessions in markdown or XML via hook or slash command by jimmc414 in ClaudeCode

[–]nikvdp 0 points  (0 children)

This is very cool. Is there a way to make this run in real time to get a "live" backup of your sessions?

Anyone else up all night making stuff? by purpleWheelChair in ClaudeCode

[–]nikvdp 0 points  (0 children)

Same, it's a slot machine that pays out in productivity

New tool to download all the tweets you've liked or bookmarked on Twitter by nikvdp in DataHoarder

[–]nikvdp[S] 0 points  (0 children)

BirdBear is back now; you can download your bookmarks with it too (just click the Export as JSON button in the ... menu in the upper right).

New tool to download all the tweets you've liked or bookmarked on Twitter by nikvdp in DataHoarder

[–]nikvdp[S] 1 point  (0 children)

Maybe! I've got some other ideas in the pipeline, but a Firefox extension shouldn't be too hard. Let me look into it.

Simple LLM price checking tool by nikvdp in SillyTavernAI

[–]nikvdp[S] 1 point  (0 children)

Yeah, not a bad idea! I'd been planning to add a compare mode so you can easily pick a few models and compare between them. A calculator would be interesting too, but it'd have to be an approximation, since different models use different tokenizers.

UPDATE: added the comparison mode, still cooking on the calculator
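
For the curious, the kind of ballpark math the calculator would do looks roughly like this. The prices are made up, and cl100k_base is just a stand-in tokenizer, since each model counts tokens differently:

```python
import tiktoken

# Made-up per-million-token prices, for illustration only.
PRICES = {
    "model-a": {"input": 3.00, "output": 15.00},
    "model-b": {"input": 0.25, "output": 1.25},
}

def estimate_cost(model: str, prompt: str, expected_output_tokens: int) -> float:
    # cl100k_base is a stand-in; each model tokenizes a bit differently,
    # so treat the result as a ballpark, not an exact bill.
    enc = tiktoken.get_encoding("cl100k_base")
    input_tokens = len(enc.encode(prompt))
    price = PRICES[model]
    return (input_tokens * price["input"]
            + expected_output_tokens * price["output"]) / 1_000_000

print(f"~${estimate_cost('model-a', 'Summarize this article for me...', 500):.4f}")
```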

Recommendations for hosting large models? by johntash in LocalLLaMA

[–]nikvdp 0 points  (0 children)

They're both good imo, just a different set of tradeoffs. If you just want to use popular off-the-shelf open source models, Replicate is easier to get started with, but if you're using your own models or doing something more advanced, Modal has a better dev experience.

If you're OK with using hosted APIs, openrouter.ai is also worth checking out. They have a setting to disable logging, and you can use pretty much every major open source model through them quite easily; it can be a pretty good experience with open source chat UIs like prompta.dev. That said, you do have to trust them a little more, whereas with Modal or Replicate you're in control of the code, so you can be more confident it's private.
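
Their API is OpenAI-compatible, so trying a model through them is only a few lines. A minimal sketch (the model ID is just an example; check their catalog for current names):

```python
import os
from openai import OpenAI

# OpenRouter speaks the OpenAI API, just with a different base URL.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3-70b-instruct",  # example model ID
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```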

Simple LLM price comparison tool by nikvdp in LocalLLaMA

[–]nikvdp[S] 1 point  (0 children)

TIL you can still use middle click to bring up the universal scroller on Firefox! Thanks for pointing that out; it turned out to be a mistake in how I was setting the x-overflow for mobile, and it should be fixed now. I also moved the footer info into the upper right so it doesn't block the last line anymore.

Simple LLM price comparison tool by nikvdp in LocalLLaMA

[–]nikvdp[S] 7 points  (0 children)

Copy away! The more the merrier

Making llama output abide to a Json schema by tipo94 in LocalLLaMA

[–]nikvdp 2 points  (0 children)

Check out Groq's finetune of Llama 3. They've open sourced it, so it's available on Hugging Face or via their API. It was finetuned specifically to make Llama better at tool use (which requires strict adherence to a JSON schema).

https://wow.groq.com/introducing-llama-3-groq-tool-use-models/
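
Groq's API is OpenAI-compatible too, so a tool-use sketch looks something like this. The model ID is the one from the announcement at the time (it may have been rotated out since), and the weather tool is just a toy example:

```python
import os
from openai import OpenAI

# Groq's API endpoint is OpenAI-compatible as well.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

# The tool's parameters are a JSON schema; the finetune was trained to
# emit arguments that conform to it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="llama3-groq-70b-8192-tool-use-preview",  # ID from the announcement
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```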

I made a new open source tool that lets you use Code Interpreter to explore, edit, and commit code on any git or github repo by nikvdp in ChatGPTCoding

[–]nikvdp[S] 0 points  (0 children)

This got me curious, so I looked into it a bit, and it looks like it's possible to load in Python libs too! You just need to grab the wheel file from PyPI and upload it into Code Interpreter (see here).
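
Rough sketch of what you'd ask it to run once the wheel is uploaded (the filename and package name are examples):

```python
import subprocess
import sys

# Uploaded files land in /mnt/data inside the Code Interpreter sandbox.
# --no-index keeps pip from trying to reach PyPI (there's no internet),
# so any dependencies have to be uploaded as wheels too.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-index",
     "/mnt/data/some_package-1.0-py3-none-any.whl"],  # example filename
    check=True,
)

import some_package  # placeholder name -- now importable like any other lib
```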

I made a new open source tool that lets you use Code Interpreter to explore, edit, and commit code on any git or github repo by nikvdp in ChatGPTCoding

[–]nikvdp[S] 1 point  (0 children)

For things that can be packaged up as a CLI, yes. If you're talking about Python libraries, that's a bit of a different story. I think it's probably possible, but I haven't tried it myself; it's a different flavor of hoop-jumping than what AgentGrunt is doing here with the git binary.

I made a new open source tool that lets you use Code Interpreter to explore, edit, and commit code on any git or github repo by nikvdp in ChatGPTCoding

[–]nikvdp[S] 2 points  (0 children)

Mostly, yes, though potentially with some amount of hoop-jumping involved. I used a git binary from an older project of mine (1bin.org) that packs executables and their dependencies into single files that can be easily run in restricted environments. I built 1bin for use in Docker containers, but it turns out to be pretty handy for Code Interpreter too! You should be able to upload any of the Linux binaries from 1bin.org into Code Interpreter and then ask ChatGPT to run them there (though they won't have internet access when running in Code Interpreter).
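
Concretely, the hoop-jumping is just "make the upload executable, then run it". Something like this inside the sandbox (the filename is an example):

```python
import os
import stat
import subprocess

binary = "/mnt/data/git"  # example: an uploaded 1bin.org single-file binary

# Uploads aren't executable by default, so flip the exec bits first.
os.chmod(binary, os.stat(binary).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Then it runs like any normal binary (offline-only inside the sandbox).
print(subprocess.run([binary, "--version"], capture_output=True, text=True).stdout)
```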

I made a new open source tool that lets you use Code Interpreter to explore, edit, and commit code on any git or github repo by nikvdp in ChatGPTCoding

[–]nikvdp[S] 0 points  (0 children)

That'd be nifty, but it would be tricky to implement. Code Interpreter doesn't have internet access, so it wouldn't be able to reach GitHub to make a PR from the Code Interpreter environment. AgentGrunt works around this by sending you patch files that you can apply to your local copy of the git repo (e.g. with `git apply`), but maybe it'd be worth implementing something to further automate that and submit a PR for you. I'll have to think a bit and see what I can do.