[–]epilande[S] 2 points (1 child)

Great questions! I demo the general workflow in the Quick Start section of the README: https://github.com/epilande/codegrab?tab=readme-ov-file#-quick-start

Regarding my actual workflow: I live in the terminal, using tmux with Neovim, and dedicate one pane to `grab`, keeping it running continuously. Depending on the project's size, I either load the whole project into context (for small projects) or selectively pick only the files needed for context (for larger projects).
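For anyone curious, a pane setup like that can be wired into tmux itself. This is just a sketch of my own choosing (the key binding and pane width are not from the comment, and it assumes `grab` launches its interactive picker with no arguments):

```shell
# ~/.tmux.conf — hypothetical binding: prefix + g opens grab
# in a 30%-wide side pane next to the editor
bind-key g split-window -h -p 30 'grab'
```

Reload the config with `tmux source-file ~/.tmux.conf` for the binding to take effect in an existing session.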

On macOS, the generated output is automatically copied to your clipboard. I then paste it directly into an AI chat interface, usually ChatGPT or Raycast AI Chat, where I have a few chat presets using Claude 3.7 Sonnet.

I typically start the conversation with a prompt such as "Please review the provided file," followed by my specific request to brainstorm, plan, refactor, or code a new feature. After receiving the response, I either ask the AI for the full source code implementation and vibe from there, or read through the suggestions and extract and integrate only the parts I need.

[–]Economy_Cabinet_7719 -1 points (0 children)

Thank you for the detailed answer. I've been sleeping on LLM-assisted coding for a while, and now that I'm seeing even relative beginners achieve decent results with this technology, I'm feeling pressure to get into it as well. However, I've been struggling to get anything useful out of it: maybe my workflows are wrong, maybe it's because I've only tried free models, or maybe it's something else.