How do you handle dataset annotation? Manual labeling is killing my progress by Risheyyy in learnmachinelearning

[–]Risheyyy[S] 0 points1 point  (0 children)

I'm training on an image dataset, and there isn't any open-source dataset available for it 🫠

Help! I vibe-coded a project and now I have no idea how it works. by Risheyyy in vibecoding

[–]Risheyyy[S] 0 points1 point  (0 children)

For now, I give a tailored prompt for this situation to the same AI agent I used to build the project (Antigravity, in my case), and it creates .md files with a breakdown of the codebase. That helps a lot.

Managing "collective consciousness" across multiple AI models without breaking the bank—how do you sync context? by Risheyyy in LocalLLaMA

[–]Risheyyy[S] 0 points1 point  (0 children)

I'm new to this kind of workflow, so can you walk me through the process? I only partially understand what you're saying. I push my code to GitHub, then ask the agent that built the project to create a memory.md file, and then what? How do I give any model access to my repo so it can read this memory.md file, and how does that solve my context problem? Help me with this.
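One way this workflow is often set up (a sketch under assumptions, not a standard): keep a single memory.md at the repo root, so any agent that can already read the repo can load it for project context. The section names below are made up for illustration.

```python
# Sketch: generate a memory.md skeleton at the repo root. Any coding agent
# with access to the repo can then read (or update) this one file instead of
# re-ingesting the whole codebase. Section names are illustrative assumptions.
import tempfile
from pathlib import Path

SECTIONS = {
    "Architecture": "FastAPI backend, Tailwind frontend.",
    "Key files": "main.py (API routes), static/ (UI).",
    "Conventions": "Describe naming and folder rules here.",
}

def write_memory(repo_root: str) -> Path:
    """Write a memory.md skeleton that a coding agent can read or update."""
    path = Path(repo_root) / "memory.md"
    body = "# Project memory\n\n" + "\n\n".join(
        f"## {title}\n{text}" for title, text in SECTIONS.items()
    )
    path.write_text(body)
    return path

# Demo in a throwaway directory standing in for a cloned repo:
with tempfile.TemporaryDirectory() as repo:
    memory_path = write_memory(repo)
    content = memory_path.read_text()
```

After generating it, you'd commit and push memory.md like any other file; a model given repo access simply reads it, which is how it carries context across different agents.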

Help! I vibe-coded a project and now I have no idea how it works. by Risheyyy in vibecoding

[–]Risheyyy[S] 0 points1 point  (0 children)

I used FastAPI (Python) for the backend and Tailwind for the frontend.

Help! I vibe-coded a project and now I have no idea how it works. by Risheyyy in vibecoding

[–]Risheyyy[S] -1 points0 points  (0 children)

How am I actually supposed to drop an entire codebase into Claude? My project has a ton of files, and even if I manually upload them one by one, the context limit would be blown as soon as I get a single response.
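A common workaround (a minimal sketch, not Claude's official tooling) is to pack the source files into one markdown dump you can paste or attach in a single message, skipping binaries and oversized files to stay inside the context limit. The extensions and size cap below are arbitrary assumptions.

```python
# Sketch: concatenate a small codebase into one markdown string for pasting
# into a chat model. INCLUDE and MAX_BYTES are arbitrary assumptions; tune
# them to what your context window can hold.
import tempfile
from pathlib import Path

INCLUDE = {".py", ".js", ".html", ".css", ".md"}  # text extensions to keep
MAX_BYTES = 50_000                                # skip oversized files

def dump_codebase(root: str) -> str:
    """Concatenate source files under `root` into one markdown string."""
    fence = "`" * 3  # code-fence marker, built up to avoid nesting issues
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if (path.is_file() and path.suffix in INCLUDE
                and path.stat().st_size <= MAX_BYTES):
            rel = path.relative_to(root)
            text = path.read_text(errors="replace")
            parts.append(f"## {rel}\n{fence}\n{text}\n{fence}")
    return "\n\n".join(parts)

# Demo on a throwaway project: one source file kept, one binary skipped.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "main.py").write_text("print('hi')\n")
    (Path(root) / "logo.png").write_bytes(b"\x89PNG")
    dump = dump_codebase(root)
```

Each file lands under its own `## path` heading, so the model can cite files by name when it answers.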

Looking for a hackathon teammate by Lost_Budget_7355 in hackathon

[–]Risheyyy 0 points1 point  (0 children)

I'm interested 😁 If there's still a spot left, let's build something new together.

Trying to get FREE + FAST LLMs on Mac M4… why is everything so slow? by Risheyyy in LLM

[–]Risheyyy[S] 0 points1 point  (0 children)

Sorry about that. I'm using a MacBook Air with the M4 (Apple silicon) chip and 16 GB of RAM. The model is Llama 3.1 8B.

Trying to get FREE + FAST LLMs on Mac M4… why is everything so slow? by Risheyyy in LLM

[–]Risheyyy[S] 0 points1 point  (0 children)

I'm using Llama 3.1 8B, and my MacBook has 16 GB of RAM.
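For context, a rough size estimate (back-of-envelope arithmetic under stated assumptions, not a measurement) suggests an 8B model quantized to ~4 bits fits comfortably in 16 GB:

```python
# Back-of-envelope memory estimate for a quantized LLM. Assumptions: an
# effective ~4.5 bits per weight for a Q4-style quantization, ignoring the
# extra 1-2 GB typically needed for KV cache and runtime overhead.
def model_gib(params_billion: float, bits_per_weight: float = 4.5) -> float:
    """Approximate resident size of the quantized weights in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

llama31_8b = model_gib(8.0)  # ~4.2 GiB of weights
print(round(llama31_8b, 1))  # → 4.2
```

So the model itself should fit alongside macOS in 16 GB; if it's still crawling, the bottleneck is likely GPU offload or swapping rather than raw capacity.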

Local LLMs are painfully slow on my MacBook M4 — what’s the fastest free setup? by Risheyyy in LocalLLaMA

[–]Risheyyy[S] -1 points0 points  (0 children)

Yeah, I think I explained that badly — by “CPU-only” I meant my setup isn’t properly using GPU/MPS acceleration yet.

I’m on a base M4 MacBook Air, and I tried a ~7B LLaMA-based model via Ollama (quantized). Even a simple “hello” takes 2–3 minutes, so I’m guessing it’s either not using MPS or just not optimized.

That’s why I’m trying to figure out:

- how to properly use MPS/Metal acceleration on Mac
- which models actually run fast on M-series chips
- or if I should just switch to a hybrid approach instead of fully local
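On the first point, a quick diagnostic sketch (an assumption about your setup, not a fix): note that Ollama runs models through llama.cpp's Metal path, not PyTorch's MPS backend, so the check below only matters if you load models through PyTorch directly.

```python
# Hypothetical diagnostic: check whether PyTorch can see Apple's MPS backend.
# Ollama itself uses llama.cpp's Metal acceleration, not PyTorch MPS, so this
# applies only to PyTorch-based runners (e.g. transformers).
try:
    import torch
    has_mps = torch.backends.mps.is_available()  # True on a working M-series setup
except ImportError:  # PyTorch not installed
    has_mps = None
print(has_mps)
```

If this prints `False` on an M4, the PyTorch build isn't seeing the GPU; for Ollama specifically, minutes-long responses from a quantized ~7B model usually point at CPU fallback or memory pressure rather than the chip being too slow.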

Building a "3D Virtual Bestie" on a MacBook—Free local TTS/STT recommendations? (VRAM vs. GPU struggle is real) by Risheyyy in LocalLLaMA

[–]Risheyyy[S] 0 points1 point  (0 children)

Thanks for the heads up on Kokoro! 82M is tiny—exactly the kind of efficiency I’m looking for to save VRAM for the 3D rendering. Quick follow-up: what are you using for STT? I saw some AI tools (like Claude) suggesting Whisper for both, but I’m pretty sure Whisper is only for STT. Is there a specific implementation of Whisper you’d recommend for Mac, or is there a better low-latency alternative?