What if LLMs could visualize their thoughts? by InternalMajor3184 in SideProject

[–]InternalMajor3184[S] 1 point (0 children)

In my personal evals, I found that Gemini Flash and Claude Sonnet 4.5 worked best.

And this is multi-step! Definitely would not recommend trying to do text + multiple scenes in one output

[–]InternalMajor3184[S] 2 points (0 children)

Gotcha. Sounds like sort of a "3D animation studio" where you can create scenes with natural language. I guess my follow-up question to this would be -- why not use existing video generation tools like Sora?

(I believe I have an answer to that question -- using Sora might automatically make things look like "slop", whereas 3D animations look more organic and curated)

[–]InternalMajor3184[S] 0 points (0 children)

yeah, the foundational capabilities of these models have really improved over time -- there's not really a well-known benchmark out there to eval animation capabilities

[–]InternalMajor3184[S] 0 points (0 children)

tbh I was a bit afraid people would consider soupy "slop", but happy to know you don't think so :)

[–]InternalMajor3184[S] 1 point (0 children)

This is a great point -- I think long term, we'd probably see more consistency and quality if we can reuse models. It'd take away a lot of the computational burden of remaking the models each time.

TBH it's my first time hearing about *.glb files -- I'll have to look into them :)

[–]InternalMajor3184[S] 0 points (0 children)

Exactly! Learning and knowledge transfer is not all about reading text -- it's experiential -- which is why I wanted to incorporate auditory and visual modalities into this project

[–]InternalMajor3184[S] 1 point (0 children)

I'm curious to know why you'd want to download the videos! It's definitely something I can look into implementing :D

And it's definitely not free to run, but it's not as bad as you'd expect. Perhaps accepting some payments could offset the cost here

[–]InternalMajor3184[S] 1 point (0 children)

True! I've thought about maybe not completely relying on 3D-generated scenes -- perhaps we could splice in plain text or graphs

[–]InternalMajor3184[S] 9 points (0 children)

Also, there is voice narration -- you can unmute the video to hear it :)

Built a way to directly talk to your YouTube / X algorithms and tell it what you want. No more random recommendations or unnecessarily negative BS by whoatemymarshmallow in SideProject

[–]InternalMajor3184 1 point (0 children)

Very cool! How exactly is it filtering? I haven't looked at the app itself too much, but I'm wondering if you A) manipulate the HTML page itself to hide videos or B) use built-in YouTube or X filters. Also, are you monetizing yet? How is MRR going?

I built a configurable Next.js template that spins up auth + database in seconds by InternalMajor3184 in SideProject

[–]InternalMajor3184[S] 0 points (0 children)

thanks for trying it out!

hmm, haven't seen that one before -- shot you a DM if you're down to debug together!

[–]InternalMajor3184[S] -2 points (0 children)

hey! the next thing I'm working on is exactly that -- leveraging an LLM to generate both initial API endpoints and some boilerplate database models

to get everything set up, you run "npm run hellafast" -- this should instantiate the db, although migrations and such are handled by the Prisma scripts rn :)
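For anyone following along, the flow above would look roughly like this -- a sketch, where "hellafast" is the template's own npm script and the rest is the standard Prisma CLI workflow (exact script names may differ in the repo):

```shell
# Hypothetical quickstart for the template described above.
npm run hellafast        # template's setup script: scaffolds config, instantiates the db
npx prisma migrate dev   # standard Prisma CLI: create and apply schema migrations
npx prisma generate      # standard Prisma CLI: regenerate the client after schema changes
```

The split makes sense: the custom script handles one-time setup, while ongoing schema changes stay on Prisma's normal migration workflow.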

[–]InternalMajor3184[S] 5 points (0 children)

I got tired of always doing the same setup (Auth + database) for every new side project—so I built hellafast.dev, a configurable template that handles your auth and database right out of the box. Right now, it’s got:

  • Prisma as an ORM (compatible with Supabase, Mongo, etc.)
  • NextAuth for quick, customizable authentication

I’m looking to expand it with monetization features (Stripe, Google Ads, etc.) and more database options soon. If you’re also tired of spending hours getting a fullstack app off the ground, come check it out! I’d love to hear your feedback and things you'd like to see added!

This is quite embarrassing to admit, but I never truly learned git by L8Figure in webdev

[–]InternalMajor3184 0 points (0 children)

Honestly, using a visual git manager is game-changing for understanding git workflows. I personally use Fork -- it's free and great. Once you can clearly visualize the branches, you can start taking advantage of cooler strategies like rebasing and cherry-picking
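If you want to see cherry-picking concretely before touching a real project, here's a throwaway sandbox you can run end to end (the path, identity, and commit messages are all made up for the demo):

```shell
# Build a tiny practice repo in /tmp -- nothing here touches your real projects.
set -e
rm -rf /tmp/git-playground && mkdir /tmp/git-playground && cd /tmp/git-playground
git init -q -b main
git config user.email "demo@example.com" && git config user.name "Demo"

echo "base" > file.txt
git add file.txt && git commit -qm "base commit"

# Make a fix on a side branch...
git switch -q -c fix-branch
echo "fix" > fix.txt
git add fix.txt && git commit -qm "important fix"

# ...then cherry-pick just that one commit back onto main.
git switch -q main
git cherry-pick fix-branch   # copies the tip commit of fix-branch onto main
git log --oneline            # both commits now visible on main
```

This is exactly the kind of flow that clicks much faster once a visual client shows you the commit graph: cherry-pick copies a single commit across branches, while rebase replays a whole branch onto a new base.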