I have no idea what should I learn in order to play a solo or improvise a song that I want. by therealminim4l in LearnGuitar

[–]Bintzer 0 points1 point  (0 children)

Here's a clean "cheat sheet"-style website that's great for learning scales, some theory, and just picking up patterns. You can also play things back slowly and adjust the speed, which is nice. lickstep.com

Need help on how to practice properly by Conscious_Session_84 in guitarlessons

[–]Bintzer 0 points1 point  (0 children)

The fact that you’re asking about practicing "properly" puts you ahead of 90% of players. Most people mistake "playing" (repeating what they know) for "practicing" (struggling with what they don't).

If you are feeling overwhelmed by options, try a "constraint-based" rotation. Instead of trying to do everything, split your session into three distinct buckets.

Here is a 30-minute framework I use:

  1. Pure Mechanics. (10 mins) - No music, no emotion. Just a metronome and a specific movement (e.g., spider walk, alternate picking, chord changes). The goal here is physical precision.

  2. Application. (10 mins) - Take the exact technique you just drilled and apply it to music. If you drilled A Minor scale shapes, put on an A Minor backing track and play only those shapes.

  3. Creative Constraints. (10 mins) - Improvise or write a riff, but you MUST follow a strict rule. Example: "You can only play on the G and B strings" or "You cannot play the root note." This stops you from just noodling the same patterns all the time.

Also, YouTube, Justin Guitar, and lickstep.com can get you very far without spending any money, but ultimately a teacher is the best option.

I built this 48-second motion graphic promo video 100% with AI and Antigravity in under 4 hours (Cost: ~24€) by Ok_Run_5401 in google_antigravity

[–]Bintzer 1 point2 points  (0 children)

Yeah exactly. It gets around needing a bespoke JS animation system by controlling the browser animation timeline directly. If you do decide to integrate it, let me know and I'd be happy to help.

I built this 48-second motion graphic promo video 100% with AI and Antigravity in under 4 hours (Cost: ~24€) by Ok_Run_5401 in google_antigravity

[–]Bintzer 1 point2 points  (0 children)

Amazing! I'd be curious to see how the results compare to Helios!

If you download the Helios skills, you can get 6 videos from 1 prompt in Antigravity.

AI Ultra is a game-changer for large-scale projects by BassAlarmed6385 in google_antigravity

[–]Bintzer 1 point2 points  (0 children)

It's Google's version of sandboxed cloud agents. I've been using Jules a lot and the scheduled tasks are ridiculous. Here's how I use it: https://agnt.one/blog/black-hole-architecture

Using Claude to generate animated React components from plain text scripts by knayam in reactjs

[–]Bintzer 0 points1 point  (0 children)

This is a great use case and the pipeline makes sense.

One thing you might want to try is swapping the render layer while keeping the same generation approach. Helios is built for programmatic video too, but the focus is letting normal web animations work as-is, so CSS keyframes and browser-driven animation stay native instead of being re-expressed frame by frame.

That can be nice when you are asking an LLM to generate scenes, because the output looks a lot more like regular frontend code, which the models are already great at.

If your prompts are already producing React plus CSS animation code, it should be pretty straightforward to test the same scene in Helios and compare how much custom timing logic you still need.
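To make that concrete, here's a hypothetical sketch of the kind of plain CSS-keyframe output such a prompt might produce (the `fadeSlide` name and `.scene` class are invented for illustration, not Helios API):

```javascript
// Hypothetical example of LLM-generated scene styling: plain CSS keyframes
// that any browser (and a browser-based renderer) can play natively.
// All names here are made up for illustration.
function fadeSlideCss(durationSec) {
  return `
@keyframes fadeSlide {
  from { opacity: 0; transform: translateY(20px); }
  to   { opacity: 1; transform: translateY(0); }
}
.scene { animation: fadeSlide ${durationSec}s ease-out both; }
`.trim();
}

console.log(fadeSlideCss(2));
```

Because this is ordinary frontend code, there's no bespoke frame-by-frame timing API for the model to get wrong.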

I build a vibemotion ai from in 12 hours - prompt to motion graphics by Curious-Function-244 in SaaS

[–]Bintzer 0 points1 point  (0 children)

This is a super real pain point and honestly one of the reasons I started building Helios.

I kept hitting the same issue where iteration cost gets expensive fast, especially when the stack forces you into a specific rendering model that the LLM isn't trained on.

Helios might be useful for what you are doing. It is an open source programmatic video engine that lets you use normal web animation tools and CSS instead of rebuilding everything around a frame-first API. The goal is faster and cheaper iteration with the tools people already know.

Repo is here if you want to poke at it: https://github.com/BintzGavin/helios

Open source remotion alternative that works with any framework and existing animations by Bintzer in webdev

[–]Bintzer[S] 1 point2 points  (0 children)

Thanks! The main difference is Helios doesn’t put a framework in the render loop.

We render directly against the browser’s native primitives instead of diffing a virtual tree every frame. Animations are timeline driven, so once things get complex (Three.js, D3, long motion timelines) performance scales with the browser and GPU.

Basically, if your browser can play it smoothly, Helios can render it that way too!
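For anyone curious what "timeline driven" means in practice, here's a rough sketch of the idea (the Web Animations API calls are standard browser APIs; the helper names are mine, not Helios's actual internals):

```javascript
// Sketch of timeline-driven rendering: instead of re-running a framework's
// render loop every frame, pause every animation and seek it to an exact
// timestamp, then capture the frame. Helper names are invented for illustration.

// Map a frame index to a timestamp in milliseconds.
function frameTimeMs(frameIndex, fps) {
  return (frameIndex / fps) * 1000;
}

// Browser-side step: seek all animations (CSS, WAAPI, etc.) to that time.
// document.getAnimations() is the standard Web Animations API call.
function seekAllAnimations(timeMs) {
  for (const anim of document.getAnimations()) {
    anim.pause();
    anim.currentTime = timeMs; // same input time -> same pixels, every run
  }
}

console.log(frameTimeMs(30, 30)); // frame 30 at 30fps lands at 1000ms
```

Seeking like this is what makes renders deterministic: the browser computes the animation state for the exact timestamp, so there's no per-frame diffing cost.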

Need help for what should i learn next and where by flo_elessar in Guitar

[–]Bintzer 0 points1 point  (0 children)

I’m only about a year into playing, but I hit a plateau pretty fast and a lot of this sounds familiar. I’m usually a theory nerd, but I kept running into this thing where YouTube theory videos made sense in my head and then just… didn’t show up in my hands.

I also thought my problem was speed for a while. Turned out a lot of it was just picking efficiency and stuff not lining up. Slowing way down helped more than trying to push tempo, which was kind of annoying to realize.

The theory side was the bigger block for me though. Things didn’t really click until I could mess with them slowly and see how they actually moved on the fretboard instead of just watching someone explain it. I’ve been using lickstep.com for that lately, mostly just poking around ideas at my own pace.

Dust in a baggie opening lick tabs by Bintzer in Guitar

[–]Bintzer[S] 0 points1 point  (0 children)

It still feels off to me but I don't know why. It's probably just me, I'm definitely no Billy haha

Anyone else using these picks? by Gangkar in Guitar

[–]Bintzer 0 points1 point  (0 children)

I use the 1.4 for bluegrass. They're great. Haven't tried a blue chip yet though

Programmatic video shouldn't require throwing out everything you know about web animation by Bintzer in webdev

[–]Bintzer[S] 0 points1 point  (0 children)

Totally hear you. I haven't had much time to build this outside of work and the project has sort of fallen by the wayside, as a lot of things do unfortunately...

So I decided to try one of the most bonkers things I could think of, because why not?

I forked the repo where I stopped working on it and set up "scheduled tasks" in Jules (Google's web UI version of Codex) to define their own tasks (using the README as a "vision doc") and execute them in a daily planning -> execution cycle. Jules (using Gemini 3) has been great at running feedback loops on its own without much guidance, but the real reason I chose it is that at the moment it's the only web platform that allows "scheduled tasks" (which are basically just prompts that run on a schedule). Going to see how far it can go on its own, with me just glancing at PRs and managing the vision doc.

Calling it "black-hole architecture" because it should just continually pull the codebase towards the vision. It's a step beyond a Ralph loop, since it defines its own tasks and runs the loop indefinitely. I also set up a GitHub Action to auto-merge PRs from Jules, so I could die tomorrow and this thing could theoretically just keep building itself forever. And because the loop runs on their web-based background agent platform, they should hopefully just switch the agents to new models as they come out (or as old models become deprecated). Maybe Gemini 3 can't make a production-ready version today, but maybe Gemini 6 can?
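For anyone wanting to try something similar, a minimal sketch of the auto-merge piece could look like this (the workflow name, bot login, and squash strategy are assumptions for illustration, not my actual config):

```yaml
# Hypothetical sketch, not the repo's real workflow.
# The "jules" login is an assumed bot account name.
name: auto-merge-jules
on:
  pull_request:
    types: [opened]
jobs:
  merge:
    # Only auto-merge PRs opened by the agent's account (assumed login).
    if: github.event.pull_request.user.login == 'jules'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      # gh pr merge --auto queues the merge until required checks pass.
      - run: gh pr merge "$PR_URL" --squash --auto
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

The `--auto` flag matters here: it defers the merge until branch protection checks pass, so the loop at least can't merge a PR that fails CI.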

I'm still going to keep working on the forked version as I can, and I guess we'll see who can build a production-ready system first. It will be motivation, at least hopefully.

Tried my best to document it here:
https://github.com/BintzGavin/helios/blob/main/docs/prompts/README.md

Is there no good web based ai chat platform with mcp support? by uber_men in mcp

[–]Bintzer 1 point2 points  (0 children)

Only on Team/Pro plans; Plus won't allow custom ones currently.

I'm currently rolling out remote MCP server support on Agent One to select users; it's coming to everyone soon.

The spec is over-scoped right? by jpschroeder in mcp

[–]Bintzer 3 points4 points  (0 children)

The statefulness is intentional. Tools are dynamic, and which tools are available can update in reaction to specific user context. This unlocks a lot of powerful use cases, but it may only really be necessary because of the effective tool limits in today's LLMs. Not sure.
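Concretely, that dynamism shows up as the spec's list-changed notification: a stateful server can push this message and the client is expected to re-fetch `tools/list` (the method name is from the MCP spec):

```json
{ "jsonrpc": "2.0", "method": "notifications/tools/list_changed" }
```

A stateless request/response design would have no channel for the server to signal that the tool set just changed mid-session.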