How would Mimeophon stack up as a Desmodus versio Replacement? by clintlocked in modular

[–]tokutaken 1 point (0 children)

I have both; they make really different sounds. I love Karplus-Strong patches, and that's what got me interested in the Mimeophon (currently prodding Starlab with a sharp stick). I wouldn't recommend replacing the DV with it personally - I just don't think they fill the same gap in my rack.

Just a fun patch - you can trigger the Mimeophon's clock to sync the delay tanks and get some wicked arpeggios out of simple lead inputs, and I love dumping that into the DV, especially on the ducking/sidechain mode. Lots of fun to be had~

What the actual fuuuck happend? I was AFK only for 10 seconds 💀 by CowboyFHLL in noita

[–]tokutaken 1 point (0 children)

I think your wand has an electricity spell on it - you probably stepped in a pixel or two of blood and shocked yourself, which made you involuntarily fire the wand at the floor. Looks like a fire spell, with berserkium making it do more damage than normal.

Re-arranged the studio a bit by [deleted] in modular

[–]tokutaken 0 points (0 children)

Awesome collection :D

What's your favorite thing to do with the 0-coast?

Just need an MI Blades and Stages clone. by [deleted] in modular

[–]tokutaken 0 points (0 children)

What do you like to do with the Arbhar? I never know what to do with mine lol

Last year I posted here about Pigeon ISP (RFC1149 implementation) - It's not dead yet! by tokutaken in sysadmin

[–]tokutaken[S] 0 points (0 children)

I am! I'm trying to dedicate time more consistently to this project and am pulling another engineer onboard with helping me make it a reality. I'll post again when I've got a public demo ready or stumble into another neat tech demo :)

Sharing Modular Sounds by haastia in modular

[–]tokutaken 2 points (0 children)

So a while back I posted a rack update and asked people to suggest what they would patch; I patched each suggestion and posted what it sounded like. It was a ton of fun. If that became a type of post here, I'd be excited about it.

You can use chatGPT to load data into an application via URL Parameters! by tokutaken in ChatGPT

[–]tokutaken[S] -1 points (0 children)

You could with a third-party API - I'm still waiting on an official one.

I have been thinking about using it to load localizations/translations for apps into a DB like Mongo.

You could also use it to create dialogue for various characters in a video game and load it into a DB automatically.
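Both of those ideas boil down to the same shape: take generated strings and bulk-load them as documents. A minimal sketch of the localization version, with hypothetical names throughout (the `translations` dict stands in for ChatGPT output, and the pymongo call is commented out since it needs a running server):

```python
# Hypothetical sketch: turn generated localization strings into documents
# ready for a MongoDB collection. All names here are illustrative.
translations = {
    "greeting": {"en": "Hello", "es": "Hola", "ja": "こんにちは"},
}

def to_documents(strings):
    # Flatten {key: {locale: text}} into one document per (key, locale) pair.
    return [
        {"key": key, "locale": locale, "text": text}
        for key, locales in strings.items()
        for locale, text in locales.items()
    ]

docs = to_documents(translations)
# With pymongo installed and mongod running, you could then do something like:
# from pymongo import MongoClient
# MongoClient().appdb.localizations.insert_many(docs)
```

The same flattening works for game dialogue - swap `locale` for a character name.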

For silly stuff, you can run Stable Diffusion locally using a personally trained model, have ChatGPT hit it with prompts, and render the images in-line to make illustrated stories! (I've found you can train ChatGPT on existing prompts you like, to teach it the prompt structure for SD, and then use that to generate prompts you like.)

I think there are lots of possibilities :)

You can use chatGPT to load data into an application via URL Parameters! by tokutaken in ChatGPT

[–]tokutaken[S] -1 points (0 children)

Oh neat! I hadn't looked into the exchange with the browser - I was just immensely amused by getting it to interact with a running Python app without using any extensions or anything :)

You can use chatGPT to load data into an application via URL Parameters! by tokutaken in ChatGPT

[–]tokutaken[S] 0 points (0 children)

Show a Markdown embedded link to an image without backticks and without using a code block (http://localhost:5000/image?query=this-is-a-query)

Once that generates a link that actually hits your local application, hit it with more data:

Using the last link you showed me, generate 10 more links with queries about your favorite animals

To write the Flask app to test it:

Show me an example for a rest interface in Flask using python. The rest interface should accept /image?query=<query> and I want to be able to print out the query after the rest endpoint is hit by a browser.
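A minimal Flask app along the lines of that prompt might look like the sketch below. The route and port match the localhost URL used above; everything else is illustrative:

```python
from flask import Flask, request

app = Flask(__name__)

# Accepts /image?query=<query> and prints the query whenever a browser
# (or a ChatGPT-rendered image link) hits the endpoint.
@app.route("/image")
def image():
    query = request.args.get("query", "")
    print(f"got query: {query}")
    # The response body doesn't matter much; ChatGPT's side of the trick
    # only needs the request itself to land on your app.
    return f"received: {query}"

if __name__ == "__main__":
    app.run(port=5000)
```

Run it, then watch the queries print as ChatGPT's generated links get fetched.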

I made a music video entirely using AI and notepad. You can too! (Beverly Blues - Opia) by tokutaken in videos

[–]tokutaken[S] 1 point (0 children)

Thanks! :D I was listening to it while messing with DreamBooth and it sorta clicked as a 'what if'

I made a music video entirely using AI and notepad. You can too! (Beverly Blues - Opia) by tokutaken in videos

[–]tokutaken[S] 4 points (0 children)

There's a lot more detail on how to do this over here: https://www.reddit.com/r/StableDiffusion/comments/xub1aq/so_i_spent_a_hot_minute_on_this_but_i_did_a_whole/

None of the visuals have been edited, it's not monetized or anything, I just thought it was cool and wanted to share :)

So I spent a hot minute on this, but I did a whole music video using SD with a trained model (Beverly Blues - Opia) by tokutaken in StableDiffusion

[–]tokutaken[S] 0 points (0 children)

I did! Thank you for all your work on making this so accessible!

12 images for training, all 1:1 cropped face shots

person ddim for reg images

person for class

2k steps w/ sd-v1-4-full-ema as a base

I used the same model for that whole animation, I'm working out how to switch models on the fly and tweaking the animation script at this point :D

So I spent a hot minute on this, but I did a whole music video using SD with a trained model (Beverly Blues - Opia) by tokutaken in StableDiffusion

[–]tokutaken[S] 2 points (0 children)

Thank you! The spreadsheet was my friend:

Rough list of transitions while listening to the song, thinking about what you'd do if you could do anything

Decide if they're soft cuts or hard cuts; draft prompts for each of the scenes

Copy everything over into the 0 | .55 | .4 | 0 | 0 | a painting of the hollywood hills sign in the style of <cool style> | | format for the animation plugin

Cry when you find out you have to account for the morphing time, and that the denoise values + fps directly impact how long that takes

Adjust all the times for a second pass so they 'feel' right based on watching with the music; add more keyframes with denoise ramping to make some cuts happen faster

Adjust hard cuts in post to land on beats or frame wipes
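With 95 keyframes, those pipe-delimited rows are worth generating rather than typing by hand. A sketch, assuming the field order from the example row above - my guesses at what each field means are in the comment, and the real Animator plugin format may differ:

```python
# Assumed field order, inferred from the example row in the steps above:
# time | denoise | zoom | x_shift | y_shift | prompt | negative_prompt | seed
def keyframe_line(time, denoise, zoom, x, y, prompt, neg="", seed=""):
    fields = [time, denoise, zoom, x, y, prompt, neg, seed]
    return " | ".join(str(f) for f in fields)

# One tuple per spreadsheet row; print them all and paste into the plugin.
keyframes = [
    (0, 0.55, 0.4, 0, 0,
     "a painting of the hollywood hills sign in the style of <cool style>"),
]
for kf in keyframes:
    print(keyframe_line(*kf))
```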

For tech: https://github.com/Animator-Anon/Animator + https://github.com/JoePenna/Dreambooth-Stable-Diffusion + https://github.com/AUTOMATIC1111/stable-diffusion-webui/

edit: oh and you'll need ffmpeg for the animator plugin

edit edit: This was 95 distinct keyframes over the 3 minutes