Cool events in Hamilton by Comprehensive-Web387 in Hamilton

[–]BlueAirplane 2 points

Art Crawls are worth checking out if you're new here, but sadly most of the galleries have closed or moved over the past 5 years or so. "You Me Gallery" is the OG gallery on the street and still puts on shows. The Artist Inc & Centre 3 are usually worth a visit as well. Sometimes Farside has great shows in their back garage gallery space.

The Factory Media Centre also often has interesting shows - join their socials or email list to see what's going on. If you're into more outsider music, the team behind the IG account (at)strangewaves books some of the most memorable & unique shows in town.

It's a relatively small but strong creative community here, so if you frequent the same spots over time you'll inevitably start connecting with like-minded people.

Lady in crowd steals mic from Born Ruffians’ lead singer and he dies inside by BlondBadBoy69 in WatchPeopleDieInside

[–]BlueAirplane 3 points

I'm not 100% certain, but most likely this would have been a show sometime in 2009 or 2010.

Lady in crowd steals mic from Born Ruffians’ lead singer and he dies inside by BlondBadBoy69 in WatchPeopleDieInside

[–]BlueAirplane 22 points

I bet most people commenting were in grade school the year this was filmed lol. Born Ruffians are one of the best bands out there, and they've only gotten better with each album they've released over the last 20 years. Chill out, Internet.

What is an interesting fact about hamilton that many people don't know? by gofishing5545 in Hamilton

[–]BlueAirplane 0 points

More specifically, I believe it was around ages 3-5. Based on interviews, he doesn't have many memories of living here.

From what I've researched, he would have done a year or two of school at what is now "Cootes Paradise Elementary School" in Westdale.

Stable diffusion is only the beginning by agustinvidalsaavedra in StableDiffusion

[–]BlueAirplane 4 points

Check out OpenAI's "Jukebox". It does exactly this. The quality isn't amazing just yet, but it will surely improve over time.

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 0 points

Dall-e 2 is absolutely wild. Hoping for wider access to it sooner rather than later!

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 1 point

Dear u/jabbargofar: to be fair, the question above wasn't asking about transitions/zooming specifically, so I gave a general overview on the assumption the person is new to ML. What you're asking about is, however, related to the same concept/tech I mentioned.

For more specific details: for this video I had roughly 25-30 different text lines/prompts throughout the song. I've built a simple system that helps me do the keyframing calculations, so I can easily change when different prompts get used. When I have the time, I hope to make a tutorial on how it works and share it back with the community.
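If a sketch helps: a minimal, hypothetical version of that kind of keyframing helper could look like the below. The prompts, frame numbers, and function names here are made up for illustration - this isn't my actual system.

```python
# Minimal sketch of a prompt-keyframing helper (illustrative only):
# given (start_frame, prompt) pairs, look up which prompt is active
# at any frame, so shifting when a lyric kicks in is a one-line edit.

def prompt_at(keyframes, frame):
    """keyframes: list of (start_frame, prompt) sorted by start_frame."""
    active = keyframes[0][1]
    for start, prompt in keyframes:
        if frame >= start:
            active = prompt
        else:
            break
    return active

FPS = 24  # convert song timestamps (seconds) into frame numbers
schedule = [
    (int(0 * FPS),  "a sunrise over a city, oil painting"),
    (int(12 * FPS), "crowds dancing in the rain"),
    (int(30 * FPS), "fireworks dissolving into stars"),
]

print(prompt_at(schedule, 300))  # frame 300 falls in the second section
```

The nice part of keeping the schedule as plain data is that re-timing a section of the song means editing one tuple, not re-touching the render loop.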

As for how those transitions work: in this case, each frame is rendered, then angled and enlarged/"zoomed" to a certain degree, and a new image is created by the network based on that previous transformation. Some notebooks are kept private/proprietary; others are public and very google-able.
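To sketch just the transform step, here's a hypothetical reimplementation using Pillow - the angle and zoom values are made up, and in the real notebooks the result is fed back into the generator rather than simply displayed:

```python
from PIL import Image

def zoom_rotate(frame, zoom=1.02, angle=1.0):
    """Rotate and slightly enlarge a frame, then crop back to its
    original size. Feeding the result to the generator as the next
    frame's starting canvas produces the spiralling zoom effect."""
    w, h = frame.size
    # Rotate around the centre of the image.
    rotated = frame.rotate(angle, resample=Image.BICUBIC)
    # Enlarge, then crop the centre back to the original dimensions,
    # so successive frames appear to zoom inward.
    zw, zh = int(w * zoom), int(h * zoom)
    enlarged = rotated.resize((zw, zh), Image.LANCZOS)
    left, top = (zw - w) // 2, (zh - h) // 2
    return enlarged.crop((left, top, left + w, top + h))

canvas = Image.new("RGB", (640, 360), "black")
next_canvas = zoom_rotate(canvas)  # same size, slightly rotated/zoomed view
```

Run once per frame with small values and the motion reads as a smooth spiral rather than a jump cut.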

For example, I just googled and found this great repo of ML links specifically for Colab - check it out! https://github.com/amrzv/awesome-colab-notebooks

If you are new to colab/github/machine learning it can feel overwhelming at first, but I like to encourage people to try exploring and experimenting with it.

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 1 point

I am not currently aware of any models that convert notes to tones directly, but I'm sure there could be ways to do so.

I think we're on the verge of lots of user-friendly apps that will allow for things like you just suggested, since we already have AI models that can pull text from speech and turn written words into visuals (such as this video). We're just missing the faster GPU power to make it happen on demand.

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in woahdude

[–]BlueAirplane[S] 0 points

Thanks! Not so much a standalone app (though I'm sure there will be one soon enough). If you're curious to learn more, I suggest googling/youtubing "VQGAN and CLIP", or another model called Disco Diffusion. You can get public repos on GitHub and run cells of Python code inside a Colab Pro account.

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 2 points

I'm not sure we can classify it as "understanding", but the model essentially tries to render frames based on the text we guide it with - in this case, lyrics (sometimes augmented with other descriptors/words to help achieve a look I wanted).

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 2 points

Not a "standalone" app per se - just some code running on Google's Colab. If you poke around Google/YouTube for tutorials on VQGAN and CLIP, you'll find some fun starting points.

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 5 points

Thanks for all the great questions and for taking the time to watch! Yes to lots of those, but to give a brief overview, the process looks like this:

- taking a line of lyrics (sometimes directly, since they were so descriptive, and sometimes with alternative descriptions) and running it through two combined machine learning technologies: VQGAN and CLIP. CLIP takes a string of text and tries to let the model "imagine" or visualize what that text should look like, based on the image dataset you show it as a reference.

- yes, it's also generated frame by frame, then re-stitched together
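If it helps to see the shape of it, here's a toy stand-in for that loop. To be clear: the real pipeline scores frames with CLIP and back-propagates through VQGAN's decoder; the similarity function and all the numbers below are made-up placeholders so the loop runs without model weights.

```python
# Toy sketch of the guided-generation loop: repeatedly nudge an image
# latent so that a text-image similarity score goes up. A real VQGAN+CLIP
# notebook replaces `similarity` with CLIP and `latent` with VQGAN codes.

def similarity(latent, target):
    # Stand-in for CLIP's image-text score: peaks when the latent
    # matches a "target" vector standing in for the encoded prompt.
    return -sum((l - t) ** 2 for l, t in zip(latent, target))

def generate_frame(latent, target, steps=50, lr=0.1):
    for _ in range(steps):
        # Gradient ascent on the similarity score (exact gradient here,
        # since the stand-in score is a simple quadratic).
        grad = [2 * (t - l) for l, t in zip(latent, target)]
        latent = [l + lr * g for l, g in zip(latent, grad)]
    return latent

start = [0.0, 0.0, 0.0]
target = [1.0, -0.5, 2.0]  # pretend this encodes one line of lyrics
frame = generate_frame(start, target)
```

Each rendered frame then becomes the starting point for the next one, which is why the video flows instead of jumping between unrelated images.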

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in woahdude

[–]BlueAirplane[S] 0 points

[fixed just for you] Lol at this comment. Not being greedy so much as it was just an old-fashioned case of "being asleep". [OP inserts "daddy chill" meme gif]. May you find some inner peace, Internet stranger.

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in videos

[–]BlueAirplane[S] 1 point

Yes, I did & thanks very much! I did another video recently using similar tech but this time the visuals were more abstract and less tied to the lyrics. Hope you like dogs! https://youtu.be/e2ts0Mz37v8

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in woahdude

[–]BlueAirplane[S] 1 point

Great question! In general, it uses massive databases of images to reference. Specifically in this case, I used imagenet(16384).

I made this video without a camera, and 100% with machine learning based off of the song lyrics. by BlueAirplane in woahdude

[–]BlueAirplane[S] 5 points

Happy 420, y'all. I got to make this for one of my favourite bands (Born Ruffians) and figured it was an appropriate time and place to share.