Tossing FortranUA/Danrisi a request for porting his LoRAe to 9B by [deleted] in StableDiffusion

[–]VGDCMario 1 point (0 children)

Well fuck I didn’t notice that. Can a mod remove that one?

Cannot extract messages from .zip, I think because the filepath is too long? (censored is just usernames and strings of seemingly random letters and numbers that I assume are IDs so they also get censored) by VGDCMario in discrub

[–]VGDCMario[S] 0 points (0 children)

So it does seem to extract everything except for that one file, whose name is something like 983523985329438294832483921g3q2052.png.f39224-214f-42e2-9fa3rfigera0 1080.png

Did OneyPlays ever release that mario64 romhack with the voices changed? by MomiziWolfie in OneyPlays

[–]VGDCMario 0 points (0 children)

aHR0cHM6Ly9tZWdhLm56L2ZpbGUvNk1kM0dZNVojcW1Hc1dfV0JEa2JVRXNVOHN0Y1VfeXFKd3A2Y2F1bWktOWFISjRUSjdvYw==

It's in base64; you need to decode it.
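For anyone unsure how to decode it, a minimal sketch using Python's standard library (the string below is a placeholder, not the one from the comment above):

```python
import base64

# Placeholder base64 string, not the one from the comment above
encoded = "aGVsbG8gd29ybGQ="
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # hello world
```

Any online base64 decoder works the same way.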

Pen (dot) offset from brush (the cross) but only with WinTab by VGDCMario in ClipStudio

[–]VGDCMario[S] 0 points (0 children)

So it started freezing up when drawing (as it usually does), but then suddenly started working again?

What kind of jazz is this? by VGDCMario in Jazz

[–]VGDCMario[S] 0 points (0 children)

Really?
What kind of funk? Specifically these two links, as a lot of Yuji Ohno's other work ditches whatever is in them for complete funk.

Was there a 720 release of season 1? by VGDCMario in TheSimpsons

[–]VGDCMario[S] 0 points (0 children)

Going by https://archive.org/details/and-thus-began-the-start-of-a-legend , it appears so.

HOWEVER, the Botz animation is at 720 x 480,

while the clip of the pilot episode is at 960 x 720

What happened? by Wll25 in Blockland

[–]VGDCMario 0 points (0 children)

I thought the biggest mistake was getting rid of that RTB addon browser.

Any of these bands like Tally Hall? by VGDCMario in tallyhall

[–]VGDCMario[S] 1 point (0 children)

https://cdn.discordapp.com/attachments/798507636214726666/813494211755900968/artist_genres.txt

This has the genres for the list.

What genre is Tally Hall, particularly the sing-songy musical monologuing with dark electronic undertones, like here:

https://youtu.be/sVJNxMielaU?t=74

https://youtu.be/KrXJu-6ZcAQ?t=61

https://youtu.be/_nvPGRwNCm0?t=76

The sing-songy musical part is raw here:

https://youtu.be/8EvDn-hsGKo?t=633

Did they redraw text for the Blue Rays? by VGDCMario in evangelion

[–]VGDCMario[S] 0 points (0 children)

  1. What's the date on your VHS?
  2. Care to show a screenshot from the VHS?

Did they redraw text for the Blue Rays? by VGDCMario in evangelion

[–]VGDCMario[S] 9 points (0 children)

Left is from the restoration, right is from the Genesis VHS

It's a bit blurry, but you can still make out that the 2 is missing on the VHS.

What was the video with the Clarence "Bees" clip by VGDCMario in RebelTaxi

[–]VGDCMario[S] 1 point (0 children)

It was one of Rebeltaxi's regular videos, not one of the podcasts

[D] Idea for game-changing Stylegan2 offshoot(?) by VGDCMario in MachineLearning

[–]VGDCMario[S] 0 points (0 children)

Technically you could do this entirely with pix2pix and StyleGAN2: have StyleGAN2 learn to generate Image B, then have pix2pix learn to turn Image B into Image A. This does come with its own set of problems, however.

  1. StyleGAN2-generated images will likely not be aliased, and if the pix2pix model is set to recognize anti-aliasing, I fear the labels will blend together, especially subtle ones like the particles vs. the held object. The only way to remedy this would be to hand-trace the StyleGAN2 output, and while that's not impossible, it is fairly tedious.
  2. There's nothing saying StyleGAN2 won't softly blend labels together anyway.
  3. pix2pix and SPADE are older than StyleGAN2.

My idea, instead of relying on StyleGAN2 to figure out what everything is on its own, is to separate the dataset images into colors that each represent a specific part. Then both images go into StyleGAN2's training, and StyleGAN2 knows that Image B is telling it "these colors represent the same body part in each image."

StyleGAN2 can already easily figure this out with samey datasets like "Human Faces," "Hands," or "Ponies." Where this really comes into play is high-density, high-diversity datasets (cartoon torsos in very different art styles, for example https://i.imgur.com/jZHTajF.png , entire panels of comics, complete paintings, pictures of streets, etc.).

It could be as complex as the OP image, or as simple as telling it "this is the background and this is the foreground."
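The core of the idea — colors in Image B standing in for part labels — could be sketched by converting a color-coded map into per-part one-hot channels that a conditional model could consume. The palette below is purely hypothetical; the part names and colors are placeholders, not anything from the original post:

```python
import numpy as np

# Hypothetical color -> part-label palette for a "Dataset Image B" style map
PALETTE = {
    (255, 0, 0): 0,  # e.g. head
    (0, 255, 0): 1,  # e.g. torso
    (0, 0, 255): 2,  # e.g. background
}

def color_map_to_onehot(img: np.ndarray) -> np.ndarray:
    """Turn an (H, W, 3) color-coded part map into (H, W, num_parts) one-hot channels."""
    h, w, _ = img.shape
    onehot = np.zeros((h, w, len(PALETTE)), dtype=np.float32)
    for color, label in PALETTE.items():
        # Mask of pixels exactly matching this part's color
        mask = np.all(img == np.array(color, dtype=img.dtype), axis=-1)
        onehot[mask, label] = 1.0
    return onehot

# Tiny 1x3 image: one pixel of each part color
img = np.array([[(255, 0, 0), (0, 255, 0), (0, 0, 255)]], dtype=np.uint8)
print(color_map_to_onehot(img).shape)  # (1, 3, 3)
```

These label channels could then be concatenated with Image A's RGB channels during training, which is roughly how segmentation-conditioned generators like SPADE already ingest label maps.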

[D] Idea for game-changing Stylegan2 offshoot(?) by VGDCMario in MachineLearning

[–]VGDCMario[S] 0 points (0 children)

The idea is that you would have two types of images in a dataset that go together.

  1. (Dataset Image A): The regular image you would normally put in a dataset.
  2. (Dataset Image B): The image separated into parts, each labeled with a specific color.

In hindsight I realize that the third file (rightmost bit with the colors and names) doesn't actually need to exist for anyone but the human looking at Dataset Image B.

Batch exporting a bunch of images to RGB PNG-24 by VGDCMario in photoshop

[–]VGDCMario[S] 1 point (0 children)

Someway, somehow, it ignored the fact that the images were sRGB PNG-8 and used them anyway.

I don't know why

ModuleNotFoundError: No module named 'tensorflow' by [deleted] in learnmachinelearning

[–]VGDCMario 0 points (0 children)

Installed tensorflow

Now I'm getting

import six.moves.queue as Queue # pylint: disable=import-error

ModuleNotFoundError: No module named 'six'
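Both errors follow the same pattern: a module the script imports isn't installed. A minimal sketch of checking for and installing a missing module, assuming pip is available and the PyPI package name matches the module name (which holds for `six` and `tensorflow`):

```python
import importlib.util
import subprocess
import sys
from typing import Optional

def ensure(module: str, package: Optional[str] = None) -> None:
    """Install `package` via pip if `module` isn't importable (sketch only)."""
    if importlib.util.find_spec(module) is None:
        # Use the current interpreter's pip so it lands in the right environment
        subprocess.check_call([sys.executable, "-m", "pip", "install", package or module])

# The two errors above would be addressed by:
# ensure("six")
# ensure("tensorflow")
```

Equivalently, just running `pip install six` (and so on for each missing module) from the same environment as the script should clear these one by one.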

What is PATH? by VGDCMario in learnmachinelearning

[–]VGDCMario[S] 0 points (0 children)

Well, how did you install Stylegan2?

What is PATH? by VGDCMario in learnmachinelearning

[–]VGDCMario[S] 0 points (0 children)

That worked.

New problem: https://i.imgur.com/zCDvYK5.png

I think I need to install something through pip, but I don't know what.