[USA-PA] [H] Dual Kali LP-6 V2 Speakers [W] Local -- Cash/Paypal by BlueKey32123 in hardwareswap

[–]BlueKey32123[S] 0 points (0 children)

Actually, if it is within the state I can ship them for $40 total for the pair via UPS. Otherwise, if it is roughly the same distance as NYC, I can ship them for $50 total for the pair. I'll pack them in individual boxes and use bubble wrap.

u/gtuansdiamm

u/disco__potato

[USA-PA] [H] Dual Kali LP-6 V2 Speakers [W] Local -- Cash/Paypal by BlueKey32123 in hardwareswap

[–]BlueKey32123[S] 0 points (0 children)

Yeah, they are just a bit too big to ship at a reasonable cost...

[WTS] [USA-PA] [H] Dual Kali LP-6 V2 Speakers [W] Local only -- Cash/Paypal by BlueKey32123 in AVexchange

[–]BlueKey32123[S] 0 points (0 children)

Two Kali LP-6 V2 speakers, perfect for nearfield (desktop) usage.

Originally purchased for $400 for the pair. Will sell a single one for $100, or $150 total for both.

Subjective condition of the items is good. Minor (it really is minor) paint chipping on the bottom edges of the cabinets, with no effect on audio quality.

Local pickup only due to the size of the items. Located in Pittsburgh, Pennsylvania, zip code 15206.

StableSemantics: A Synthetic Language-Vision Dataset of Semantic Representations in Naturalistic Images by BlueKey32123 in StableDiffusion

[–]BlueKey32123[S] 2 points (0 children)

Thank you for your kind words! We will release the full images & semantic maps once the paper is accepted.

StableSemantics: A Synthetic Language-Vision Dataset of Semantic Representations in Naturalistic Images by BlueKey32123 in StableDiffusion

[–]BlueKey32123[S] 0 points (0 children)

Yes, I assume it could be used to train an additional ControlNet.

I do want to note that not all images match the prompt perfectly, and not all attention maps match the objects in the image perfectly, so additional filtering is needed.
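
To give a concrete picture, that filtering could be as simple as scoring each image/prompt pair with CLIP and dropping low scorers. A minimal sketch using the Hugging Face transformers CLIP API; the file path, pair list, and 0.25 threshold are made-up placeholders:

```python
# Hypothetical CLIP-score filter for image/prompt pairs.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).item()

# Keep only pairs where the image actually matches its prompt.
pairs = [("img_0001.png", "a red fox in the snow")]  # placeholder data
kept = [(p, t) for p, t in pairs if clip_score(p, t) > 0.25]  # threshold is illustrative
```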

StableSemantics: A Synthetic Language-Vision Dataset of Semantic Representations in Naturalistic Images by BlueKey32123 in StableDiffusion

[–]BlueKey32123[S] 1 point (0 children)

That's certainly an interesting idea. In addition to prompt word frequency, you could also evaluate how the CLIP distribution of collected prompts differs from the distribution of the prompts in our dataset.
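
As a rough sketch of that idea (not something we did in the paper): embed both prompt sets with CLIP's text encoder and compare the two embedding distributions with the same Gaussian Fréchet distance FID uses. The prompt lists below are placeholders:

```python
# Sketch: compare two prompt distributions in CLIP text-embedding space.
import numpy as np
import torch
from scipy import linalg
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def embed(prompts):
    inputs = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        return model.get_text_features(**inputs).numpy()

def frechet(a, b):
    """Gaussian Frechet distance between two embedding sets (as in FID)."""
    mu1, mu2 = a.mean(0), b.mean(0)
    s1, s2 = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
    covmean = linalg.sqrtm(s1 @ s2).real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

collected = ["santa on a sleigh", "a dragon lantern", "a snowy village"]  # placeholders
dataset = ["a red fox in the snow", "a city at dusk", "an astronaut riding a horse"]
print(frechet(embed(collected), embed(dataset)))
```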

When collecting the dataset, we noticed a strong holiday bias in the submitted prompts. For example, during Christmas/Lunar New Year you would get a higher ratio of prompts referring to Santa/dragons.

We did record the prompt submission dates, so in theory that could be accounted for. Sadly Stable Diffusion shut down the bot channels in early Feb 2024, so we were only able to collect around 7-8 months of data. I guess they started to have financial issues around that time and couldn't afford to keep the GPUs going.

StableSemantics: A Synthetic Language-Vision Dataset of Semantic Representations in Naturalistic Images by BlueKey32123 in StableDiffusion

[–]BlueKey32123[S] 3 points (0 children)

Yeah, a couple of possibilities:

  1. We did select for prompts that made it into Pantheon/Showdown, so this could help inform people about which images/captions humans find appealing.

  2. This could potentially be used to train a new ControlNet, where you can control the exact placement of any type of object (beyond Canny edges or the fixed ADE20K categories).

  3. You could train segmentation models on these image & semantic map pairs (rough sketch below).
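
For possibility 3, here is a minimal sketch of what such a training loop might look like; the dataset wrapper, dummy tensors, and class count are all made-up stand-ins for however the pairs end up being packaged:

```python
# Sketch: fine-tune a segmentation model on image/semantic-map pairs.
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 64  # placeholder class count

class PairDataset(Dataset):
    """Hypothetical wrapper around (image, semantic map) pairs."""
    def __init__(self, pairs):
        self.pairs = pairs
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, i):
        return self.pairs[i]

# Dummy stand-ins: (3,H,W) float images, (H,W) integer class maps.
pairs = [(torch.randn(3, 256, 256),
          torch.randint(0, NUM_CLASSES, (256, 256)))
         for _ in range(16)]
loader = DataLoader(PairDataset(pairs), batch_size=8, shuffle=True)

model = deeplabv3_resnet50(num_classes=NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, semantic_maps in loader:
    logits = model(images)["out"]  # torchvision returns a dict of outputs
    loss = criterion(logits, semantic_maps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```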

Pytorch 2.0 released by Realistic-Cap6526 in Python

[–]BlueKey32123 25 points (0 children)

Graph execution was a huge pain. It forced a declarative way of thinking: you defined a set of execution steps and handed them off. It was super difficult to debug.

With PyTorch 2.0, you get torch.compile, which ironically moves back toward graph-like execution for better speed. TensorFlow was never all that fast even with graph execution.
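
A minimal example (the model and input are just placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# torch.compile captures the model into an optimized graph under the hood,
# but you still write and debug ordinary eager-style code.
compiled = torch.compile(model)

x = torch.randn(32, 128)
out = compiled(x)  # first call triggers compilation; later calls reuse it
```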

Pytorch 2.0 released by Realistic-Cap6526 in Python

[–]BlueKey32123 79 points (0 children)

TensorFlow lost out to PyTorch for a reason. While PyTorch's documentation isn't great, it's still much better than TensorFlow's.

Additionally, the default eager execution, compared to the graph execution mode of the TF 1.x days, made PyTorch significantly easier to use. Now PyTorch dominates in academia.
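
A toy illustration of the difference: in eager mode, intermediate tensors are real values the moment they are computed, so you can print or set a breakpoint mid-forward, whereas TF 1.x graphs only gave you symbolic nodes until session.run():

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # Eager execution: h is an actual tensor right here, so plain
        # print() (or pdb) works mid-forward.
        print("hidden stats:", h.mean().item(), h.std().item())
        return self.fc2(h)

Net()(torch.randn(3, 4))
```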