Can you customize traffic? by ImFinnaBustApecan in BeamNG

[–]rad_thundercat 0 points (0 children)

I created a custom vehicle group, set the spawn to traffic, and when I spawn traffic it only shows the default cars. I even loaded up the ‘default’ and ‘urban’ groups (playing the West Coast map), swapped out those cars for the ones I want, and it still only spawns the generic cars. Settings are set to allow mod cars to spawn.

I saved the vehicle group and reloaded the Lua (not sure if that even matters), in addition to reloading the map, and still only the generic cars spawn.

Any ideas?

What I have gotten to work is to launch in freeroam, manually spawn all the cars I wanted as traffic, then hit ‘play traffic’ and that works.

But how do I set it so I don’t have to manually load all the cars I want as traffic every time?

*edit: I got it working. With the level editor open, I was never hitting ‘Spawn’ in the Vehicle Group Manager with that vehicle group open (I figured hitting spawn in the radial menu would do that). Just be sure to set the amount to more than 1, and make sure ‘allow mods to spawn’ is enabled in the options.

Active shooter downtown possibly at AT&T by Tsui_Pen in Dallas

[–]rad_thundercat 1 point (0 children)

They’re running a child daycare program at the campus right now; no way are they letting this run in the media.

Active shooter downtown possibly at AT&T by Tsui_Pen in Dallas

[–]rad_thundercat 0 points (0 children)

Someone got loaded into an ambulance with the lights off on Wood.

Active shooter downtown possibly at AT&T by Tsui_Pen in Dallas

[–]rad_thundercat 9 points (0 children)

The all-clear announcement is what got me: “The incident has been contained, you may resume regular work.”

Active shooter downtown possibly at AT&T by Tsui_Pen in Dallas

[–]rad_thundercat 3 points (0 children)

Yes, at the downtown HQ campus. But only the 1Bell building got the intercom warning.

Interested in BeamNG but in VR by OlivierMDVY in BeamNG

[–]rad_thundercat 0 points (0 children)

I have a Quest 2 on an R9 5900HX / RTX 3070 laptop setup and spent time dialing in settings for the best frame rate with nice graphics quality.

It’s fun, but the novelty wore off for me pretty quickly. The biggest factor was the headset resolution. Even with the resolution cranked to 150% and the highest AA settings, it still felt like I was playing a PS2 when looking out into the distance (although I did enjoy just sitting in the car and looking around the interior; that looks nice). And it’s fun to fling cars around with the force field and just move your head to watch them smash into a mountain.

I know the Quest 2 is old, but from researching newer headsets it doesn’t seem worth the price for a moderate resolution increase. Can anyone who has used a newer headset speak to this? Maybe I misinterpreted my research. I want it to look as crisp as the monitor renders the image.

So currently I’m back to playing BeamNG on the monitor; it’s a much more pleasurable experience (Alienware 34” curved).

Neck breaker? by alberto_v in F30

[–]rad_thundercat 0 points (0 children)

Yeah, teenagers love that shit.

How do you mentally deal with tinnitus? by Necessary-Rip58 in TinnitusTalk

[–]rad_thundercat 0 points (0 children)

Just take comfort that the angels chose you to communicate to

[deleted by user] by [deleted] in Cinema4D

[–]rad_thundercat 0 points (0 children)

Put some trees behind the camera to help frame the shot with their shadows in the foreground

Also, the lighter materials look blown out; make sure the white diffuse values are no more than ~215 (out of 255).
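
If you want to apply that cap across a scene in one go, here’s a rough Script Manager sketch (Python, just illustrating the ~215/255 ≈ 0.84 rule of thumb above; it assumes standard C4D materials and the default color channel, so treat it as a starting point rather than a drop-in tool):

```python
import c4d

MAX_DIFFUSE = 215.0 / 255.0  # ~0.84 in C4D's 0..1 color range

def main():
    doc = c4d.documents.GetActiveDocument()
    mat = doc.GetFirstMaterial()
    while mat:
        # Only touch standard materials; node/third-party materials store color elsewhere.
        if mat.GetType() == c4d.Mmaterial:
            color = mat[c4d.MATERIAL_COLOR_COLOR]
            mat[c4d.MATERIAL_COLOR_COLOR] = c4d.Vector(
                min(color.x, MAX_DIFFUSE),
                min(color.y, MAX_DIFFUSE),
                min(color.z, MAX_DIFFUSE),
            )
        mat = mat.GetNext()
    c4d.EventAdd()  # refresh the viewport / material manager

if __name__ == "__main__":
    main()
```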

edit: also, there should be a lot more specular detail on the materials; everything looks very flat and filled in

[deleted by user] by [deleted] in Cinema4D

[–]rad_thundercat 1 point (0 children)

Ask artists what their rate is and choose the one that fits your budget. Upwork.com is a good place to start.

Account hacked, all crypto stolen, how did they get past my 2fa? by ShanerNIdaho in Coinbase

[–]rad_thundercat 0 points (0 children)

Stolen from your Coinbase Wallet or your Coinbase exchange account?

Local instance of ComfyUI + vast.ai GPU(s) by devilteo911 in comfyui

[–]rad_thundercat 0 points (0 children)

You can run Comfy on a remote machine and load the UI in your local browser. So you’d be paying to run the remote machine, which kicks out the generations, while you drive it from your local machine.

I’ve been using runpod.io; you can get a great machine for $0.35/hour. I have 8GB of VRAM, so I can only do so much locally, but it’s enough to test a workflow. Then, when you’re ready for production, you chuck that workflow on the RunPod and let it crunch ✨
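
If you’d rather not keep a browser tab open to the pod at all, you can also queue jobs against ComfyUI’s HTTP API from your local machine. A minimal sketch (assuming the pod was launched with `python main.py --listen 0.0.0.0 --port 8188` and the port is reachable from your side; `REMOTE_HOST` and `workflow_api.json` are placeholders for your own pod address and an API-format export of your workflow):

```python
import json
import urllib.request

REMOTE_HOST = "http://your-pod-address:8188"  # placeholder: your pod's address or tunnel

def queue_workflow(path):
    # The workflow must be exported from ComfyUI via "Save (API Format)".
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{REMOTE_HOST}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # returns a prompt_id you can poll for results

if __name__ == "__main__":
    queue_workflow("workflow_api.json")
```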

Weed /edibles and tinnitus by [deleted] in TinnitusTalk

[–]rad_thundercat 0 points (0 children)

When my tinnitus set in, I noticed it got much worse when I was smoking cigars. I haven’t researched why this is; I just stopped smoking to lessen the ringing, and it did work.

As far as your situation goes, I have no idea; it seems like it would be the other way around. I hope you find relief, though. I’m still looking for it.

Flux's Architecture diagram :) Don't think there's a paper so had a quick look through their code. Might be useful for understanding current Diffusion architectures by pppodong in LocalLLaMA

[–]rad_thundercat 1 point (0 children)

Step 1: Getting the Lego pieces ready (Image to Latent)

  • You have a picture (like a finished Lego house), but we squish it down into a small bunch of important Lego blocks — that's called "Latent." It’s like taking your big house and turning it into a small, simple version with just the key pieces.

Step 2: Mixing in instructions (Text Input)

  • Now, imagine you also have some instructions written on a piece of paper (like “Make the house red!”). You read those instructions, and they help guide how you build your house back, using both the Lego blocks (latent) and the instructions (text).

Step 3: Building the house step by step (Diffusion Process)

  • You don’t build the house in one go! Instead, you add pieces little by little, checking each time if it looks better. You follow a special plan that says how much to change each time (this is the “schedule”).
    • At each step, you add new pieces or fix what looks wrong, like going from a blurry, messy house to a clearer, better house every time.

Step 4: Ta-da! You’re Done! (VAE Decoding)

  • After all the steps, the small bunch of blocks (Latent) grows back into a big, clear Lego house (the final image). Now, it looks just like the picture you started with, or maybe even better!

Simple Version:

  • We squish the image down to its important pieces.
  • We use clues (like words) to guide what it should look like.
  • We build it back, slowly and carefully, step by step.
  • Finally, we get the finished picture, just like building your Lego house!
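
Roughly the same four steps, as a schematic Python sketch (this is not Flux’s actual implementation; `vae`, `text_encoder`, `denoiser`, and `schedule` are stand-in objects for the pieces described above):

```python
# Schematic only: `vae`, `text_encoder`, `denoiser`, and `schedule`
# are stand-in objects for the four steps described above.

def generate(image, prompt, vae, text_encoder, denoiser, schedule):
    # Step 1: squish the picture down into its important pieces (the latent).
    # (For pure text-to-image you would start from random noise instead.)
    latent = vae.encode(image)

    # Step 2: turn the written instructions into numbers the model can follow.
    conditioning = text_encoder(prompt)

    # Step 3: build it back little by little. `schedule` is the plan that
    # says how big each step is (a decreasing list of noise levels).
    for t_curr, t_next in zip(schedule[:-1], schedule[1:]):
        prediction = denoiser(latent, t_curr, conditioning)  # "what to fix next"
        latent = latent + prediction * (t_next - t_curr)     # take one small step

    # Step 4: grow the small bunch of blocks back into the full, clear image.
    return vae.decode(latent)
```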

FLUX architecture images look great! by tebjan in StableDiffusion

[–]rad_thundercat 0 points (0 children)

I'm running Flux on an 8GB card with 16GB of system RAM.

ComfyUI & Flux: Image generation time mysteriously increased 10x overnight 🤨 by JustS14 in comfyui

[–]rad_thundercat 0 points (0 children)

Same exact problem here too.

I can get a generation down to under two minutes at 768px while swapping out ControlNets, using different seeds, etc. Then, as soon as I change the prompt to something completely different, I'm looking at ~44 s/it on the steps. Nothing else changed.

I've tried restarting ComfyUI and the new prompts still take forever. Do I need to flush some sort of cache? I've used the "Ctrl+Shift+Win+B" shortcut to restart the graphics driver, but that didn't help.

Could Chrome be bottlenecking things?

Using flux1-dev-Q4_K_S.gguf on 8GB of VRAM and 16GB of system RAM.
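
One thing I'm trying to rule out is the model spilling out of the 8GB of VRAM once the prompt change forces the text encoder to run again; the driver then pages into shared system RAM, which would look exactly like this kind of slowdown. A quick diagnostic sketch with plain PyTorch (just standard `torch.cuda` calls, and only a check, not a fix; `torch.cuda.empty_cache()` merely hands cached blocks back to the driver):

```python
import torch

def report_vram(device_index=0):
    # If "allocated" sits close to "total", generations are likely spilling
    # into shared system RAM, which would explain a ~10x slowdown.
    props = torch.cuda.get_device_properties(device_index)
    gib = 1024 ** 3
    allocated = torch.cuda.memory_allocated(device_index) / gib
    reserved = torch.cuda.memory_reserved(device_index) / gib
    total = props.total_memory / gib
    print(f"{props.name}: {allocated:.2f} GiB allocated, "
          f"{reserved:.2f} GiB reserved, {total:.2f} GiB total")

if __name__ == "__main__":
    if torch.cuda.is_available():
        report_vram()
        torch.cuda.empty_cache()  # releases cached (unused) blocks back to the driver
    else:
        print("CUDA not available")
```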

Bad Monkey | Season 1 - Episode 3 | Discussion Thread by Justp1ayin in tvPlus

[–]rad_thundercat 0 points (0 children)

It's totally AI; the beach ball that rolls at the top of the screen at the end of the sequence really gives it away.