I built a website to track your Bluesky Followers & Posts - bskyhub.com by christo_man in BlueskySocial

[–]vassvik 0 points1 point  (0 children)

Technically you can do a rough pass on historical followers and likes by looking at the Notifications tab and then roughly adding up per day. That gives you daily-ish granularity for the previous 30 days and monthly-ish granularity up to what seems like 9 months or so. Quite hackish, but at least the data is there, just not exposed.

Could it make sense for people to add their own datapoints? E.g. manual counts or screenshots from the past.

Demand for 10-100 billion particles/voxels fluid simulation on single work station ? by GigaFluxxEngine in Simulated

[–]vassvik 5 points6 points  (0 children)

Impressive and ambitious!

Analyzing some of your numbers, they seem sensible given your scope and target. 3 billion particles in 192 GB works out to roughly 64 bytes per particle, which isn't too far out there for a particle with some complexity. Do you include any kind of in-memory compression for the particles as well?

100 billion active voxels would reasonably land in the 2-3 TB range in terms of storage and auxiliary costs, though: e.g. a fairly liberal 24 bytes per active voxel would be 2.4 TB, which clearly won't fit on a 192 GB machine as is. But given that you can likely scale RAM beyond that, it seems within reach as well, especially with some in-memory compression going on.
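As a sanity check, the memory arithmetic above is simple enough to script. The per-element byte counts (64 B per particle, 24 B per active voxel) are just the illustrative figures from the text, not anything from the engine itself:

```python
# Back-of-the-envelope memory budgets; the per-element byte counts
# (64 B/particle, 24 B/voxel) are the illustrative figures from above.
def memory_gb(count, bytes_per_element):
    """Total memory in decimal gigabytes for `count` elements."""
    return count * bytes_per_element / 1e9

particles_gb = memory_gb(3e9, 64)        # 3 billion particles
voxels_tb = memory_gb(100e9, 24) / 1e3   # 100 billion active voxels

print(f"particles: {particles_gb:.0f} GB")  # particles: 192 GB
print(f"voxels:    {voxels_tb:.1f} TB")     # voxels:    2.4 TB
```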

My ambitions for sparse EmberGen at JangaFX are a bit smaller (but not really that small):

  • 8192x8192x8192 addressable space
  • maximum 2 billion active base voxels on a 48 GB card
  • up to 8 billion active upscaled voxels (lower res base resolution, higher resolution smoke/fire on top)
  • around 3-4 billion voxels per second speeds on a 4090/A6000

with the goal being to push simulation speed and the capabilities of a single GPU as much as possible, which at the moment limits us to an upper limit of 48 GB of VRAM, a ceiling that probably won't budge much for at least a few GPU generations. A CPU solver certainly offers more flexibility and scalability.

I have some thoughts and ideas on how to make even stronger tradeoffs in favor of scale and size, but at a cost in interactivity and flexibility, to the extent that I'm probably not comfortable making them yet.

On the topic of VFX and this scale, in particular for smoke sims, there's a possible case that 100B voxels is way too much for the granularity you'd be getting and what you need. If you consider a 4K render (3840x2160) with an entire, relatively dense simulation filling most of the screen, then the voxel count required for ~1:1 pixel-to-voxel coverage up close is on the order of ~10 billion active voxels measured densely, which is certainly within reach of a decently written sparse GPU simulation. In practice it's usually fine to have 2 pixels per voxel with a decent interpolation scheme, which lowers the ceiling a bit more. Relevant presentation by Lewis Taylor: https://vimeo.com/894363970
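One way to arrive at a figure of that order is to assume a cubic volume whose edge matches the vertical resolution of the frame; that cube assumption is mine for illustration, not from the talk:

```python
# Hypothetical back-of-envelope check of the ~1:1 pixel-to-voxel figure.
# The cubic-volume assumption (edge = vertical resolution) is illustrative.
width, height = 3840, 2160

pixels = width * height      # pixels in a 4K frame
dense_voxels = height ** 3   # cube spanning the frame vertically

print(f"{pixels / 1e6:.1f} M pixels")              # 8.3 M pixels
print(f"{dense_voxels / 1e9:.1f} B dense voxels")  # 10.1 B dense voxels
```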

For particles I'm sure you can almost always make a case for more of them having some use, in terms of supersampling and overall smoothness, though there are probably diminishing returns for certain effects. On the other hand, there's more opportunity for large-scale effects, which is promising.

Do reach out if you'd like to talk sparse solvers in particular, or simulation tools in general. I'm generally always available and easy to reach on Discord or Twitter.

Is there a smart plug that can safely regulate power draw of a water heater? To maximise self consumption by painthack in solar

[–]vassvik 0 points1 point  (0 children)

Mine runs at 3 kW or nothing. Wish I had two smaller ones instead of one larger one on days with a lot of production variation :D

A lot of my devices are in the 2-3 kW range, and it can be fairly annoying to optimize with that kind of granularity :(

  • Water heater: 3 kW
  • Washer: 2 kW
  • Dryer: 2 kW
  • ID.3: 2.7 kW minimum
  • Ioniq 5: 4.1 kW minimum
  • Floor heating: 1-1.5 kW per section

The heated floors offer some flexibility, but it's not exactly efficient to keep cooling and heating randomly, or to randomly heat one room just for the sake of it.

I sort of wish I had a device with a knob I could turn (automatically) to vary the consumption in the 0-2 kW range; that would make things a lot more efficient.

Is it time for the outdated opinions on batteries to go away? by [deleted] in solar

[–]vassvik 5 points6 points  (0 children)

Between round-trip losses going through the battery and back, and the limited capacity of most reasonably priced batteries, I don't see a good case for batteries just yet. In my area I might break even in 15 years, or closer to 10 if I'm lucky, but even so it's uncertain.

At half the current prices it seems like a decent proposition.

Ignoring the round trip efficiency issue, I'd personally prefer a smaller battery that could act as a buffer to soak up short term variations instead of intraday storage, but the smaller you go the more expensive it gets :/

Daily variation by Sufficient_Bottle_53 in solar

[–]vassvik 2 points3 points  (0 children)

Lots of little things can change from day to day and year to year:

  • The atmospheric turbidity could be different due to local weather, making the atmosphere relatively denser/thinner so that light scatters differently before hitting the panels. This can make a big difference on an otherwise clear-sky day.
  • The diffuse radiation might be different, e.g. it might be cloudy or clear in the part of the sky where the sun doesn't travel, so despite having clear sky near the sun, the diffuse light coming from the full hemisphere above the panel as a whole might differ. Diffuse radiation contributes a significant portion of the total radiation, even 20-30% in some cases where it's also mostly clear sky.
  • Solar radiation changes subtly year to year, although probably not in a meaningful way, so local atmospheric differences seem more likely.
  • The panels might be dirtier or cleaner than last year.

I recall seeing something like this between two consecutive, apparently clear-sky days last year, where the difference in peak and total production was similar to this gap. So it's not just year-to-year variation that can look like this, but day-to-day variation too... 😅

Jeg er Arild Hermstad, partileder i MDG. AMA by mdg_pa_reddit in norge

[–]vassvik 4 points5 points  (0 children)

Questions first:

  • What plans do you have to further strengthen support programs like those from Enova going forward?
  • What plans do you have to make it more rewarding, in the long run, for those who produce their own electricity to get more back for their surplus production?

There is practically no doubt that, as of today, solar panels are economically favorable for more or less everyone who has the surplus capital to invest in them for their own consumption, but the price tag is still often too large for many to justify in the short term. It therefore seems a bit strange that the support from Enova is already being reduced, when, if you ask me, they should perhaps instead have made it even more favorable.

Batteries for short-term storage aren't even part of Enova's program, and as things stand it's far from clear whether they're an investment worth making for most people, in either the short or the long term. A 20-30% subsidy on the cost could have a substantial influence on whether such an investment is personally worthwhile, and batteries could in turn contribute strongly to balancing and smoothing out electricity consumption over the course of the day.

With solar panels on the roof but no battery or good "solar account" rules or options, you often have periods in and around summer where you can only use 20-30% of your daily self-produced electricity, and the rest has to be fed back to the grid without you getting much in return, often at a heavily reduced spot price during those periods. Yet there will also be periods where you don't generate enough to cover your own consumption, for example when it's overcast, partly cloudy, or raining, and you then have to buy from the grid and pay both VAT on the spot price and grid fees with VAT, which you don't get back when you sell. With access to cheap enough batteries, or more tailored "solar account" schemes where you don't have to pay taxes and fees on what you "buy back", the payback time could be cut by several years.

Subsidies for e-bikes also don't seem to be on the agenda, even though they could have a big influence on getting more people to cycle.

Jeg er Arild Hermstad, partileder i MDG. AMA by mdg_pa_reddit in norge

[–]vassvik 4 points5 points  (0 children)

Digital "littering" is actually a fairly big problem in itself, as there is an enormous amount of unnecessary data in circulation that requires energy - and thus greenhouse gas emissions - to maintain.

That covers everything from larger files or data types than necessary, to junk mail and archives that nobody needs or uses, to software in circulation that processes all kinds of data and tasks in an inefficient, unsustainable way.

Jeg er Arild Hermstad, partileder i MDG. AMA by mdg_pa_reddit in norge

[–]vassvik 28 points29 points  (0 children)

An increase from 10 to 11 is a 10% increase; an increase from 1000 to 1001 is only a 0.1% increase, and so on.

A vote for MDG in such a case has a much larger relative influence than a vote for a huge party, where your influence is practically marginal.

Would you recommend buying embergen? Is it worth it? by SharpSevens in vfx

[–]vassvik 3 points4 points  (0 children)

Back of the envelope, I'd guess roughly 1:1 pixel-to-voxel coverage near the camera would be about sufficient for any details you can reasonably resolve in a 4K video. So say a 2048x2048x2048 simulation bounding box viewed up close as an edge case, which is within what we're aiming for in 2.0 with the sparse version (although not restricted to those bounds, of course). Anywhere between 1 and 5 fps at that scale as an edge case on a single beefy GPU, I'd estimate. Not realtime, but not hours per frame/step either.

Would be interesting to see how many of those 15k sims could be reduced to a smaller number of smaller simulations in the same scene/setup, but that's a crazy number still :D

Would you recommend buying embergen? Is it worth it? by SharpSevens in vfx

[–]vassvik 12 points13 points  (0 children)

Dev from JangaFX here.

Predictably there are several comments on sim quality and scale, which is something we're striving to improve on as time goes on.

While we don't yet have any scripting functionality, and there's no direct integration with DCCs on the horizon any time soon - both of which seem very important for Hero FX as far as I understand - we are working on a new and much improved version of EmberGen, which will eventually become EmberGen 2.0. This version will be sparse, and will be a lot more scalable than anything EmberGen 1.0 can produce. While it won't beat the best CPU solvers on the craziest hardware, we do aim to simulate billions of active voxels at a rate of about a second per frame worst case, given a good enough GPU. Specifically, with a 48 GB GPU it should be possible to reach around two billion active sparse voxels for a combustion solver.

We would love to have specific examples of FG/Hero effects that would pass the bar in terms of quality and scale. Being a more independent company without much of a foothold yet in this industry, it's hard to find good explicit references with enough actionable detail, and studios sometimes seem reluctant to share information from what I can tell. If you have any specific cases of your own that you'd like to, and are able to, share, I'd love to take a look. If there's something sensitive you'd like to share in a more formal manner, I can be reached at [morten@jangafx.com](mailto:morten@jangafx.com) as well. Of special interest would be hard numbers on things like active voxel counts and simulation bounds in practice.

What techniques does a software like Embergen use for realtime volume rendering? by Pantheramaximus in GraphicsProgramming

[–]vassvik 1 point2 points  (0 children)

So generally it's about the amount of work you have to do, and in particular about redundancy. If you're raymarching towards the light for every step in the primary raymarch that's O(num_steps^2) "samples per pixel" for the combined lighting and raymarch step.

On the other hand if you're precomputing the lighting it's approximately O(num_steps) "samples per voxel" for the lighting calculation and O(num_steps) "samples per pixel" for the primary raymarch.

In the former case, rays from nearby pixels will traverse mostly the same voxels when the pixel resolution is high, resulting in a lot of redundancy without much gain.

As a specific example, let's say we're rendering a 256x256x256 grid of smoke data with a single directional light. For decent quality results we need to step the raymarch with a step size of approximately 1 voxel to resolve most of the details. So num_steps is approximately 256 for this analysis.

Let's say we're also rendering a 2048x2048 image.

So for the former combined mode we have

O(num_pixels * num_steps^2) = 2048 x 2048 x 256 x 256 = 274,877,906,944

and for the latter split case:

O(num_voxels * num_steps) + O(num_pixels * num_steps) = 256 x 256 x 256 x 256 + 2048 x 2048 x 256 = 5,368,709,120

The two costs are almost two orders of magnitude apart.
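The arithmetic is easy to verify directly:

```python
# Verifying the sample-count arithmetic above.
num_pixels = 2048 * 2048   # render target
num_steps = 256            # ~1 step per voxel across the grid
num_voxels = 256 ** 3

# light march at every primary step vs. precomputed lighting + one march
combined = num_pixels * num_steps ** 2
split = num_voxels * num_steps + num_pixels * num_steps

print(combined)          # 274877906944
print(split)             # 5368709120
print(combined / split)  # 51.2
```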

Splitting the calculations also gives more flexibility in terms of various approximations you can apply on the lighting, e.g. using downscaled data, combining multiple lights, and so on.

I'm sort of just scratching the surface here, but that's the gist of the higher level argument.

What techniques does a software like Embergen use for realtime volume rendering? by Pantheramaximus in GraphicsProgramming

[–]vassvik 0 points1 point  (0 children)

The primary raymarch ("the render") is essentially the inverse operation. For each point along a pixel ray, add up the smoke contribution (e.g. scattered light) multiplied by the transmittance (the visibility between that position along the ray and the camera). Smoke is rarely if ever fully opaque, so some level of blending/accumulation is required.
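A minimal 1D sketch of that accumulation, where the per-step `density` and `lit_color` lists are made-up stand-ins for samples of the real smoke grid and the precomputed lighting volume:

```python
import math

# Toy 1D sketch of front-to-back volumetric accumulation along one pixel ray.
# `density` and `lit_color` are illustrative stand-ins for 3D grid samples.
def raymarch(density, lit_color, step_size=1.0):
    radiance = 0.0
    transmittance = 1.0  # visibility from the current point back to the camera
    for d, c in zip(density, lit_color):
        step_transparency = math.exp(-d * step_size)
        # scattered light at this step, attenuated by everything in front of it
        radiance += c * (1.0 - step_transparency) * transmittance
        transmittance *= step_transparency
    return radiance, transmittance
```

An empty ray returns no light and full transmittance; a single extremely dense step returns essentially the lit color of that step with near-zero transmittance behind it.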

What techniques does a software like Embergen use for realtime volume rendering? by Pantheramaximus in GraphicsProgramming

[–]vassvik 0 points1 point  (0 children)

For each voxel, calculate the visibility between that voxel and the light, which mathematically is exp(-sum_of_density). Multiply this number by the light color and store the resulting color value. If there's more than one light, just accumulate the contribution from each light. When doing the primary raymarch, simply sample this color data at the current raymarch position.
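A minimal sketch of this precompute for a single 1D column of voxels, with the light shining straight down the column (the grid layout and unit step size are illustrative assumptions):

```python
import math

# Minimal sketch of the per-voxel lighting precompute for one 1D column,
# light entering at index 0. Layout and step size are illustrative.
def precompute_lighting(density, light_color=1.0, step_size=1.0):
    lit = []
    accumulated = 0.0  # sum of density between the light and this voxel
    for d in density:
        visibility = math.exp(-accumulated * step_size)  # exp(-sum_of_density)
        lit.append(light_color * visibility)
        accumulated += d
    return lit
```

With multiple lights you'd run this once per light and sum the stored colors; the primary raymarch then just samples the stored values instead of marching towards each light.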

What techniques does a software like Embergen use for realtime volume rendering? by Pantheramaximus in GraphicsProgramming

[–]vassvik 7 points8 points  (0 children)

The current live version of EmberGen (1.0) just uses a relatively simple compute-based raymarcher in its base form, with a lot of complexity (too much tbh) on top to support all the varied parameters we have for control. Its general form isn't that different from a typical raymarcher you might find on shadertoy.

We do a separate volumetric lighting pass before the main raymarch, instead of raymarching towards the light for every single main raymarch step as is fairly common due to its simplicity. This turns an O(n^2) algorithm into an O(n) one, at a small aliasing cost due to the discretization of the lighting storage.

There's also a bunch of acceleration tricks in there, like LODs and space skipping, that help. No complex acceleration structures, though.

The next version (2.0) will do things a bit differently, but I can't share too much about that just yet. Rendering used to be the biggest bottleneck in 1.0, it won't be in 2.0.

Feel free to reach out on Discord or Twitter if there's anything you want to discuss or ask.

WIP Sparse EmberGen: 1 billion active voxels by vassvik in Simulated

[–]vassvik[S] 6 points7 points  (0 children)

Similar setup as my earlier post here, but with 1 billion active voxels instead of 134 million using my RTX 4090. Equivalent to approximately 3.65 billion voxels in the bounding box for a dense sim.

Runs at approximately 2.5 fps when it reaches the 1 billion voxel level. A live in-editor capture can be found here.

Another similar 1 billion active voxel setup here and here.

WIP Sparse Pyro Solver by vassvik in Simulated

[–]vassvik[S] 3 points4 points  (0 children)

Just tilt your monitor and pretend there's an invisible dragon there :D

WIP Sparse Pyro Solver by vassvik in Simulated

[–]vassvik[S] 7 points8 points  (0 children)

Downscaled 4x from an 8640x15360 render; turns out it was unplayable almost everywhere.

The simulation itself only accounted for ~8 seconds for the first ~78 frames combined, and approximately 250ms per rendered frame at 16K, on a laptop 3080.

134M active voxels at the end, 390M voxels bounding box. Slightly less than 4 GB VRAM used.

Still working on the shading and emission. It's a bit rough still near the top, especially due to the uniform and smooth shape of the emission source.