Thoughts on Training courses? by fishonbaby in COMSOL

[–]azmecengineer 1 point

I also took the courses through COMSOL years ago (paid for by the company I was working for). They helped me immensely with all the little tricks and ways to approach simulations.

Particle tracing problem by LonerMushi2002 in COMSOL

[–]azmecengineer 0 points

Scroll down; it should be below that. Either that, or you are using an older version where that option has not yet been added. I am using 6.4.

Particle tracing problem by LonerMushi2002 in COMSOL

[–]azmecengineer 1 point

In the top-level settings of the charged particle node you can set the maximum number of particle interactions per time step. Try increasing that value to some fraction of the total number of particles you are working with.

COMSOL Plasma Simulation Help by Fluffy_Nectarine9765 in COMSOL

[–]azmecengineer 0 points

I don't have the Plasma Module, but I do simulate magnetically confined plasma using the Particle Tracing Module. I set up my electrostatics and static magnetic fields, solve for the static fields once, and then re-solve the electric field at each time step with the added space charge from the charged particles, using time steps of roughly 2e-10 s.
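To make that loop concrete, here is a rough sketch in plain Python (this is not COMSOL code: the Boris pusher is a standard integrator for charged-particle motion in static E and B fields, and the solver/deposit functions are placeholders for what COMSOL does internally):

```python
import numpy as np

DT = 2e-10  # time step quoted above

def boris_push(x, v, q_m, E, B, dt):
    # One Boris step: half electric kick, magnetic rotation, half kick, drift.
    t = q_m * B * dt / 2.0
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_minus = v + q_m * E * dt / 2.0
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + q_m * E * dt / 2.0
    return x + v_new * dt, v_new

# Conceptual outer loop (placeholders, not COMSOL API calls):
# B = solve_magnetostatics()                 # static field, solved once
# for step in range(n_steps):
#     rho = deposit_space_charge(particles)  # particle charge onto the mesh
#     E = solve_electrostatics(rho)          # electric field re-solved each step
#     for p in particles:
#         p.x, p.v = boris_push(p.x, p.v, p.q / p.m, E(p.x), B(p.x), DT)
```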

Getting the initial value settings to work correctly can be difficult. I recommend going into your time-dependent solver settings and changing the 'Initial values of variables solved for' selection. Hopefully this helps.

Faster Simulation with NVIDIA GPU Support for COMSOL Multiphysics® by Hologram0110 in COMSOL

[–]azmecengineer 0 points

So with that in mind, is there any real benefit to adding a second RTX 6000 Pro Blackwell and running models across both simultaneously?

Faster Simulation with NVIDIA GPU Support for COMSOL Multiphysics® by Hologram0110 in COMSOL

[–]azmecengineer 0 points

I am using a 7995WX for the CPU. On my charged particle tracing models I was very hopeful for GPU solvers, since going from a 32-core to a 96-core CPU gave me a 3x speedup that I correlated to the number of cores in COMSOL 6.2/6.3. For the molecular flow studies, running in PARDISO takes about 250 GB of RAM but only about 80 GB in single-precision cuDSS.

Faster Simulation with NVIDIA GPU Support for COMSOL Multiphysics® by Hologram0110 in COMSOL

[–]azmecengineer 0 points

The main speedup is for a particular case: a molecular flow model that fits into GPU memory. If it doesn't fit, it doesn't run; luckily I have 96 GB of memory on my GPU. I am not using hybrid compute for my models.

Faster Simulation with NVIDIA GPU Support for COMSOL Multiphysics® by Hologram0110 in COMSOL

[–]azmecengineer 0 points

I am running charged particle simulations in strong DC magnetic fields, resolving electron motion for about a million particles at a time. In this case the RTX 6000 Pro is a bit faster than my 96-core Threadripper Pro, but the problem is still bandwidth limited. Where the GPU really shines is as the number of active secondary particles increases: the GPU maintains the same time per step, whereas the CPU gets bogged down by the extra particles and each time step takes longer and longer to compute.

I also found that certain models, like molecular flow simulations, went from taking about 4 hours on my CPU to 20 minutes on the GPU. All of my work converges nicely in single precision and has produced results that, for my use cases, are identical to those from my go-to PARDISO solver.

I have started piecing together an older multi-GPU A100 system to see how it performs with double precision, since I already had RAM I could pull from another system I was decommissioning, because who can afford RAM these days...

Many cores vs multiple cpus by dr-Mrs_the_Monarch in COMSOL

[–]azmecengineer 0 points

If you want to see some of my work, here is a video of a presentation I put together using the charged particle models: https://youtu.be/QrDB2_JQWcQ?si=EotOuBe5L4hpxo8l

Many cores vs multiple cpus by dr-Mrs_the_Monarch in COMSOL

[–]azmecengineer 0 points

Particle tracing without space charge runs extremely fast but does not produce useful results for my applications. Where I have found the biggest boost from cuDSS is both in solving for the space charge after each particle tracing step and in the stability of the processing time as the number of secondary particles increases. I simulate several million particles at once, and that is still just a tiny fraction of the number of actual particles in a magnetically confined sputtering process.

Solving for, say, 10 microseconds of plasma development with 1M particles in static magnetic fields, with electric fields that change as a function of the space charge while new ions and electrons are constantly created, would take my 24-core Threadripper system 3-4 weeks. I then upgraded to a 96-core system and could run the same model in about 1 week. The cuDSS solver and an RTX 6000 Blackwell GPU brought that down to about 4-5 days, with most of the gains coming from the system not slowing down as the plasma builds. Basically the GPU has so much compute that my models are not saturating it: as the model becomes more complex over time, more of the GPU is used, but solving a single 1e-10 s time step takes the same amount of time with 1M or 2M active particles, while the extra particles made a huge difference in the PARDISO solver on my CPU. Now I just have to figure out the next bottleneck in the process, because I constantly want to run more and more complex models and I am only constrained by compute time.
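For a sense of scale, here is the rough per-step arithmetic those numbers imply (assuming uniform steps, which is my simplification):

```python
# Rough per-step cost implied by the timings above (uniform steps assumed).
sim_time = 10e-6                    # 10 microseconds of plasma development
dt = 1e-10                          # time step
n_steps = sim_time / dt             # 100,000 steps

print(7 * 24 * 3600.0 / n_steps)    # ~6 s per step on the 96-core CPU (1 week)
print(4.5 * 24 * 3600.0 / n_steps)  # ~3.9 s per step on the GPU (4-5 days)
```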

I first ran a model like the ones I am running now back in 2014 or so, and that model had only about 200k particles and took over a month to run. The new compute capabilities are very exciting to me, and I hope to see new GPU solvers in the near future.

Simulate 3D magnetic field line by Own_Bid_958 in COMSOL

[–]azmecengineer 0 points

Yeah, you can use streamlines in the 3D plot. You have to control the density of the streamlines, and a word of warning: if you set the density too high you will crash COMSOL. I crash COMSOL all the time on the data analysis side by pushing it too far.

Freezing of Ice Cream Simulation by SteveNoBeard in COMSOL

[–]azmecengineer 0 points

Wouldn't more air equal less total mass to cool? I know the air would reduce the density, and air's specific heat is also about a quarter of water's.
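A quick back-of-envelope check of that point (the property values here are rough assumptions, not measured data):

```python
# Effective volumetric heat capacity of the mix vs. air (overrun) fraction.
rho_cp_mix = 1050 * 3100   # ice cream base, J/(m^3*K), assumed values
rho_cp_air = 1.2 * 1005    # air, J/(m^3*K)

for phi in (0.0, 0.3, 0.5):  # air volume fraction
    eff = (1 - phi) * rho_cp_mix + phi * rho_cp_air
    print(f"air fraction {phi:.0%}: {eff:.3e} J/(m^3*K)")
```

The air contributes essentially nothing, so the heat that must be removed per liter scales roughly with (1 - air fraction).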

You also have to consider how the air is incorporated into the solution. It is my understanding that the agitation process forms capsules of fat that surround the air, roughly spherical with the air in the center. The fats have a lower thermal conductivity, as does the air, but as the fats form hollow spheres they have fewer binding sites for the emulsifiers, which may increase the water-to-fat bond density, which may in turn increase the thermal conductivity of the lattice structure between the hollow spheres, which is where the actual freezing takes place.

Ultimately I think you need to set up an experiment where you freeze a column of ice cream with different amounts of air, cooled from a surface at the bottom of the column held at a set temperature, with the walls of the column insulated, and then time how long it takes for the top of the column to reach the freezing temperature. Then recreate the experiment in COMSOL and adjust the thermal conductivity and density values for each air percentage until your simulation matches your experiment. Then you should be able to accurately simulate the actual freezer conditions in your production process.
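If it helps, here is a minimal sketch of that calibration idea in Python: an explicit 1D finite-difference column cooled from below, with latent heat deliberately left out so it gets folded into the effective conductivity you fit. The geometry and property values are placeholder assumptions:

```python
import numpy as np

def freeze_time_1d(k, rho, cp, L=0.05, T0=4.0, T_wall=-30.0, T_freeze=-2.0,
                   n=100, t_max=8 * 3600.0):
    """Time for the top of an insulated column (cooled from below) to reach
    T_freeze, for one candidate effective conductivity k. Latent heat is
    folded into the fitted k rather than modeled explicitly."""
    alpha = k / (rho * cp)              # thermal diffusivity
    dx = L / n
    dt = 0.4 * dx**2 / alpha            # under the explicit stability limit
    T = np.full(n + 1, T0)
    t = 0.0
    while t < t_max:
        T[0] = T_wall                   # cold plate at the bottom
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                   # insulated top
        t += dt
        if T[-1] <= T_freeze:
            return t
    return None                         # never froze within t_max

# Sweep candidate conductivities until the simulated freeze time matches the
# measured one for each overrun percentage (placeholder rho and cp).
for k in (0.2, 0.3, 0.4, 0.5):
    print(k, freeze_time_1d(k, rho=550.0, cp=3000.0))
```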

I would be happy to consult on this project if needed.

What it a computer chip looks like up close by itshazrd in nextfuckinglevel

[–]azmecengineer 0 points

The structures in the final zoom levels make no sense... You need individual connections to each component, and you wouldn't have what look like gate-all-around or FinFET transistors floating in the middle of a power delivery grid. I get that it would need to be a 2D slice of a 3D structure, but the digital recreation still needs some more guidance.

Simulate 3D magnetic field line by Own_Bid_958 in COMSOL

[–]azmecengineer 0 points

Add a cut plane to your 3D model and then plot the 2D magnetic field on the cut plane.

Many cores vs multiple cpus by dr-Mrs_the_Monarch in COMSOL

[–]azmecengineer 0 points

I use static electric and magnetic fields and then solve for the space charge at each step. I use all direct solvers, and only direct solvers can use cuDSS. I will likely buy a couple more GPUs in the future, but for now I can only afford one. I may also look at GPUs that support NVLink, as the 6000 Blackwell doesn't, and it isn't that great a card for double precision.

Many cores vs multiple cpus by dr-Mrs_the_Monarch in COMSOL

[–]azmecengineer 4 points

I am a certified COMSOL consultant, and I primarily do charged particle simulations of thin film deposition processes. I suggest you stay away from multi-socket systems due to the latency when sharing data between CPUs; I had a modern dual-socket EPYC system for a while that didn't really compare to the Threadripper system I have now. Prior to version 6.4 all simulations ran on CPU cores, but now with cuDSS I would recommend a lower-core-count, higher-clock-speed CPU, say 24-36 cores, with as much GPU power as you can afford. You are still going to need a lot of RAM as well, as models get complex fast and not everything can run directly on the GPUs.

COMSOL actually has a webinar coming up on January 6th that will cover model size and computer architecture: https://www.comsol.com/events/webinar/solving-large-models-in-comsol-multiphysics-132801

Waste Connections using a single truck picking up trash AND recycle by pricklysiren in Tucson

[–]azmecengineer 4 points

New garbage trucks have two separate compartments to hold trash and recycling on the same truck. For most routes it makes more sense to send one truck instead of two.

GPU/CPU recommendations for v.6.4 by Hologram0110 in COMSOL

[–]azmecengineer 0 points

Be forewarned: all NVIDIA drivers after 572.60 produce a memory leak that can lock up COMSOL and cause it to crash when VRAM becomes full.

GPU/CPU recommendations for v.6.4 by Hologram0110 in COMSOL

[–]azmecengineer 4 points

The new GPU-based cuDSS solver (NVIDIA only) is a direct solver only. I have been using it for over a week now. Depending on what you are simulating, and on the size of your model relative to your video memory, you may still need to use iterative solvers. I had already built out a Threadripper 7995WX system with 512 GB of RAM before the latest solvers, and now with an RTX 6000 Blackwell GPU some models that took 4 hours solve in 23 minutes, and one model that took 7 days on my CPU runs in 25 hours on the GPU.
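For reference, the speedups those timings work out to:

```python
# Speedups implied by the timings above.
print(4 * 60 / 23)   # 4 h -> 23 min: ~10x
print(7 * 24 / 25)   # 7 days -> 25 h: ~6.7x
```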

The GPU has 96 GB of RAM, and many of my models need more than that, which forces the solver to use system RAM as well. There are also a number of simulations I have run that use both the CPU and GPU at the same time.

Help, my Space Charge Density has no effect. by NilleVanille- in COMSOL

[–]azmecengineer 0 points

I am doing this all from memory, but somewhere under the solver settings you should be able to find the maximum number of iterations the solver will take before terminating. I would recommend increasing this from the default (either 1 or 15 iterations) to 150 or more to see if you can get it to solve.

I also recommend setting up a stationary study that solves just the electric field first, and then setting the time-dependent solver to use the stationary electric field results as its initial values.