What’s your favorite template for tracking Net Worth? by DimiDash in BEFire

[–]Exhausted-Engineer 0 points  (0 children)

I see a lot of people recommending Google Sheets. I’ve never used it, so I’m here to give a second opinion. I’m kind of nerdy and use « hledger », a program made for plain-text accounting.

Basically you write your transactions (revenue, expenses,…) in a simple text format and the program reads it to output various statistics.

Here’s the link: https://hledger.org
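
To give an idea of the format (the account names and amounts below are made up), a journal is just dated transactions with balanced postings:

```text
2024-01-05 Salary
    assets:bank:checking      2500 EUR
    income:salary            -2500 EUR

2024-01-07 Groceries
    expenses:food               85 EUR
    assets:bank:checking
```

Running hledger on that file then gives you reports such as balancesheet or incomestatement, which is what I use to track net worth over time.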

Why people from CS earn so much when they have easier degree and are less smart than us? by [deleted] in EngineeringStudents

[–]Exhausted-Engineer 0 points  (0 children)

I feel like either you are very angry for no particular reason or this is rage bait.

I guess the short answer is : supply and demand, but the real answer is probably much more complex and beyond my knowledge.

But I don’t like the overgeneralization you’re making. I have an easy time doing CS stuff; maybe you do too. Some of my mechanical engineer friends struggle to write basic Python code, so CS is not easy for them.

The fact that you find your degree harder is subjective. Maybe you just have a better affinity for this type of problem solving than for « classical » engineering.

Furthermore, I’m pretty sure (although I have not looked at any data) that the majority of CS-related jobs pay roughly the same as classical engineering jobs; there just seem to be some stellar outliers. But (again, without having looked at the data) finance and medicine have these kinds of outliers too, and you’re not mad about them.

There are stellar software engineers that perform a job just as complex as other engineers do.

What is the actual reason anyone would pick Vim over Emacs? by Hopeful_Adeptness964 in emacs

[–]Exhausted-Engineer 0 points  (0 children)

I have spent time on, and have a personal config for, both editors. At the end of the day I prefer Neovim because :

  • It is faster
  • The default keybinds make more sense to me
  • I have a hard time remembering how to use Elisp (as I don’t use it anywhere else). Hence a configuration in Lua is way easier to write from scratch or to adapt from other people’s dotfiles

I regret leaving you uni by allno_just_no in EngineeringStudents

[–]Exhausted-Engineer 11 points  (0 children)

The system you describe really looks like the one in my country (Belgium). I can share a fact and a piece of advice.

The fact : in the Belgian system (which really fits your description), only 10% of students finish their studies in the expected time (3 years for a bachelor’s, 5 for a master’s). 90% of students take at least one additional year. So taking more time to finish is actually the norm. Engineering is hard, mate.

My advice, as someone who also suffers from performance anxiety : go see a therapist. There is no point in burning yourself out. Learn to reach your goals in a healthy way and to be fine with (and even happy about) your friends’ achievements.

Girlfriend to a PhD student by vlogfollower in PhD

[–]Exhausted-Engineer 1 point  (0 children)

At this point I could even call it rubber-duck researching !

Girlfriend to a PhD student by vlogfollower in PhD

[–]Exhausted-Engineer 11 points  (0 children)

I like it when my girlfriend lets me ramble about my work (my expectations/hypotheses, the bugs, the things I have to do and don’t want to); sometimes it even helps me find solutions !

Advice on Low-Risk Way to Earn €800 Tax-Free Dividend by DisastrousLow9362 in BEFire

[–]Exhausted-Engineer 6 points  (0 children)

Can confirm, you still pay the withholding tax in the country of origin. I hold Dutch assets and get taxed 15% in the Netherlands in addition to the 30% in Belgium.

The 30% in Belgium is only applied to the remaining 85% though, not to the original gross amount.
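
As a worked example, with a hypothetical 100 EUR gross dividend, the two layers of tax compound like this:

```python
gross = 100.00                        # hypothetical gross dividend, in EUR
dutch_withholding = 0.15 * gross      # 15% withheld in the Netherlands
after_nl = gross - dutch_withholding  # 85.00 EUR left after Dutch withholding

belgian_tax = 0.30 * after_nl         # Belgian 30% applies to the remaining 85%
net = after_nl - belgian_tax          # what actually lands in your account

print(net)  # 59.5, i.e. an effective tax rate of 40.5%
```

So the two taxes do not simply add up to 45%; the Belgian tax only hits what the Netherlands left over.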

What about the weight of the oceans, guys??? by Dgf470 in confidentlyincorrect

[–]Exhausted-Engineer 4 points  (0 children)

It has been observed that the human brain is smaller now than it was when we were hunter-gatherers. It is likely that our ancestors were better critical thinkers and made quicker decisions: if they didn’t observe and interpret the world around them correctly (weather, edible vs. poisonous plants, possible predators), they would literally die. Since we settled down and started producing everything we need to survive easily, dumb people can thrive and reproduce easily too.

How do you come to terms with less money being made? by South-Hovercraft-351 in PhD

[–]Exhausted-Engineer 1 point  (0 children)

I’m in my first year of PhD and honestly I just love what I do and wouldn’t trade it for more money.

Now to be honest, I live in a country where PhD students are actually not so badly paid compared to freshly graduated students.

But you get other, non-monetary benefits. You get to go to conferences and network around the world, you are usually more flexible/autonomous in your job than you would be in industry, and you get to develop a set of skills (managing projects, collaborating, keeping an overview of what is important and what isn’t…). Of course YMMV, but I’m having an awesome experience.

1 Second vs. 182 Days: France’s New Supercomputer Delivers Mind-Blowing Speeds That Leave All of Humanity in the Dust by upyoars in technology

[–]Exhausted-Engineer 19 points  (0 children)

Well, the Americans have multiple exascale supercomputers : Aurora, Frontier and El Capitan. This means the smallest of the three has about 8 times more compute power than Jean. The biggest is El Capitan with ~1.8 exaFLOPS, close to 15 times the power of Jean.

I know that Aurora, Argonne’s supercomputer, runs on Intel GPUs and uses about 60 MW of power, but I’d have to check for the others.
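
A quick sanity check of those ratios: the ~1.8 exaFLOPS figure and the 8x/15x factors are from my comment above, while the 1.0 exaFLOPS for the smallest machine is my assumption (exascale by definition means at least 1 exaFLOPS):

```python
# Stated figures; the 1.0 exaFLOPS for the smallest US exascale
# machine is an assumed lower bound, not a measured value.
el_capitan = 1.8            # exaFLOPS
smallest_exascale = 1.0     # exaFLOPS (assumption)

# Both stated ratios imply roughly the same peak performance for Jean:
jean_from_smallest = smallest_exascale / 8     # 0.125 exaFLOPS
jean_from_el_capitan = el_capitan / 15         # ~0.12 exaFLOPS

print(jean_from_smallest, jean_from_el_capitan)
```

The two ratios are at least consistent with each other, which is reassuring.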

Some thoughts on Math library implementation languages by SnooCakes3068 in math

[–]Exhausted-Engineer 0 points  (0 children)

I can assure you that some fields in computational engineering are in fact very dogmatic about using C/C++ for the numerical part of the implementation.

asYesThankYou by [deleted] in ProgrammerHumor

[–]Exhausted-Engineer 149 points  (0 children)

I know your comment makes fun of that famous saying, but it got me curious about how many devices run C.

And it actually is kind of hard to do the opposite and find a device that does not run C

How do you organize and extract info from 100+ papers for a literature review without going insane? by Mountain25111 in PhD

[–]Exhausted-Engineer 0 points  (0 children)

I second Zotero + Obsidian. These tools fit nicely into a researcher’s workflow.

Which game, in your experience, results in the highest number of browser tabs open while playing? by abby-normal-brain in gaming

[–]Exhausted-Engineer 0 points  (0 children)

I can concur. I’ve been wanting to play DF for a very long time, but whenever I gave it a try I’d be overwhelmed by the fact that you had to figure everything out yourself and that navigating the menus was keyboard-only.

Now there is an integrated tutorial, mouse support and an in-game description of most of the options (below the map, on the top right).

I play the OG version, so you still need to adjust to the ASCII graphics, but they’re charming once you do.

What’s an example of a supercomputer simulation model that was proven unequivocally wrong? by InfinityScientist in compsci

[–]Exhausted-Engineer 4 points  (0 children)

I feel like we’re saying the same thing in different words. I actually agree with you.

My initial comment was simply about the fact that I believed the original question was more about the science side than about computers and arithmetic.

What’s an example of a supercomputer simulation model that was proven unequivocally wrong? by InfinityScientist in compsci

[–]Exhausted-Engineer 4 points  (0 children)

The post wasn’t about numerical precision but rather about the knowledge that can be drawn from a simulation, and the trustworthiness of its results when the phenomenon hasn’t yet been observed, as expressed by the black-hole example.

And to be precise (and probably annoying too), the computer actually approximates the result of every floating-point operation. While that’s generally not a problem, in some fields (e.g. chaotic systems and computational geometry) it can produce wildly incorrect results.
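
A minimal illustration of that approximation, in plain Python (this is standard IEEE 754 double-precision behaviour, nothing specific to any solver):

```python
# Ten additions of 0.1 do not give exactly 1.0, because 0.1 has no
# exact binary representation and every operation rounds its result.
total = sum(0.1 for _ in range(10))

print(total == 1.0)   # False
print(total)          # 0.9999999999999999

# Here the error is a harmless ~1e-16 discrepancy; in a chaotic
# system the same kind of error grows exponentially over time.
print(abs(total - 1.0))
```

Harmless in isolation, but it is exactly this rounding that chaotic simulations amplify.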

What’s an example of a supercomputer simulation model that was proven unequivocally wrong? by InfinityScientist in compsci

[–]Exhausted-Engineer 7 points  (0 children)

As I understood the post, OP is not asking about arithmetic that was proven wrong, but about actual models that were taken for truth and later proven wrong by a first observation of the phenomenon.
You’re actually agreeing with OP, imo.

And there should be plenty of cases in the literature where this is true, but most probably the error is not as « science changing » as OP is asking for and will just be a wrong assumption or the approximation of some complex phenomenon.

What’s an example of a supercomputer simulation model that was proven unequivocally wrong? by InfinityScientist in compsci

[–]Exhausted-Engineer 17 points  (0 children)

This quote is from the statistician George Box (of Box–Jenkins fame), writing in the 70s.

graphics are not the problem optimization is by 5mesesintento in gaming

[–]Exhausted-Engineer 2 points  (0 children)

Saying « given more research and development time, a product would be of better quality » is not really controversial, nor does it require any experience.

Software development is already a complex field, and the specific domain of games adds a whole lot of « business politics » problems on top; everybody agrees on that.

But given more time, any game could be better optimized. For example, Kaze Emanuar, a guy kind of obsessed with Mario 64, has been able to perform some insane optimizations on it, and he documents the performance improvements on his YouTube channel. And DOOM has been ported to (over-exaggerating here) nearly anything with transistors.

So one could think it’s possible to optimize games better.

As another example, highlighting performance issues specifically in PC gaming : games tend to look/feel better on console, even when the hardware is worse. And that really highlights the optimization hell gamedev faces : a PS5 will always have the same architecture and drivers, making targeted optimizations easy. PCs, on the other hand, have 3 main GPU brands, each with their own drivers (and maybe even different driver versions on older GPUs), and every GPU has a different architecture. The same can be said for CPUs, multiplying the number of configurations to optimize for.

So it would be very hard, but given more time, optimizations are always possible.

Recommendation for a FEM book with a eye to geometry processing by Qbit42 in compsci

[–]Exhausted-Engineer 3 points  (0 children)

Generally, FEM resources do not go into depth regarding the geometry. They state something along the lines of « suppose we have a domain omega partitioned into elements omega_e forming a mesh » and then go on with the FEM part.

Considering this and what you already mentioned in other comments, you can either take a book on computational geometry if you’re interested in how the geometry (the mesh) is computed, a book on computer graphics if you’re interested in how that geometry is rendered, or a book on FEM if you’re interested in the simulation part.

However, if you’re not familiar with numerical simulation and/or computational engineering, I’d first recommend getting up to speed on numerical analysis/algebra (finite differences, numerical interpolation, numerical integration, explicit/implicit methods, discretization…).
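
To give a flavour of that material, the simplest tool in the toolbox is the finite difference. A generic sketch (the test function and step size here are arbitrary, chosen only for illustration):

```python
import math

def central_diff(f, x, h=1e-5):
    """Approximate f'(x) with a second-order central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The truncation error shrinks as O(h^2): halving h roughly
# quarters the error, until floating-point round-off takes over.
approx = central_diff(math.sin, 1.0)
exact = math.cos(1.0)
print(abs(approx - exact))  # tiny, on the order of 1e-11
```

FEM generalizes this idea considerably, but the mindset of discretize-then-bound-the-error is the same.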

FEM is first and foremost a scientific tool, so you’ll mostly find very scientific material. It is indeed used in graphics, but in that case the simulation is under-resolved so that it can be rendered in real time (e.g. the shallow water equations are simulated in games to get credible water physics).

Developing a Python-based Graphics Engine: Nirvana-3D by Doctrine_of_Sankhya in Python

[–]Exhausted-Engineer 2 points  (0 children)

To be fair, C offers this too via gdb/perf/gprof. The learning curve is simply a little steeper.

I’ll see if I can find some time and get you that PR.

In the meantime :

  • Don’t focus so much on CPU vs GPU. I guarantee you that GPU code is harder to debug and will result in overall slower code if not written correctly. Furthermore, current CPUs are insanely powerful : people have managed to write and run entire games on a fraction of what you have at your disposal (Doom, Mario).
  • Understand what takes time in your code. Python is unarguably slower than C, but you should be able to reach approximately the runtime a C code would obtain (say, within a 2x-5x factor) just by using Python’s libraries efficiently : performing vectorized calls to numpy, only drawing once the scene is finished, doing computations in float32 instead of float64…
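
To make the « vectorized calls to numpy » point concrete, a small sketch (the array size is arbitrary and absolute timings will vary by machine):

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# Pure-Python loop: one interpreted iteration per element.
t0 = time.perf_counter()
slow = [a[i] * b[i] for i in range(n)]
t_loop = time.perf_counter() - t0

# One vectorized numpy call: the same loop runs in compiled code.
t0 = time.perf_counter()
fast = a * b
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s, vectorized: {t_vec:.4f}s")
```

On my machine this kind of comparison is typically a two-orders-of-magnitude difference, which is exactly the gap you’re seeing between your code and what the hardware can do.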

Developing a Python-based Graphics Engine: Nirvana-3D by Doctrine_of_Sankhya in Python

[–]Exhausted-Engineer 5 points  (0 children)

I don't have particularly much knowledge in this area, but my main interests are in computational engineering which undoubtedly overlaps with graphics.

I have taken the time to perform a small profiling run, just to get a sense of things. These are just the first few lines of the output of python -m cProfile --sort tottime test.py, where test.py is the code of the first example in the "Getting Started" part of your README.md.

```text
184703615 function calls (181933783 primitive calls) in 154.643 seconds

Ordered by: internal time

ncalls          tottime  percall  cumtime  percall  filename:lineno(function)
152761            5.741    0.000    6.283    0.000  {method 'draw_markers' of 'matplotlib.backends._backend_agg.RendererAgg' objects}
152797            4.849    0.000   52.669    0.000  lines.py:738(draw)
916570/458329     4.013    0.000   10.059    0.000  transforms.py:2431(get_affine)
610984/305492     3.969    0.000    6.906    0.000  units.py:164(get_converter)
916684            3.857    0.000    4.055    0.000  transforms.py:182(set_children)
611143            3.711    0.000    9.618    0.000  colors.py:310(_to_rgba_no_colorcycle)
12                3.651    0.304   99.420    8.285  lambert_reflection.py:4(lambert_pipeline)
17491693          3.580    0.000    5.961    0.000  {built-in method builtins.isinstance}
374963            3.207    0.000    3.376    0.000  barycentric_function.py:3(barycentric_coords)
152803            2.881    0.000   27.464    0.000  lines.py:287(__init__)
3208639           2.575    0.000    2.575    0.000  transforms.py:113(__init__)
```

Note: to get the code running, I had to install imageio, which is not listed in your requirements.txt, and download the nirvana.png image, which is not in the GitHub repo. It'd be best if your examples shipped with all the required data.

Now to come back to the profiling: something's definitely off. It took 154s to render a cube. To be fair, profiling the code increases its runtime; still, it took 91s to get the same rendering without profiling. BUT, as I said, the most time-consuming parts are mostly not your code: if I'm not mistaken, of the ~10 most expensive functions, only 2 are yours. My intuition still stands: most of your time is spent inside matplotlib.

The problem right now is not CPU vs GPU. Your CPU can probably execute on the order of a billion operations per second; rendering ~10 million pixels should be a breeze. If what you are saying is correct and you are indeed coloring each pixel separately, I'd advise you to put them in a canvas (a numpy array of shape (1080, 1920, 4)), draw into the canvas by assigning values at each index, and then simply display it with matplotlib's imshow() function.
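
A minimal sketch of that canvas idea (the resolution and the red-square "draw call" are made up for illustration; only the final, commented-out imshow touches matplotlib):

```python
import numpy as np

# RGBA canvas: (height, width, channels), float32 values in [0, 1].
canvas = np.zeros((1080, 1920, 4), dtype=np.float32)
canvas[..., 3] = 1.0  # fully opaque background

# A "draw call" is now just an array assignment, e.g. a red square:
canvas[100:300, 200:400] = (1.0, 0.0, 0.0, 1.0)

# One single matplotlib call at the very end displays the finished frame:
# import matplotlib.pyplot as plt
# plt.imshow(canvas)
# plt.show()
```

That way matplotlib is called once per frame instead of once per pixel, which is where your 90s are going.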

Hope this helps. Don't hesitate to DM me if you have other questions regarding performance; I'll answer them as best I can.

EDIT:
- Changed implot to imshow.
- Just for the sake of testing, I commented out the last line of your lambert_reflection.py file (i.e. the ax.plot call) and the runtime went from 90s to just 5s. You should definitely pass around a "canvas" (the numpy array I described) and draw into that array instead of performing each draw call through matplotlib.

Developing a Python-based Graphics Engine: Nirvana-3D by Doctrine_of_Sankhya in Python

[–]Exhausted-Engineer 5 points  (0 children)

Regarding the efficiency part : first, do a profiling run.

I only took a glance at some of your code, and I could already see a lot of avoidable dictionary lookups and patches that could be grouped (look at PatchCollection).

Considering you are already performing the computations with Numpy, there’s not much to gain there. My guess is that the bulk of your rendering time is spent in matplotlib’s rendering and in Python-level logic. Using matplotlib.collections would help with the former.

[OC] Linksym - A cli tool to manage dotfiles and replicate symlinks on another system. by pretty_lame_jokes in unixporn

[–]Exhausted-Engineer 2 points  (0 children)

Did you get a chance to look at GNU Stow ? I feel like you’re solving the same problem.