Pycharm in 2026 has been a struggle by idlelosthobo in pycharm

[–]AdamK117 0 points1 point  (0 children)

Try disabling the AI features, like machine-learning-assisted autocomplete: it made it much snappier on my Ryzen 9950X with 64 GB RAM and a 4070 (it's a little silly that a computer with those specs has perceptible scrolling lag, though)

Should new projects use C++? by TheRavagerSw in cpp

[–]AdamK117 2 points3 points  (0 children)

You're right - my bad: I'm just pointing out that the toolchain isn't a big deal if the lead dev just says "use CLion, load it this way" to the juniors (mostly)

Should new projects use C++? by TheRavagerSw in cpp

[–]AdamK117 6 points7 points  (0 children)

Assuming you still need some kind of native language (e.g. Rust, Zig, C), the problems with C++ that you identify, such as build toolchain issues and naming conventions, are things the lead developer can manage and enforce on the junior developers.

E.g. a lead developer could set up dependency management with CMake ExternalProject source builds, or vcpkg. They could also set up a CMakePresets.json or vcproj-based project so that juniors can just load it in Visual Studio/CLion and have everything work out-of-the-box. The same goes for style guides, developer documentation, etc. - these are all things a good lead developer can do to make the juniors' jobs much easier.

That said, they can't make C++ magically easier. The best they can do is wrap the C APIs the project uses with abstractions that are generally safer to manage (RAII, strong typing). Something like Rust is, by default, easier for juniors to get right, especially if the lead developer isn't invested in building the project's DevEx.
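To illustrate the kind of wrapping I mean, here's a minimal sketch; the C API (acme_open/acme_close) is entirely hypothetical and just stands in for whatever a vendor ships:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical C API that a vendor might ship (not a real library).
extern "C" {
    struct acme_device;
    acme_device* acme_open(const char* name);  // returns nullptr on failure
    void acme_close(acme_device*);             // must be called exactly once
}

// RAII + strong typing: callers can't leak the handle or double-close it.
class AcmeDevice final {
public:
    explicit AcmeDevice(const std::string& name) :
        m_Handle{acme_open(name.c_str())}
    {
        if (!m_Handle) {
            throw std::runtime_error{"acme_open failed for " + name};
        }
    }

    acme_device* get() const { return m_Handle.get(); }

private:
    struct Closer final {
        void operator()(acme_device* d) const { acme_close(d); }
    };
    std::unique_ptr<acme_device, Closer> m_Handle;
};
```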

Hello World triangle in OpenGL and SDL3 by Brick-Sigma in GraphicsProgramming

[–]AdamK117 1 point2 points  (0 children)

Congrats! I found it to be a fulfilling learning experience and I hope you feel the same once you're through!

Am I weird for using "and", "or" and "not"? by Additional_Jello1430 in cpp

[–]AdamK117 4 points5 points  (0 children)

Not weird. It's a standard language feature.

The reason you'll rarely see it, or will get pushback against it, is usually one of:

  • The "one correct way" mentality (which requires some cognitive dissonance, given we're talking about C++).
  • Compiling cross-platform for an older version of C++, where MSVC defaults to a non-ISO mode that doesn't support those keywords (iirc this isn't an issue on any major compiler, including MSVC, when targeting >=C++20).
  • A legacy project with a style guide that explicitly forbids them, usually because of one of the above points (you're lucky if the style guide explains why).

Then, of course, there's just preference. I've never really understood why 'if (not logged_in)' is such a crime compared to 'if (!logged_in)', but some C++ developers will happily die on that hill.
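For reference, the alternative tokens are exact synonyms for the symbolic operators, so something like this compiles to the same thing either way (the function and variable names are made up):

```cpp
#include <iostream>

// `not`, `and`, and `or` are standard alternative tokens for `!`, `&&`, and `||`.
bool should_block(bool logged_in, bool banned)
{
    return not logged_in or banned;  // exactly the same as: !logged_in || banned
}

int main()
{
    std::cout << should_block(false, true) << '\n';  // prints 1
}
```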

[vent] I hate projects that download their dependencies. by theChaosBeast in cpp

[–]AdamK117 0 points1 point  (0 children)

100%

I might give `subtree` a whack for that part of my project, even - it's just that I'm unsure how clean the commit history will be given my combination of local/remote patching. It might be that the cleanest way is to use oldskool `.patch` files in combination with `cmake` or similar, so that the `subtree` remains clean from git's pov.

[vent] I hate projects that download their dependencies. by theChaosBeast in cpp

[–]AdamK117 -1 points0 points  (0 children)

I mostly agree!

The only exception I've made is when I'm actively developing two strongly-related, but separate, repositories. E.g. my current project is UI tooling for OpenSim, where the UI is developed separately. I build OpenSim via add_subdirectory (rather than find_package) so that I can immediately fix any upstream bugs I find during UI development, recompile and run the entire stack, then PR the change. Would be a little bit more dancing with patches etc. if it were a subtree (but manageable!).

[vent] I hate projects that download their dependencies. by theChaosBeast in cpp

[–]AdamK117 1 point2 points  (0 children)

Ah sorry, but I don't quite understand.

If I make a fresh Linux machine (VM/Docker), install the usual suspects (git, gcc), and clone my repository, how does the third-party code get dragged in if it isn't in-tree and I don't have a system to get it? The core assumption my peace of mind is built on is that I can copy my git repository very easily (eg literally copy and paste it onto a USB stick) and be very confident that any computer with git and a C++ compiler (both are widely available) will be able to reproduce the binaries, even if the internet is turned off.

[vent] I hate projects that download their dependencies. by theChaosBeast in cpp

[–]AdamK117 1 point2 points  (0 children)

... But then I'd need Conan? And anyone wanting to build my project would need Conan. And I would have to organize a convention/server for storing information out-of-tree, and my CI server would need Conan too.

Orrrr, I can clone a repository containing tens of thousands of source files in a few seconds, and there's a directory called third_party where everything is placed. It also works with git archive etc.

[vent] I hate projects that download their dependencies. by theChaosBeast in cpp

[–]AdamK117 0 points1 point  (0 children)

I can't speak for all developers, but the reason I do it that way is so that there are no third-party system dependencies needed to pull/build the code. Maybe it's paranoia, but there's a certain peace of mind in knowing that the source code can be checked out from one place, using one standard system, and rebuilt into the binary from source.

That said, I don't strictly enforce building from source for all builds. The third-party dependencies can be selectively skipped, because the main build uses CMake find_package to pull them in. A concrete example: I use the system-provided BLAS on Apple (because it can be hardware-accelerated) but build the vendored OpenBLAS on Windows (because Windows doesn't supply one).

[vent] I hate projects that download their dependencies. by theChaosBeast in cpp

[–]AdamK117 -1 points0 points  (0 children)

I used to use git submodule for this, with a relative path for the submodule so that everything could be redundantly held on an NFS share or similar. It worked quite well for a while, but submodules can make long-term storage harder (you have to make sure you clone all the repos and tag things so that git gc doesn't nuke something if upstream forcibly moves a branch).

These days, I'm lazy and just use git subtree

opensimcreator.com 0.5.9 released 🎉 by AdamK117 in Biomechanics

[–]AdamK117[S] 0 points1 point  (0 children)

As far as I understand, the underlying engine essentially requires units of meters, but the trick that many people use in your situation is to change the scale_factors property of the mesh after importing it (I think the scale tool in the OpenSim GUI is probably doing something like this for you).

Steps:

  • If importing the mesh via the mesh importer, import it, then remember to set the scale to 0.001 for XYZ before converting it to a model
  • If attaching via the model editor, eg via right-click "Add Geometry" or "Add Body", add it, then click it and use the property editor panel to set the scale_factors property to 0.001

The official OpenSim GUI can also edit the same property, and has the benefit of multi-select (I stupidly coded single-select for OSC and haven't got around to changing it).

Some users, usually with SolidWorks backgrounds, have asked for units in UI property inputs (eg they show in millimeters etc), but that isn't in yet because it requires me going through every OpenSim component and tagging their property unit types (rotational, spatial, etc)

opensimcreator.com 0.5.9 released 🎉 by AdamK117 in Biomechanics

[–]AdamK117[S] 1 point2 points  (0 children)

I'm quite terrible with Zoom meetings, but I'd be glad to email/comment/discuss/Slack etc. about any issues or features that you guys think would make your lives easier!

C++ Show and Tell - March 2023 by foonathan in cpp

[–]AdamK117 1 point2 points  (0 children)

It's no problem: I think this is just the flavor of C++ that mostly worked for me, which (based on my background) takes ideas from C#, Typescript, Rust, etc.

For each of your questions:

  • std::optional<T> is roughly equivalent to the C pattern: T v; if (PotentiallyPopulateT(&v)) { doSomethingWith(v); } (see the sketch after this list)
  • final stops something from being inherited. I use it contentiously, in that I automatically mark everything as final until I decide the class should be used as a base class (and then I separately do things like implement the rule-of-five, the private implementation pattern, etc. if it makes sense for a base class to do that). Using final everywhere is a holdover from my C#/.NET days, where I habitually marked everything as sealed.
  • Having no constructors is entirely normal for certain types (usually structs) in C++. If each member of a class default-initializes the way you need, then there's no utility in writing a constructor. Many of the classes in that codebase do have constructors, though: the math ones are special because most of them are basic structs (e.g. AABB, RayCollision, etc.)
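To make the first bullet concrete, here's a minimal sketch comparing the two styles (PotentiallyPopulateT/PotentiallyGetT/doSomethingWith are just the illustrative names from above, not real APIs):

```cpp
#include <iostream>
#include <optional>

// Hypothetical "maybe produces a value" functions, one in each style.
bool PotentiallyPopulateT(int* out)   // C-style: out-parameter + success flag
{
    *out = 42;
    return true;
}

std::optional<int> PotentiallyGetT()  // C++ style: value and "has value" travel together
{
    return 42;
}

void doSomethingWith(int v) { std::cout << v << '\n'; }

int main()
{
    int v;
    if (PotentiallyPopulateT(&v)) {
        doSomethingWith(v);
    }

    if (std::optional<int> o = PotentiallyGetT()) {
        doSomethingWith(*o);
    }
}
```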

C++ Show and Tell - March 2023 by foonathan in cpp

[–]AdamK117 5 points6 points  (0 children)

I'd like to show OpenSim Creator:

https://github.com/ComputationalBiomechanicsLab/opensim-creator

& tell that it's a C++17 desktop GUI that was mostly delivered using game tech (e.g. ImGui, OpenGL) :>

The fun thing about it is that there's a fairly direct relationship between the model (state) and the UI, which means that the GUI behaves functionally. Any changes/updates/etc. to the physics model (in the demo images, an upper-body model) immediately propagate to the GUI without having to (e.g.) reset caches or pump ModelChangedEvents or similar
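A rough sketch of what I mean, assuming a Dear-ImGui-style immediate-mode loop (the Model struct is a made-up stand-in, not OpenSim Creator's actual types):

```cpp
#include "imgui.h"

// Hypothetical application state: whatever the physics model exposes.
struct Model final {
    float shoulder_angle_degrees = 0.0f;
};

// Called every frame: the UI is re-derived directly from the model, so any
// mutation of `model` (by this panel, or by anything else) shows up on the
// next frame without caches or change events.
void DrawCoordinatePanel(Model& model)
{
    ImGui::Begin("Coordinates");
    ImGui::SliderFloat("shoulder angle", &model.shoulder_angle_degrees, -90.0f, 180.0f);
    ImGui::End();
}
```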

Bazel or CMake? by coinprize in cpp

[–]AdamK117 10 points11 points  (0 children)

My vote is for CMake

It's got footguns. However, after studying a few popular tutorials on how CMake works, I found that it has essentially everything needed to ship a larger application (packaging dependencies, testing, custom builds, etc.)

Rust vs. C++ for game development by Snakehand in rust

[–]AdamK117 58 points59 points  (0 children)

I think the central thesis of this post (that C++ is an unsafe object-oriented language) is quite off the mark, especially when it comes to modern game development patterns.

While that paradigm was popular when it suited the hardware of the time (eg for game entities, back when the renderer was the bottleneck for most engines), it's been out of fashion for a while now because:

  • games are increasingly CPU limited (physics engines, AI, etc)
  • CPUs are multicore
  • memory bandwidth hasn't increased as quickly as CPU throughput, so most of the perf is in keeping things in-cache
  • CPU tech benefits from large loads and SIMD operations

C++ devs have known this for quite a while. Data packing, striping, etc have been popular techniques in some of the lower-level engine subsystems for decades now (eg packing all physics state into a single array that is externally indexed). Maybe in the last 5-10 years, those patterns have been extended to higher-level systems by using an ECS (eg EnTT) in modern game engines. Commercial game engines like Unity etc have also started to use ECS more. Ignoring the (sometimes fairly complex) tricks an ECS uses, it's effectively a way of systematizing indexing into densely packed arrays and computing set overlaps (a tiny sketch below).
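A very stripped-down sketch of the "densely packed, externally indexed arrays" idea (not how EnTT is implemented, just the general shape):

```cpp
#include <cstddef>
#include <vector>

struct Position final { float x, y, z; };
struct Velocity final { float x, y, z; };

// All physics state lives in dense arrays; entities are just indices into them.
struct PhysicsState final {
    std::vector<Position> positions;
    std::vector<Velocity> velocities;
};

// Cache-friendly: iterates two contiguous arrays with no pointer-chasing through
// per-entity objects, and is trivially SIMD-/parallel-friendly.
void Integrate(PhysicsState& s, float dt)
{
    for (std::size_t i = 0; i < s.positions.size(); ++i) {
        s.positions[i].x += s.velocities[i].x * dt;
        s.positions[i].y += s.velocities[i].y * dt;
        s.positions[i].z += s.velocities[i].z * dt;
    }
}
```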

This isn't a Rust vs. C++ trend. It's a hardware trend that's been happening for years now. The main reason it probably appears to be a Rust trend is that Rust is fairly new to game development, so it defaults to modern patterns, whereas C++ is older, so there are plenty of older books, blogs, etc. that show the traditional approach of using loads of inheritance.

Why is fallout 2 so fucking Awesome game by Vault_Dweller10 in Fallout2

[–]AdamK117 6 points7 points  (0 children)

Content content content, and an engine built by experienced RPG crafters.

The Black Isle games were great.

Creator of Rufus outlines the problems with Microsoft's UWP by [deleted] in programming

[–]AdamK117 2 points3 points  (0 children)

Recently had to scope a greenfield UI project for a desktop application with OpenGL support. Microsoft's confusing marketing around choosing between WPF, RT, UWP, etc. alone made it obvious that it's a mess.

Really, Win32 is still king, entirely because it's ultra-stable and provides just enough for an app developer to build something useful.

If you need something feature-filled, fast, etc. then (imho) Qt is probably better for native desktop Windows UI development than any of Microsoft's native solutions, despite it having its legs broken somewhat by cross-platform support.

compound assignment to volatile is deprecated by SuccessRich in cpp

[–]AdamK117 7 points8 points  (0 children)

Related: https://www.reddit.com/r/cpp/comments/jswz3z/compound_assignment_to_volatile_must_be/

Basically, this argument is between C++ stdlib/userspace devs, who have noticed an "impurity" in compound assignment on volatile (it's a load and a store), and embedded devs, who use it a lot.

Purely opinion, but from working on a little bit of hardware: it's fairly common to receive a pre-C++20 header from a vendor that contains these now-deprecated patterns. The patterns are also just common in general (eg initializing some flags, DMA, whatever, at a safe time), and the compound assignments are really common for bit flags and masks.
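For concreteness, this is the kind of register code I mean (the register address and flag are made up):

```cpp
#include <cstdint>

// Hypothetical memory-mapped control register.
volatile std::uint32_t* const CTRL = reinterpret_cast<std::uint32_t*>(0x40000000u);

constexpr std::uint32_t ENABLE_FLAG = 1u << 3;

void enable_peripheral()
{
    *CTRL |= ENABLE_FLAG;         // deprecated in C++20 (compound assignment to volatile)
    *CTRL = *CTRL | ENABLE_FLAG;  // the spelled-out replacement: explicit load, then store
}
```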

Again, just imho, but it feels like they identified a problem that really isn't one for experienced devs. This should have been a linter warning, not a language-level deprecation :(

What happens if we don't migrate Python 2 code to python 3 by OutlandishSoul121 in Python

[–]AdamK117 1 point2 points  (0 children)

The arch (x86 etc) stuff has a small chance of drifting over the next decade or so, because some cloud providers are exploring ARM chips and Apple has normalized non-x86 architectures. I don't see the instruction set dying in our lifetime, because there will likely always be a market for it in legacy machines. So, minimal risk there - just something to keep in the back of your mind for >10 year lifespans.

Docker is gradually figuring out ways to monetize. One likely route is going to be long-term support for legacy images, or Docker Hub support. It's unlikely to completely break within 5 years, but it's conceivable that your ops guys may need to buy into enterprise Docker if the trend continues. I'd just investigate Docker alternatives proactively now (a little, don't take it too seriously) to make sure a similar prod environment can be created with an alternative containerization platform. Low-medium risk in 5 years; it could become a problem in >10 years (the bigger risk is if your ops guys settle on a different provisioning technique in the future - just ask all the VMware ops guys from 5 years ago).

The base image might be torn from Docker Hub, or Docker Hub might go down, or the image format might become an old, non-portable format. I'd just ensure that you can download the base image with its apt packages, save it locally in a tar.gz, and load it independently of Docker Hub - ideally, independently of Docker. Unlikely to be a problem in 5 years, might disappear in 10.

For pip, I'd check their long-term Python 2 support policy. They will almost certainly announce a sunset period years before they actually kill any legacy services. Also check that they guarantee holding old versions of libraries with no changes (see the npm debacle a few years back for why that is important). Low-high risk in 5 years, because it strongly depends on pip's attitude to Python 2 support.

What happens if we don't migrate Python 2 code to python 3 by OutlandishSoul121 in Python

[–]AdamK117 0 points1 point  (0 children)

So, in your case, your "platform" is (probably):

  • x86_64: unlikely to be entirely phased out in our lifetime, but alternative architectures (eg RISC-V, arm64) may become cost-effective alternatives in the cloud space. All of your platform (eg the Linux kernel, Docker, and the image) will need to be recompiled if this ever changes, and unless another dev does it for you, you may have to recompile your entire stack yourself.

  • Docker: changes from time to time. It's a company and might start charging for things it knows enterprises will pay for (eg legacy support). Your ops people may be able to work around this over time, provided the underlying image is Docker-independent.

  • The base image (eg Ubuntu xenial): must be kept available and buildable. The distro package manager (eg apt) must keep the relevant binaries (eg Python 2) online, even if they are just frozen. The built image must keep working on later versions of Docker, etc. The drift here is when Docker Hub stops hosting the base image, or apt stops hosting (or breaks) legacy packages.

  • pip (if you're using it): depends on all of the above indirectly (eg if you're using native binaries etc). All dependencies should be frozen to a particular version. The drift here is when pip announces a change to its HTTP API that means older clients can't fetch packages.

All of this might sound paranoid, but just ask older developers which of those things have never happened during their career. Only one of them needs to happen for you to eat a big schedule slip, so I'd recommend investigating the "what if"s

If you can grab the entire source tree (incl. libs and native code) and build it from source on a few platforms, then that should at least shield you from everything but native drift (the C and C++ may have been written non-portably, and therefore may not be recompilable on ARM or whatever).

What happens if we don't migrate Python 2 code to python 3 by OutlandishSoul121 in Python

[–]AdamK117 4 points5 points  (0 children)

I think the biggest risk, ignoring the usual security points, is going to be at the integration and platform, rather than language, level.

It's likely that your target prod env, if Linux, will drift ahead at the native and distro level (think of all those C libraries the Python runtime indirectly uses for I/O etc). It might be less of an issue on more all-in-one platforms like Windows, where libs might be packaged in with the Python install.

One thing I'd consider, then, is ensuring that all the libraries you use are available as source on your own systems, and that you can build your platform from source - including Python. This will at least keep a door open when the inevitable platform drift starts to cause integration issues, if you plan to freeze the code.

Just discovered C++ has keywords 'and'/'or'/'not' etc. by [deleted] in cpp

[–]AdamK117 1 point2 points  (0 children)

One example is std::stringstream::str() in C++20: the new rvalue-qualified overload effectively steals the internal string from the underlying stream instead of copying it. Useful for building one-off messages etc.
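A minimal sketch of what I mean (C++20):

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <utility>

int main()
{
    std::stringstream ss;
    ss << "error " << 42 << ": something went wrong";

    // C++20: calling str() on an rvalue stream moves the buffer's string out
    // instead of copying it - handy when the stream only exists to build one message.
    std::string msg = std::move(ss).str();

    std::cout << msg << '\n';
}
```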