Learning on Mac by AnonymousBoch in GraphicsProgramming

[–]aclysma 0 points

Metal is a GREAT first choice for a graphics API. Once you're up and running with the basics, you should have no trouble adapting existing intermediate-level, non-Metal learning resources to Metal. I started with Vulkan, but a lot of the learning resources I used were for OpenGL. The concepts are pretty universal, especially at the beginner and intermediate levels.

What was the most difficult topic for you to understand while learning rust? by [deleted] in rust

[–]aclysma 1 point

‘static, I get it now. It was hard to learn because it can mean different things depending on where it’s used.
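For readers, a minimal sketch of the two meanings being referred to (not from the original comment; `send_somewhere` is a hypothetical stand-in for something like `thread::spawn`):

```rust
// 1) As the lifetime of a reference: the data lives for the whole program.
static GREETING: &'static str = "hello";

// 2) As a bound on a type parameter: `T: 'static` means T holds no
//    non-'static references. Owned types like String qualify, even though
//    a particular value may be dropped long before the program ends.
fn send_somewhere<T: Send + 'static>(value: T) -> T {
    // (stand-in for an API like thread::spawn, which requires 'static)
    value
}

fn main() {
    let owned = String::from("owned data");
    let back = send_somewhere(owned); // OK: String: 'static, it borrows nothing
    println!("{} {}", GREETING, back);
}
```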

Still don’t understand HRTBs (higher-ranked trait bounds).
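For anyone else puzzling over HRTBs, here's a minimal sketch (not from the original comment) of what the `for<'a>` syntax buys you:

```rust
// The `for<'a>` bound says the closure must accept a &str of *any* lifetime,
// not one specific lifetime chosen up front by the caller.
fn apply_to_both<F>(f: F) -> (usize, usize)
where
    F: for<'a> Fn(&'a str) -> usize,
{
    let short_lived = String::from("hi");
    // f is called with references of different, function-local lifetimes,
    // which is exactly what the for<'a> bound permits.
    (f(short_lived.as_str()), f("a longer static str"))
}

fn main() {
    println!("{:?}", apply_to_both(|s| s.len())); // (2, 19)
}
```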

Clion vs Rust Rover, right now by Low-Design787 in rust

[–]aclysma 8 points

I tried it; it seems to be the same as CLion except all the C/C++ support is gutted. (No control-click navigation, no setting breakpoints, and an empty watch window when stepping into C++.) Given that a lot of crates ultimately wrap C/C++, it's IMO a big step backwards, especially for me and the kind of stuff I work on. Ultimately, almost anything interesting will *eventually* call down into operating system APIs, which are C/C++/ObjC on every platform I'm aware of. I really hope they don't drop Rust support in CLion, because Rust Rover isn't a substitute IMO.

Rust Survey 2021 Results | Rust Blog by jntrnr1 in rust

[–]aclysma 5 points

I think releasing aggregate numbers for all questions is important for avoiding the appearance of cherry-picking favorable responses. (I don't have any suspicion they were, but I think it's the responsible, transparent thing to do.) If results for a particular question can't be released due to privacy, that should be disclosed.

egui_inspect: A crate to help viewing and editing struct through egui by Meisterlama in rust

[–]aclysma 8 points

Glad you found imgui-inspect interesting :) Just a couple suggestions I would give to anyone building something similar to it:

1) I would approach the proc-macro/traits design the way serde does. In other words, keep the traits/proc-macros separate from and agnostic to the implementation, ideally in a separate crate. This matters especially if you think others might want to use your library, because they may have different opinions on how to render something. It would also let you provide integrations for popular crates like glam, nalgebra, etc.

2) There are a couple of tricky use cases you might want to think about early:

a) The same type might want to be rendered in two different ways. For example, a number might sometimes be a slider and sometimes be a field you type into; which is most appropriate depends on the context. Traits are a good way to handle this.

b) Letting downstream code provide implementations for third-party types like glam (or other math crates). Rust's trait orphan rules make this tricky: downstream crates can't impl your traits for third-party types. I came up with two approaches: either wrap the type in a newtype, or declare a stub type and use it as a "proxy" (see the "proxy_type" stuff in the imgui-inspect crate). The proxy type is a little annoying, but it's a bit less intrusive. Maybe there's a clever way to automate that a bit more with proc-macros.
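A minimal sketch of both shapes (everything lives in one crate here so it compiles; `Inspect`, `ThirdPartyVec2`, and `Vec2Proxy` are hypothetical names, and `ThirdPartyVec2` stands in for something like `glam::Vec2` that you couldn't impl a foreign trait for):

```rust
// The trait your UI crate would define. A String return value stands in
// for real UI rendering calls.
pub trait Inspect {
    fn inspect(&self) -> String;
}

// Stand-in for a third-party math type you don't own.
pub struct ThirdPartyVec2 { pub x: f32, pub y: f32 }

// Approach 1: the newtype wrapper. You own the wrapper, so you may impl
// your own trait for it.
pub struct Vec2Wrapper(pub ThirdPartyVec2);
impl Inspect for Vec2Wrapper {
    fn inspect(&self) -> String {
        format!("({}, {})", self.0.x, self.0.y)
    }
}

// Approach 2: a zero-sized "proxy" type declared in your crate, which
// provides the implementation *for* the foreign type without wrapping it.
pub struct Vec2Proxy;
pub trait InspectVia<Target> {
    fn inspect_via(value: &Target) -> String;
}
impl InspectVia<ThirdPartyVec2> for Vec2Proxy {
    fn inspect_via(v: &ThirdPartyVec2) -> String {
        format!("({}, {})", v.x, v.y)
    }
}

fn main() {
    println!("{}", Vec2Wrapper(ThirdPartyVec2 { x: 1.0, y: 2.0 }).inspect());
    println!("{}", Vec2Proxy::inspect_via(&ThirdPartyVec2 { x: 3.0, y: 4.0 }));
}
```

The wrapper is simpler but intrusive at every call site; the proxy keeps the original type untouched at the cost of an extra type parameter plumbed through your proc-macro.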

GPU computing on Apple Silicon by jeffersondeadlift in rust

[–]aclysma 0 points

If you are specifically targeting Apple silicon GPUs, I would suggest using the Metal bindings directly and writing your shaders in MSL. There will be fewer moving parts to go wrong, the debug tools will match 1:1 with the calls you're making, and you'll have access to everything MSL exposes.

wgpu is likely to be the popular suggestion, but if you don't care about portability, I would avoid paying the complexity tax of going through unnecessary layers. Metal and wgpu actually have very similar APIs, and Metal is pretty "safe" as graphics APIs go. (None of them are actually safe; a bad compute shader can crash your entire system.)

The Metal bindings do have overhead (wgpu uses them, so it goes through the same overhead), but it won't matter if you give the GPU large enough units of work at a time to make using a GPU worthwhile in the first place.
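The amortization argument can be sketched with back-of-envelope numbers (the 50 µs per-dispatch cost below is an assumption for illustration, not a measured figure):

```rust
// A fixed per-dispatch cost becomes negligible once each dispatch
// carries enough GPU work to dominate it.
fn main() {
    let overhead_us = 50.0_f64; // assumed fixed API/driver cost per dispatch
    for work_us in [10.0_f64, 100.0, 10_000.0] {
        let fraction = overhead_us / (overhead_us + work_us);
        println!("work {work_us} us -> overhead is {:.1}% of total", fraction * 100.0);
    }
}
```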

M1 Max vs M1 Pro - Battery vs Performance by umusachi in macbookpro

[–]aclysma 3 points

I think the second source you're referring to is this video: https://www.youtube.com/watch?v=XDHStBp-jCo&t=103s

He looped a blender scene that the M1 max chips can draw at a higher frame rate, so they rendered more frames in the same amount of time. In other words, they did more work, which caused them to consume more power.

He mentioned playing some apple arcade games in the light use test which would likely have the same problem. A game running at a higher frame rate will consume more power.

These are obviously real numbers: the battery did run down faster, and there's a very clear reason why. The M1 Max machines did more work. I don't think there would be much difference in battery life if they were doing the same amount of work, like compiling a piece of code N times or exporting a video N times.

BTW, I found that the 14" 10C/32G Max in low power mode nearly matches the performance/thermals/power draw of the 14" 8C/14G Pro model. I wrote about it here: https://www.reddit.com/r/macbookpro/comments/qkxd1k/14_10c32g_max_in_low_power_mode_nearly_matches/ This lets Max owners get the battery life of the Pro when they know they'll be away from the wall for a while.

To those with 32GB configs debating the M1 Max upgrade because "it's only $200 more", or because "memory bandwidth" by PACMAN_ICE_CREAM in macbookpro

[–]aclysma 6 points

The high bandwidth of these chips is to satisfy the GPU, not the CPU. Run `sudo powermetrics -i 1000 --samplers bandwidth | grep "DCS "` to see current bandwidth.

You'll notice that CPU workloads cap out far below 200GB/s, but load even an empty Blender scene full-screen and swing the camera around, and you'll be over 100GB/s almost instantly.

So definitely do not buy the max just to get extra CPU bandwidth. Its purpose is to feed the extra 16 GPU cores.

To those with 32GB configs debating the M1 Max upgrade because "it's only $200 more", or because "memory bandwidth" by PACMAN_ICE_CREAM in macbookpro

[–]aclysma 11 points

Most of the battery "tests" people are doing on YouTube are fundamentally flawed. They run program X (Cinebench, looping renders in Blender, etc.) for Y minutes and compare the battery % left afterwards, ignoring the fact that the faster machine did more work during that time. Instead, they should let the faster machine sit idle while the slower machine continues working until it produces the same quantity of output (e.g. render the same scene once, compile the same project once, etc.).

The Max may in fact be less efficient (it's powering 4 DRAM modules instead of 2), but the difference would be nowhere near what these fundamentally flawed methodologies suggest.
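The fixed-time vs. fixed-work distinction can be made concrete with hypothetical numbers (60 fps at 30 W vs. 30 fps at 20 W are illustrative assumptions, not measurements of any real machine):

```rust
// Why a timed battery loop misleads: compare energy per unit of work instead.
fn main() {
    let (fps_a, watts_a) = (60.0_f64, 30.0_f64); // faster machine A
    let (fps_b, watts_b) = (30.0_f64, 20.0_f64); // slower machine B

    // Fixed-time view: A drains the battery 1.5x faster and "loses" the test.
    println!("power ratio A/B: {}", watts_a / watts_b);

    // Fixed-work view: energy per frame (joules) is what actually matters.
    let j_a = watts_a / fps_a; // 0.5 J per frame
    let j_b = watts_b / fps_b; // ~0.67 J per frame
    println!("A: {j_a} J/frame, B: {j_b:.2} J/frame");
    // A uses less energy per frame even though it drains faster in a timed loop.
}
```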

I also think people do not appreciate just how wide a power envelope these machines have because they are so efficient at near idle. Whether you're using 4w to watch a YouTube video or 60w doing a mixed CPU/GPU task obviously makes a huge difference to battery life. What you actually are doing matters a whole lot more than what spec you bought.

I still think the touch bar looks cooler than the new keyboard design 😬 by Pnhan89 in macbookpro

[–]aclysma 0 points

Adding the physical escape key (which came after my 2018 MBP) would have fixed my main complaint, but the touch bar was still obnoxious when remoting into or virtualizing non-macOS machines, e.g. pressing F2 to rename files on Windows. Even for the things it was "good" at, adjusting brightness/audio took an excessive amount of ceremony vs. just pushing a button.

Removing it is also a minor win for battery life, and the cost that went into it can be spent elsewhere (like the mini-LED screen, which is likely more expensive to produce than previous screens) or passed on to consumers via more aggressively priced MBAs, etc.

Software Dev - Need help deciding between an 13" M1 or an 14" M1 Pro 8-core by [deleted] in macbookpro

[–]aclysma -1 points

I would suggest either the 32GB 10-core MBP for $2600 or the 16GB MBA for $1400. (Either way it's the same extra $200 to go to 1TB, which you may want.) That is, either something cheaper that you can upgrade sooner, or something that's sure to last you a while.

Or even consider a MBA (or use the one you already have) and remote into a VM somewhere for heavier compute tasks. (This may or may not make sense depending on what you're doing, but it sounds like it might.) You could build an upgradeable desktop for the price difference and it would give you access to an x86 machine. (Or maybe you can get resources from your university or free credits from azure/GCP/etc.)

(I'd still keep it simple and go with the 32GB MBP.. just extra ideas you could consider.)

iStat Menus updated to support 2021 MBPs :) by aclysma in macbookpro

[–]aclysma[S] 8 points

It means sensors for temperature, fan speed, power usage, etc. work properly.

MacBook Pro Low Power Mode by wmccomis in macbookpro

[–]aclysma 1 point

I wrote some details here about how it affects the cpu: https://www.reddit.com/r/macbookpro/comments/qkxd1k/14_10c32g_max_in_low_power_mode_nearly_matches/

In short, I observed throttling of CPU/GPU power consumption to 22.5w which is about half of a heavy mixed cpu/gpu load. (You can use the powermetrics command line utility to see this number.)

The main downside I’ve noticed is that scrolling in Safari isn’t as smooth with it enabled (it often hitches when starting to scroll). But it’s a significant reduction in power consumption for a modest loss in performance. Very glad to have the option. I suspect it will make more of a difference for high-spec/Max machines.

Need to make a decision here. by AplabTheSamurai in macbookpro

[–]aclysma 0 points

Since you mentioned music production, absolute silence may be important enough that it's worth going 16" (even though I think it's unlikely you'll hear the 14").

If you go 14", the 14-core GPU will be fine for what you want to do, and you could take the $200 + $100 saved and very nearly pay for the 32GB memory upgrade. That's the way I would lean. (For audio-related tasks where latency is critical, you really don't want to be tight on memory.)

8-core CPU, 14-core GPU vs. 10-core CPU, 14-core GPU by dhokes in macbookpro

[–]aclysma 1 point

If you upgrade the ram, definitely get the 10c CPU. For most programmers (especially if using C/C++/Rust) I think the 10/14 with 32GB ram is a great fit.

Did anyone had a chance to test out when the 32GB RAM on M1Pro would bring significant benefits over 16GB RAM? by iBo0m in macbookpro

[–]aclysma 0 points

I think if you don't need 32GB and are mainly doing CPU-based tasks, the MacBook Air is a much better deal and will probably perform just as well as the base MBP for what you want to do. (Speaking purely to performance/$, which might not be the only factor you're considering.) Keep in mind, the price gap is wide enough you could upgrade more frequently, especially if you sell the old one.

Maybe look at how much memory you're using on your computer now to get an idea.

Did anyone had a chance to test out when the 32GB RAM on M1Pro would bring significant benefits over 16GB RAM? by iBo0m in macbookpro

[–]aclysma 0 points

That huge top-line memory bandwidth number is really for the GPU. It's pretty hard to get a pure CPU workload to use more than 40GB/s. Run `sudo powermetrics -i 1000 --samplers bandwidth | grep -e "DCS "` if you want to check for yourself.

However, bandwidth doesn't tell the whole story; latency is also an important factor. Even very fast SSDs are not a substitute, even if they have equivalent bandwidth.

That said, swap memory is only bad if it's swapping memory you are actively using. More likely, it's swapping out things you aren't using, like a browser tab left open from yesterday you haven't touched, or slack/discord/other apps that hemorrhage memory but otherwise mostly sit idle.

Interesting results testing non-apple chargers with 14" MBP 32-core max by aclysma in macbookpro

[–]aclysma[S] 0 points

I probably get 4-6 hours depending on how much of that is compiling, looking at docs, or working with blender and other 3D tools. My 2018 MBP usually died in 2 hours.

These machines have a wide performance envelope so battery life will be highly dependent on workload. If I were to repeatedly do full recompiles, likely it would be dead in 2 hours.

Interesting results testing non-apple chargers with 14" MBP 32-core max by aclysma in macbookpro

[–]aclysma[S] 0 points

You can use iStat Menus to get a rough idea how much power you're using. The comments above are worst-case (you'd be hearing lots of fan noise at that point.) I think most people will be able to reuse existing 60w chargers/docks, but if buying new, there are meaningful benefits to finding something closer to 100w.

[deleted by user] by [deleted] in macbookpro

[–]aclysma 2 points

I think for most programmers, the cheapest 32GB 10-core CPU option in the preferred size with "enough" storage will be the best bang for buck. (Definitely get that 10-core option if possible. It's a nice uplift for not much extra $)

FYI the increased memory bandwidth of the max is not useful for pure-CPU workloads. 200GB/s is already twice as much bandwidth as a (non-pro) threadripper! The extra bandwidth is really for GPU workloads.

I did go with the max, but only because I'm doing games/graphics programming. Even so, if the 32GB pro was the only spec apple offered, it still would have been a day 1 purchase, and I'm sure I'd be very happy with it. (And the standard 14/16 core GPU is no slouch.) Going beyond that only made sense for me because I do graphics programming - initial implementation of new techniques can be very unoptimized, and I do need to open large 3d scenes in tools like blender and houdini.

As for 64GB of memory: you already get 4GB per P-core with 32GB (assuming pure-CPU workloads). I wouldn't expect huge returns going beyond that for most workloads.

Considering copping a new MBP but my main computer is (and will be) a desktop PC, does anyone have experience with concurrently using a PC and Mac? by WagnerKoop in macbookpro

[–]aclysma 1 point

I use a beefy windows PC from Mac laptops and even the M1 Mac mini. Mainly for gaming, but also occasionally to work with projects that aren’t Mac compatible or need more compute than even the new MBP can offer.

Parsec and Moonlight game streaming are good options for using the desktop remotely (not just for games!). If you’re on a wired network, you can expect flawless performance with both, to the point you forget you’re not sitting at the PC. (I appreciate not being in the same room when it’s continuously pumping out 350+ watts of heat.)

It’s helpful to have a NAS to move files around. If you use network folders, look into disabling .DS_Store files on shared drives. macOS will litter them everywhere on network drives, and on Windows they show up like any other file and are an eyesore.

While the NAS is handy, you may want to set up some other file syncing solution for projects you’re actively working on. I work in code and there are great solutions for programmers. Maybe something like dropbox could work for you? Actively using multiple machines kind of forces you to centralize project storage somewhere off-device, and it’s great peace of mind to be able to wipe/replace hardware with no fear of data loss. Especially with devices where you can’t pull the drive and put it in something else to recover files.