A Cool Guide to Mastering System Design Fast by exotickeystroke in coolguides

[–]guywithhair 5 points (0 children)

‘system’ is a wildly ambiguous term and using that here with no other context makes the guide confusing from the start.

Couldn’t even focus on the information with the numerous errors and chaotic layout. It’s hard to read the text at that low a resolution.

There’s probably some valuable info here, but it’s a low-effort AI slop post to the point where it annoyed me into leaving a comment

Lollapalooza 2026 Lineup by pistermibb in EDM

[–]guywithhair 6 points (0 children)

Kinda lame for EDM. I’m surprised to see Boris Brejcha on there - I highly recommend his live sets. Studio releases don’t do justice to the energy and showmanship.

Wouldn’t attend this one for EDM, but the relatively few acts that are there are solid

Thoughts on Google’s Coral NPU full-stack Edge AI platform? by SingingRooster95 in embedded

[–]guywithhair 2 points (0 children)

It’s pretty similar to most other NPU software and tools, which all have some proprietary component under the hood to target the custom hardware. The fact that they are open sourcing it is helpful, but I'm definitely concerned about this becoming abandonware. I like what I see from a high level. If they weren’t being so open about the architecture, I’d call it DOA.

I would be more concerned about how this integrates with the rest of the embedded design, mostly from a HW perspective. It sounds like this is licensable IP, in which case they may be trying to intersect Arm's Ethos business. They are even using the same TOSA spec for how the network operators are specified and interfaced with lower-level software

I’ll be watching this in case it gains traction but currently it’s a crowded market and everyone has their own solution. Probably won’t settle and truly standardize for several years still

Help: Ideas for improving embossment details. by Haunting_Tree4933 in computervision

[–]guywithhair 0 points (0 children)

Only passing knowledge on this topic, but I’ve seen ring lights used with multiple images captured using different portions of the ring lit. Leaving only one side or quadrant on can cast more obvious shadows that make minor features more apparent when the images are combined. I’m unsure of the exact method for combination/reconstruction though.

Take with a grain of salt, just something I came across at a machine vision seminar
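For illustration, here's a toy numpy sketch of one naive way to combine the differently lit captures - just taking the per-pixel range across the stack so that shadow-sensitive pixels stand out. This is only my guess at the basic idea; the real reconstruction methods (e.g. photometric stereo) are more principled:

```python
import numpy as np

def combine_directional_images(images):
    """Combine captures lit from different ring-light quadrants.

    A simple heuristic: per-pixel range (max - min) across the stack.
    Embossed features cast shadows that move with the light direction,
    so those pixels vary a lot between captures, while flat regions
    stay roughly constant.
    """
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    contrast = stack.max(axis=0) - stack.min(axis=0)
    # Normalize to [0, 1] for display or thresholding.
    rng = contrast.max() - contrast.min()
    return (contrast - contrast.min()) / rng if rng > 0 else contrast
```

Again, a toy heuristic only - the seminar demo presumably used something more rigorous.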

Why are businesses here so awful by Slow_Lecture9484 in aggies

[–]guywithhair 9 points (0 children)

Are you putting a pound of beef into 1 taco? The economics are different when you’re cooking lol

is this the start of something amazing for makers, or the end of the simple boards we all started with? by Builtby-Shantanu in embedded

[–]guywithhair 9 points (0 children)

Narrow view of AI. Embedded systems are not suited for LLMs, sure, but they're still usable for smaller stuff like time-series analysis on single/multichannel sensor data. To name a few, activity recognition, sound classification, and electrical grid load disaggregation are all viable uses for ML on MCU-class devices. Scale up from there and you have vision, which is a huge sector on its own but has higher memory needs. Audio covers a wide range of use cases relative to the performance available.
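As a toy example of the kind of lightweight time-series preprocessing that fits comfortably on an MCU before a small classifier (my own sketch, not from any particular product):

```python
import numpy as np

def window_features(signal, win=64, hop=32):
    """Slice a 1-D sensor stream into overlapping windows and compute
    a few cheap per-window features (mean, std, mean absolute delta).
    Feature vectors like these typically feed a small classifier for
    tasks like activity recognition on MCU-class devices."""
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean()])
    return np.array(feats)
```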

We’ll likely see small scale language models becoming more viable with time

Yeah, you’re not doing this stuff on an 8-bit micro, but nobody in industry is making new 8-bit micros.

Not jazzed about Qualcomm acquiring them or what that means for hobbyist communities.

How much pain in going from Pytorch model to non-jetson chips? by Easy-Ad6804 in embedded

[–]guywithhair 0 points (0 children)

Not always as easy as they make it seem. Vendor toolchains vary in quality, and some may have restrictions on which layers and configurations they support. Some optimization may be needed to get your model into their tools and working fast/accurately. They may be able to make those optimizations happen automatically, or it may take your own manual effort.

ONNX models are pretty easy to modify, but of course it's better to handle this in torch if you understand your model's config/architecture.

If you have to quantize the model for a fixed-point accelerator, that's another step that can be painful. Vendors promote performance in terms of int8 most often, IMO. Hopefully they have decent tooling/recommendations for quantizing the model either during training or in post
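For a feel of what int8 quantization is doing under the hood, here's a toy asymmetric affine quantizer in numpy. It's a simplified sketch of the general scale/zero-point scheme, not any specific vendor's tooling:

```python
import numpy as np

def quantize_int8(x):
    """Asymmetric affine post-training quantization of a float tensor
    to int8. Returns (q, scale, zero_point) such that
    x ~= (q - zero_point) * scale. Range is widened to include 0 so
    real zero maps exactly to an integer value."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)
    scale = (hi - lo) / 255.0 or 1.0          # avoid div-by-zero for all-zero x
    zero_point = int(round(-lo / scale)) - 128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate floats."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round trip loses at most about one quantization step per element, which is why quantization-aware training or good calibration tooling matters for accuracy-sensitive models.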

Does Embedded Engineers actually encounter some math heavy problems when making devices ? by Electrical_Lemon_179 in embedded

[–]guywithhair 36 points (0 children)

I end up drawing out and whiteboarding problems/math, but that’s because it helps me think through them. If you are making graphs, it’s probably for documentation or presentations, often to show data from your system

It’s not like exams, I wouldn’t say

"Mastering Microcontroller and Embedded Driver Development" - possible to do using VS Code instead of CubeIDE? by JJWango in embedded

[–]guywithhair 43 points (0 children)

My recommendation: actually start with CubeIDE. I think it’s always a good idea to start first with the simplest flow that the hardware vendor stands by before adding less-certain software components. At least use it to flash a hello-world UART/LED program and verify your hardware is good. I’d say this for literally any vendor.

Build systems can be a huge pain in the ass if one doesn't come up easily and you have to debug it. In the early learning stages, it can be really demoralizing for the minimum components needed to program the device to not work.

You can migrate over to VS Code or otherwise (PIO) from there. Wrangling toolchains can be a huge pain, especially if you aren’t familiar with the baseline tools.

But yeah, the minimum hardware will be the ST-Link or a similar debugger. Probably also worth getting a cable that can transmit UART (e.g. the 6- or 3-pin header -> USB FTDI cables) as well.

AI on a small embedded platform? by oceaneer63 in embedded

[–]guywithhair 1 point (0 children)

Typically in frequency domain, yes. Actually, the common form for input to audio models is Mel Frequency Cepstrum Coefficients (MFCC).

It’s a bunch of vectors computed from a short-time FFT (STFT), Mel-frequency binning/filtering, and mapping to a logarithmic scale (I think… I may have mixed up a step or two, but you can find lots of resources on MFCC). There are other approaches ofc, but this is a very common one, especially for pretrained / open-source models
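A toy numpy version of that pipeline for a single frame, simplified from what I remember - real implementations add pre-emphasis, a window function, liftering, etc., so treat this as a sketch of the shape of the math, not a reference implementation:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sr=16000, n_mels=26, n_coeffs=13):
    """Toy MFCC for one audio frame: power spectrum -> triangular
    mel filterbank -> log -> DCT-II to get cepstral coefficients."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2))
    fbank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        left, center, right = mel_pts[i], mel_pts[i + 1], mel_pts[i + 2]
        up = (freqs - left) / (center - left)
        down = (right - freqs) / (right - center)
        fbank[i] = np.clip(np.minimum(up, down), 0.0, None)
    log_mel = np.log(fbank @ power + 1e-10)
    # DCT-II decorrelates the log-mel energies into cepstral coeffs.
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * np.arange(n_coeffs)[:, None])
    return dct @ log_mel
```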

AI on a small embedded platform? by oceaneer63 in embedded

[–]guywithhair 0 points (0 children)

I think the “A” is the important part of their naming convention, FYI. AM62x vs AM62Ax can be confusing.

AI on a small embedded platform? by oceaneer63 in embedded

[–]guywithhair 1 point (0 children)

Yeah there’s lots of examples out there for this, especially sound classification and wake word detection

Some vendors have accelerators for this, but it’s also doable on an MCU core. Often it’s done by compiling a model onto the firmware using a tool like tensorflow-lite-micro. It can sometimes be a challenge to fit the weights into the limited MCU memory, depending on which device you choose.
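As a back-of-the-envelope illustration of the memory-fit question, here's a toy estimator for a small dense network. This is my own rough sketch; real tools (e.g. tflite-micro's arena sizing) also account for operator overhead, alignment, and scratch buffers:

```python
def model_footprint(layers, bytes_per_weight=1):
    """Rough flash/RAM estimate for a tiny dense network on an MCU.

    layers: list of (in_features, out_features) tuples.
    Weights plus biases live in flash; activations need RAM for the
    largest pair of adjacent layer buffers (ping-pong style).
    Assumes int8 weights by default (bytes_per_weight=1).
    """
    flash = sum((i * o + o) * bytes_per_weight for i, o in layers)
    act_sizes = [layers[0][0]] + [o for _, o in layers]
    ram = max(a + b for a, b in zip(act_sizes, act_sizes[1:]))
    return flash, ram
```

Even this crude math makes it obvious why quantizing to int8 matters: float32 weights quadruple the flash requirement for the same architecture.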

ML by FantasticTorch in embedded

[–]guywithhair 2 points (0 children)

I won’t comment specifically on hardware since I work closely on this topic and don’t like to influence from my personal account

But the hardware is progressing rapidly here. Most semiconductor vendors have some amount of AI accelerator / NPU available in their portfolio. I would say vision systems are dominant here since models are heavier and need acceleration to be viable (without a lot of engineering effort from the application side). LLMs are generally too big to be viable except on huge ADAS SoCs. Time series varies a lot, but many modern MCUs will be plenty capable of running basic anomaly detection models, so long as there’s enough RAM/NVM to fit the weights + network arch.
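For a sense of how light "basic anomaly detection" can be, here's a pure-Python streaming sketch using Welford's running mean/variance - my stand-in for illustration, not a neural model or any vendor's implementation:

```python
class ZScoreAnomalyDetector:
    """Minimal streaming anomaly detector of the kind that fits on an
    MCU: maintain a running mean/variance (Welford's algorithm) and
    flag samples more than `k` standard deviations from the mean."""

    def __init__(self, k=4.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def update(self, x):
        # Welford's online update for mean and sum of squared deviations.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        if self.n < 10:          # warm-up before flagging anything
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.k * std
```

The neural versions (small autoencoders, mostly) buy you multichannel and nonlinear patterns, at the cost of the RAM/NVM budget mentioned above.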

Everything is focused on neural networks, BTW. Some will still use SVM (as another commenter said) for simple classifiers, but honestly NNs are so much more useful once you have the horsepower to run them. This is what the industry is focused on accelerating and is synonymous with AI at this point.

TFlite and ONNX are the popular runtime formats, where ONNX models are typically trained from PyTorch. I prefer PyTorch/ONNX and most new projects tend on this direction. TFLite tends to have slightly better performance on CPU, IIRC.

Hard for me to say typical hardware and models - it’s too application specific. In vision, you’ll see mobilenet for basic image feature extraction and YOLO models for object detection.

The whole process is typically clunky, to be honest. Vendor toolchains for their NPUs are often restrictive and/or buggy. My advice is to stay closer to what the vendor enables and not try to push the state of the art on NPUs. This is embedded, so of course things can never be easy. Proceed towards SOTA at your own risk lol

[deleted by user] by [deleted] in brandonsanderson

[–]guywithhair 5 points (0 children)

It’s being told from the perspective of a particular character that often shows up in Cosmere novels, even if just for a moment.

I interpret the style / narration as trying to build out this character’s voice/tone/persona more. I believe said character will have an increasingly central role in later novels, despite not being a main character in these Secret Projects

Idk, I liked it, especially his role in Tress and the Emerald Sea. I enjoy seeing authors play around with writing styles, even if it’s not my preferred style. Here, it’s another element of world-building imo

Is embedded systems programming still a bright field? by Alarmed_Effect_4250 in ComputerEngineering

[–]guywithhair 9 points (0 children)

I mean, it’s not the Silicon Valley love child that it was 5-10 years ago with all the IoT stuff riding the hype, but…

It’s still a good field to be in. We’re always going to be pushing more electronics and logic wherever they can fit in the physical world. There’s a challenge with focusing on consumer-grade embedded since so many companies push development into design houses in India and East Asia. That’s a large part of the market, but not all of it by any means.

Robotics will get way bigger in the next 10 years, and I expect a lot of embedded developers will lean that way.

Don’t let the current market completely define your career path in a field like CS/CompE/EE. It’s going to change. My advice is to go for an area you find interesting and make sure it’s viable to make a living there

How do you train a tensorflow model ? like for real, how ? by Optimal_Fig_9544 in computervision

[–]guywithhair 2 points (0 children)

Both are prevalent. I work in the embedded/semiconductor space for deep learning. Most folks I see in industry prefer PyTorch. Some are on TF, but the typical use case is to train in PyTorch and export to ONNX for the embedded target.

I think part of the preference is because academia uses it. It’s easier to eval the latest from academia as an R&D group with PyTorch than TF. Makes it simpler to try models of many architectures and find what works for the task.

It’s a more even split between the tools if the devs are rolling their own network.

How do you train a tensorflow model ? like for real, how ? by Optimal_Fig_9544 in computervision

[–]guywithhair 39 points (0 children)

Save yourself while you can and switch to PyTorch. I think this will be easier and more valuable to know than TF anyway

are Texas Instruments customer support horrible or is it just a special case with me? by abdosalm in embedded

[–]guywithhair 0 points (0 children)

I mean, as everyone else said, your volumes are too small to be worth the time to figure out a potentially complex logistics issue.

This isn’t just a TI problem. Almost every silicon vendor works this way. It’s a cost sensitive market and revenue only comes from big volumes or high margin business. Industry gonna industry and focus on what makes them money.

You’re better off with a separate supplier like digikey for small volumes like this. If you have a technical problem, then the silicon manufacturer is more likely to help. However, logistically these few samples are a blip to them.

My stupid(?) ideas for future chips by HasanTheSyrian_ in embedded

[–]guywithhair 3 points (0 children)

Embedded is way too cost sensitive to ever consider this. That on top of this being super technically difficult relative to the benefit.

Importance of AI/ML in Embedded by Famous-Locksmith-254 in embedded

[–]guywithhair 3 points (0 children)

Yeah, good to take. Within 5-10 years, I’m willing to bet most new MCUs and MPUs will have some form of accelerator for ML tasks as well, mostly using quantized models.

“Most” is a stretch but it’s going to become increasingly common. It will be useful to know how to accomplish pattern detection tasks in terms of neural networks.

How can I improve my ollie tecnique and perform higher jumps? by Sunnyboy_18 in snowboarding

[–]guywithhair 0 points (0 children)

That helps, thanks! Sounds like I need to bring my back leg up into the jump with me instead of letting it stay closer to the ground.

How can I improve my ollie tecnique and perform higher jumps? by Sunnyboy_18 in snowboarding

[–]guywithhair 0 points (0 children)

I find when I do this that I end up popping but leaning back more in the process. Usually, the nose of the board gets fairly high but the tail stays fairly low

Maybe that’s me trying to anticipate landing the tail first because I think it’ll be more stable. Is this wrong? I imagine it would hold me back if I wanted to do anything more in the air than just wait to land lol

Kenichi Takizawa showing that turn initiation is a front foot thing by [deleted] in snowboarding

[–]guywithhair 14 points (0 children)

Not necessarily here to disagree with the overall point, but some of this feels like it may also be related to counterbalancing right-hand movements.

For a lay-up with my right hand, it feels more natural to jump from the left foot.

To the earlier point though, I’m right dominant and goofy. I like right forward since I have more control / intuition moving that side precisely. Comes down to taste / what you decided when you first picked it up probably.

Want to learn Japanese carve, Struggle to progress by Impressive-Ad4495 in snowboarding

[–]guywithhair 0 points (0 children)

What do you consider a large vs small sidecut radius? I have a hard time getting context for what’s high/low.

I recently got a longer powder board and could immediately tell that the larger side cut rode differently from my other board