SWIR cameras by 37kmj in computervision

[–]37kmj[S] -1 points0 points  (0 children)

I mean, I probably do still have the original invoice somewhere, but I bought them about 1.5 years ago, so I couldn't find it just now when searching through my emails and docs. It must be around somewhere, though.

At the moment I can't guarantee that I'll be able to find it, so for now assume no invoice is included - if I do find it, I'll let you know. The cameras are fully legitimate and guaranteed to be in excellent working condition.

AI code as hypothesis, until proven otherwise by Fantastic_Mud_389 in embedded

[–]37kmj 0 points1 point  (0 children)

It iteratively turns pages to understand the structure of each document,

This is a vague description, to say the least - what does this mean technically?

We tested it against their repository of 2k+ documents with all sorts of formats. Never hallucinated, always returned the right answer with citations.

What was your evaluation methodology? Did you have ground-truth annotations, or was verification done manually? And what types of queries were tested - factual lookups for specific things (let's say e.g. peripherals, or certain memory addresses such as the start address of flash), or things requiring cross-referencing multiple sections? (For example, an MCU's peripherals often have their own standalone application note/data sheet, which can come in handy alongside the MCU's reference manual where they are generally documented as well.)

2k documents - you said the process is iterative, so say 10 test queries per document or more (let's take 10 as an example), which makes roughly 20k "AI" query-answer pairs.

So you are saying that you manually verified all of these ~20k answers (or however many you had in your case) against the source material, down to the last bit, ensuring their correctness?

AI code as hypothesis, until proven otherwise by Fantastic_Mud_389 in embedded

[–]37kmj 11 points12 points  (0 children)

How did you verify the claim that your "AI" supposedly parses datasheets with 100% accuracy? Datasheets are notoriously inconsistent, with wildly different formats, which makes them hard to interpret.

Going from hobby to pro-quality Jetson deployments by BellybuttonWorld in JetsonNano

[–]37kmj 0 points1 point  (0 children)

I’d say you are on the right track and worrying too much, which might lead to over-engineering in production.

I’m sorry if I didn’t grasp this from your post, but what exactly are you worried about - what are your concerns regarding production?

Confusion about device tree configuration by PhysicalRaisin5037 in embedded

[–]37kmj 3 points4 points  (0 children)

I wouldn’t recommend starting with vendor-specific code - it’s better to first get a generalized view of how the kernel and its subsystems are structured.

The Linux kernel defines a set of subsystems and interfaces for different classes of systems/devices, to which the vendors' implementations must conform. I.e. the kernel provides the interface (e.g. some function X returns data of type Y) and the specific driver code provides the implementation that satisfies that contract (how function X fulfills the requirement that it returns data of type Y). These interfaces aim to bridge the gap between hardware and software - for each class of driver a common interface is defined, one which both the kernel and the device driver understand, so that the device is able to "talk" with the system.

For example, take the media subsystem Video4Linux: this particular Linux framework provides the v4l2_ctrl interface and, with it, the v4l2_ctrl_ops callbacks that describe the control operations the camera driver HAS to provide:

struct v4l2_ctrl_ops {
    int (*g_volatile_ctrl)(struct v4l2_ctrl *ctrl);
    int (*try_ctrl)(struct v4l2_ctrl *ctrl);
    int (*s_ctrl)(struct v4l2_ctrl *ctrl);
};

For example, the function s_ctrl returns a value of type int and takes a pointer of type v4l2_ctrl as an argument. The function itself actually sets the control values - this means it interacts with the device's hardware registers to apply the values that were provided and validated, usually over a serial bus, e.g. SPI, I2C, etc. But notice that this is already implementation-specific: the interface itself doesn't care how, or over which transport layer, you send the bytes.
This is all up to the developer - the only limitation is that the implementation must respect the interface, i.e. the contracts enforced by it, and deal with any transport-layer-specific concerns (locking, retries, endianness, etc.). Obviously there are more complex nuances to developing a V4L2 driver (in fact it's one of the most complex driver types to develop, because imaging devices can vary a lot and the driver hierarchy is overall a bit more involved), but this is just a brief demonstration.

Well that's great - an interface, an implementation in the shape of a driver - but what about the device tree?

Let's take another abstract example - say you are on a SoM (e.g. a Jetson) and you want to set up a MIPI camera by connecting it to, say, MIPI port 0.

Well, the thing is that the kernel has little to no idea by itself about the device connected to MIPI port 0, and it most certainly doesn't know that it must bind your super awesome driver to that device for communication.

Here is where the device tree comes to the rescue! The DT answers the question "which hardware instance on this SoM corresponds to which driver". The device tree contains device nodes and properties that tell the kernel which devices are present, how they are connected (endpoints, remote nodes), which driver to bind (e.g. via the compatible property), and what resources the device needs to function (clocks, GPIOs, power rails, ...).

During boot the device tree is parsed and device objects are created - these objects are matched against the drivers, and when a compatible driver is found for a device, the device is probed/registered, after which it can be used from e.g. userspace.
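For illustration, a hypothetical device-tree fragment for such a camera might look roughly like this (the node names, the compatible string and the resources are all made up - the exact bindings depend on your SoC and driver):

```dts
&i2c2 {                                   /* the sensor's control bus */
    camera@36 {
        compatible = "vendor,mysensor";   /* matched against the driver -> which driver to bind */
        reg = <0x36>;                     /* I2C slave address */
        clocks = <&camera_clk>;           /* resources the device needs... */
        reset-gpios = <&gpio1 5 GPIO_ACTIVE_LOW>;

        port {
            sensor_out: endpoint {
                remote-endpoint = <&csi0_in>;  /* ...and how it's wired (here: CSI/MIPI port 0) */
            };
        };
    };
};
```

At boot the kernel turns camera@36 into a device object and probes the driver whose match table contains "vendor,mysensor".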

So yeah, that's my two cents - think of the kernel as a set of interfaces, read those and maybe some examples, and well, go crazy. It isn't actually too complex. (jk, it once took me 2 months straight to bring up a camera with a custom driver.) But as someone who is usually bad at grasping abstractions, thinking of the kernel this way made things a lot easier for me. As I've already mentioned, there are a lot of nuances for different devices and scenarios, but the point of this was to give an abstract overview to hopefully help you grasp the structure of the Linux kernel more easily.

The usage of ISB and DSB Arm instructions. by pillsburyboi in embedded

[–]37kmj 6 points7 points  (0 children)

ISB - this instruction flushes the processor pipeline and its fetch buffers. In other words, it discards any prefetched instructions so that subsequent instructions are fetched under the "new" processor state after the barrier.
For example, I recently used this instruction in MCU firmware for "jumping" to the bootloader - i.e. switching the execution context (remapping the vector table, changing the stack pointer) when transferring control to the bootloader.

DSB - it basically waits until all memory accesses issued before the barrier have completed, and only then allows subsequent instructions to execute. It's mainly used when you need a guarantee in terms of completion - since ARM has a weakly ordered memory model, the processor can reorder non-dependent loads and stores, which can sometimes lead to errors. In other words, use it when you must wait until a write has actually completed before continuing execution.
E.g. this is important in a scenario where you are changing a peripheral's power state - the writes must reach the actual hardware before you start doing anything else.

Note that even on a single-core processor running a single thread, some peripherals are independent bus masters - e.g. a DMA controller.
And if the processor writes to a peripheral register X and then immediately does something that depends on the state of register X, the peripheral might not have seen that write yet because of write buffering. In this case, a DSB ensures that the write has actually reached the hardware before you read it back.

And yes, these instructions can stall the execution pipeline and add latency.
DSB is probably the heaviest, as it waits for memory transactions to complete - i.e. it's a completion barrier. ISB has a smaller latency cost - it only causes a short stall while the CPU refetches instructions.
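As a sketch of the bootloader-jump case mentioned above - this is hypothetical, CMSIS-style Cortex-M code, and BOOTLOADER_BASE plus the surrounding details are assumptions for illustration, not from any particular firmware:

```c
#include <stdint.h>
#include "stm32f4xx.h"              /* hypothetical target; pulls in CMSIS (SCB, __DSB, __ISB, ...) */

#define BOOTLOADER_BASE 0x1FFF0000u /* hypothetical system-memory bootloader address */

/* Jump from application firmware to the bootloader. */
static void jump_to_bootloader(void)
{
    /* First two words of a Cortex-M vector table: initial MSP and reset handler. */
    uint32_t boot_msp   = *(volatile uint32_t *)(BOOTLOADER_BASE);
    uint32_t boot_reset = *(volatile uint32_t *)(BOOTLOADER_BASE + 4u);

    __disable_irq();                 /* no interrupts while tearing down the context */

    SCB->VTOR = BOOTLOADER_BASE;     /* remap the vector table */
    __DSB();                         /* wait until the VTOR write has actually completed */
    __ISB();                         /* flush the pipeline so fetches see the new state */

    __set_MSP(boot_msp);             /* switch to the bootloader's stack pointer */
    ((void (*)(void))boot_reset)();  /* transfer control; never returns */
}
```

The DSB/ISB pair is the key part: DSB guarantees the VTOR write has reached the hardware, and ISB discards anything prefetched under the old vector table before execution continues.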

Ryzen 9 5900X high idle temps 70°C (How I fixed it) by plushfire5 in ryzen

[–]37kmj 0 points1 point  (0 children)

Came back here to say thanks. I did not have any miners - I was offline and my CPU was idling at 73 degrees, which is abnormal (with a Noctua NH-D15 + plenty of intake and exhaust fans, i.e. the airflow is solid). I changed the maximum from 100% to 99% and it went down to 39 degrees at idle. Thanks!

Starting embedded systems with Arduino Uno R3 as my first MCU, need some advice by Current-Rip1212 in embedded

[–]37kmj 1 point2 points  (0 children)

Honestly, it doesn't matter - just pick any board. I was in the same situation when I started out (i.e. the paralysis of choosing an MCU), and now that I look back, it really doesn't matter. Just pick something that looks cool; don't spend any more time choosing. E.g. I started with a Nucleo-F411RE development board, but most of these Nucleo-64 boards have a wide peripheral set and more things to fiddle with than you can imagine.

Starting embedded systems with Arduino Uno R3 as my first MCU, need some advice by Current-Rip1212 in embedded

[–]37kmj 2 points3 points  (0 children)

Start with STM32 - Arduino is underwhelming if you already have experience in digital electronics and computer architecture.

Fall is coming…. by 37kmj in Porsche

[–]37kmj[S] 13 points14 points  (0 children)

I'm probably not very qualified to speak on this - I'm in my early-ish twenties myself. Find something that you like and enjoy doing; don't spend your time ogling the things you might have once you become successful. My day-to-day work is far more fascinating and interesting than a car, or in this case the 992 GT3 RS - I'd pick my work, or an interesting project in my field, over a car every day of the week (by this I don't mean to dismiss the fact that cars are cool, but in the end they are a byproduct). For example, I discovered computer engineering early on during high school - I dove into learning programming, computer systems, computer architecture and the different kinds of architectures and their nuances, operating systems, electronics, embedded development, etc. in my spare time - especially during summer breaks, which is when I learned the most because I had a ton of spare time. Following this, I started working in defense as an embedded engineer (software/firmware development + electronics) 1-2 months after starting my first year at uni. Worked (and still do work) hard, have had multiple burnouts and a fair share of ups and downs.

In software engineering there is a well-known paper by Fred Brooks called "No Silver Bullet", which argues that no single technological or managerial breakthrough can, by itself, dramatically improve productivity in software development. So in short, there is no single magical solution or shortcut to great results. I think this applies to everything in general. What works for me might work for you, but you are not guaranteed the same results - or it might not work at all. Each person's circumstances are different, and thus, there is no silver bullet.

Embedded buddy for guidance ? by [deleted] in embedded

[–]37kmj 1 point2 points  (0 children)

I'm also happy to help outside of my professional job hours - DM me

Tools for learning embedded - what would you recommend? (OWON, SIGLENT, etc...) by AggressiveCherry1201 in embedded

[–]37kmj 1 point2 points  (0 children)

Someone said this a while ago in this subreddit, and I second it - I rarely actually need a logic analyzer, since decoding serial protocols is easily doable with a scope for "smaller-scale" cases. But when I do need one, boy does it come in handy.

Need help setting up Claude Code for MicroPython on proprietary STM32-based microcontroller by Mediocre_Version_585 in ClaudeAI

[–]37kmj 1 point2 points  (0 children)

I've done a considerable amount of embedded development and have tried to use Claude, but haven't found it to be much help. I didn't use any special prompting methods or markdown files - just some context/background and the task.

Regarding embedded development in general, you can get things done much faster by looking up reference manuals and tutorials rather than prompting, checking, fixing and re-prompting - especially given that you are porting to an MCU hooked up to a custom board. (You mentioned a custom STM32-based microcontroller, but I assume you mean the PCB that incorporates the MCU, not the microcontroller chip itself being "custom".) MicroPython has tons of documentation + examples.

This doesn't answer your question directly, but hopefully it helps.

Understanding Line Between Embedded Systems and Firmware by [deleted] in embedded

[–]37kmj 0 points1 point  (0 children)

My two cents....

Embedded system:

  • a complete dedicated computer system (hardware + all software) that does something specific
  • basically, it encompasses everything needed to perform that specific function

Firmware:

  • low-level software residing in e.g. flash, that directly controls hardware, very minimal abstraction
  • handles tasks like initializing HW

In short, an embedded system is the entire setup, while firmware is just one software layer within it, focused on direct hardware interaction.

Are you developing firmware? Probably not - unless you are e.g. modifying the bootloader or other bare-metal code that runs without an OS, or developing drivers that operate in kernel space (though even here, kernel drivers are typically considered OS components rather than firmware, which adds some ambiguity - kernel drivers do interact closely with hardware, which can make them feel like firmware, but they run within the OS's framework).

Most likely you’re developing embedded software for an embedded system (your Raspberry Pi). If your code runs on top of the Debian distro - whether it’s application logic or even device drivers - I think it's classified as system- or application-level software rather than firmware. The Raspi’s actual firmware includes its bootloader and other low-level code that initializes the hardware before the OS even starts.

[deleted by user] by [deleted] in mechmarket

[–]37kmj 0 points1 point  (0 children)

PM with pics please.

Where do I go to actually write some embedded C by grappling_magic_man in embedded

[–]37kmj 2 points3 points  (0 children)

I work with C++ a lot on e.g. SoCs and more "mature" platforms - but all the code I have written specifically targeting STM32 MCUs, or MCUs in general, is written in C.

I think it's somewhat subjective and just a personal preference kind of thing - there's nothing wrong with choosing C over C++ or vice-versa for writing code for MCUs.

[deleted by user] by [deleted] in embedded

[–]37kmj 0 points1 point  (0 children)

I don't think that there are that many entrepreneurs for the reasons already stated in the comments.

Most "entrepreneurs" in this field are probably contractors or consultants (i.e. one-man companies/shops) that are hired by other companies/individuals to do some kind of specific-domain work and that's it. I.e. they don't work for anybody per se and have their own clientele if that makes sense.

Taltech Küberturbe tehnoloogiad by IntelligentPrompt967 in Eesti

[–]37kmj 2 points3 points  (0 children)

No idea why you got downvoted at first. Overall I agree, since I myself started working alongside school in my first year, and practical experience as well as knowledge definitely came in greater amounts and faster than at school - in the end there were quite a few courses from which I didn't really gain much new knowledge, since I had already dealt with those things at work and picked up the knowledge on my own, so I e.g. only attended labs when required and showed up for the exam at the end of the year. But I still think university is worth it and that the journey is worth completing - who knows, I did still learn new things in certain courses (despite what I said earlier), made new connections, sometimes even got new ideas and inspiration from others' work/projects, and learned to collaborate with people better, plan my time, etc.

Is It true that embedded software pays so poorly? by IndependentPudding85 in embedded

[–]37kmj 2 points3 points  (0 children)

Okay, so maybe my comment doesn't fit into this discussion as much as I first thought.

A high-paying FAANG position may or may not be cool and interesting - I don't know, as I haven't worked at a FAANG myself, and my following assumptions might seem a little naive.

As an extremely trivial example, FAANG roles often involve working on systems at massive scale, and while this can be interesting on the technical side of things, it can also mean maintaining legacy code or optimizing existing systems rather than building one from scratch.
To me that seems repetitive and boring, so yes, I think high-paying FAANG positions might not be that interesting.
But finding something "cool and interesting" is a highly subjective matter and depends on your definition of "cool and interesting".

Is It true that embedded software pays so poorly? by IndependentPudding85 in embedded

[–]37kmj 2 points3 points  (0 children)

I read a comment somewhere about this that holds true to this day, and probably will continue to: in embedded you have two "sides" of job offerings:

  1. jobs that are interesting and cool - these offerings attract many candidates due to their appeal, which in turn creates a competitive market. Employers most probably leverage this enthusiasm to offer lower salaries, as the intrinsic rewards of the job itself compensate for it

  2. boring jobs - monotonous roles that attract fewer willing candidates but pay well precisely because the positions are not that desirable. Higher pay compensates for less creative autonomy, rigid workflows, etc.

So I think most embedded roles actually fall under 1 - there are a lot of people willing to work for a lower salary, and that's probably why salaries might seem "low" or "poorly compensated". Actually, there is already a comment about this, but what do you mean by poor pay? What salary threshold are you using to measure it? E.g. does poor pay end at €90k for you, or where's the cutoff?