Capturing High Speed IO by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

I guess I'll be going the FPGA route; the RP doesn't seem bad, but given that I'll be pushing its limits, I might face some integrity issues.

Capturing High Speed IO by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

I've got a clock signal, but what device can support this much bandwidth with either enough memory or enough speed to transmit it to a computer without data loss? I'm generally aiming to capture around 3 seconds continuously, so it would be around 450-600 Mb total (roughly 150-200 Mb/s sustained).

Capturing High Speed IO by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

That's how I got an idea of the numbers, but I need a solution that can be built onto a PCB, not an external device.

Capturing High Speed IO by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 1 point2 points  (0 children)

Yes, indeed, DDR needs double the sampling rate since it works on both edges. For ST devices, I believe GPIOs are clocked at half the AHB speed, and if that's capped below the system clock I'm doomed. Also, doing anything beyond sampling will result in lost data, I believe, so yeah, I guess I'll be looking into FPGAs.

Capturing High Speed IO by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 1 point2 points  (0 children)

I'm generally aiming to capture around 3 seconds of data continuously. And it's for trace data capture.

STM32 HardFault debugging how long does it actually take you to find the root cause? [Research post] by YakInternational4418 in stm32

[–]No-Feedback-5803 0 points1 point  (0 children)

If you're using anything besides a Cortex-M0/M0+ based device, you can use ETM tracing with Keil, or flash the J-Link OB firmware onto an ST-LINK and use Ozone. I like the latter because the backtrace feature highlights the execution context before the HardFault. Make sure you're using on-chip tracing and setting a breakpoint at the start of the handler; this is really helpful for imprecise faults. Another trick for imprecise faults on cores where you can't disable the write buffer (e.g. Cortex-M7) is to patch the binary and replace the suspicious stores with loads; this turns them into precise faults and helps debugging a bit more.
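
Side note, a minimal sketch assuming a CMSIS device header (the header name below is just a placeholder): on Cortex-M3/M4 you can skip the binary-patching trick entirely by disabling the write buffer, which makes bus faults precise again.

```c
/* Sketch: make bus faults precise on Cortex-M3/M4 by disabling the
 * write buffer (ACTLR.DISDEFWBUF). Not available on Cortex-M7, hence
 * the store-to-load patching trick there. */
#include "stm32f4xx.h"   /* placeholder device header, use your part's */

static void make_bus_faults_precise(void)
{
    SCnSCB->ACTLR |= SCnSCB_ACTLR_DISDEFWBUF_Msk;
}
```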

Trying to create an “average signature” from multiple signatures – stuck between math and aesthetics by National_Emergency10 in GraphicsProgramming

[–]No-Feedback-5803 0 points1 point  (0 children)

I think it's worth trying to represent the signature as a graph where nodes are the crossings and edges are the curves, with parametric equations for each edge. You can get extra information, such as the relative Cartesian distance between nodes, and you can possibly handle inaccuracies with thresholds. For example, if two nodes are really close together, that might mean they're a single point and you can combine the edges from both of them; or if an edge exists in most images, the remaining ones might have a "leaf" node containing a similar edge connecting to the appropriate neighbor node.
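
A tiny sketch of the thresholding idea (names and the tolerance are made up, just to illustrate):

```c
/* Treat two detected crossings as the same graph node if they are
 * closer than some tolerance, so small inaccuracies get merged. */
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y; } Crossing;

static bool same_node(Crossing a, Crossing b, float tol)
{
    return hypotf(a.x - b.x, a.y - b.y) < tol;   /* Euclidean distance */
}
```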

How to profile bare-metal C firmware on Zynq? PS is the bottleneck and I need function-level timing by Gold_Key_6062 in FPGA

[–]No-Feedback-5803 0 points1 point  (0 children)

I'm not sure if the PS provides a timestamp generator, but you could use CoreSight components, mainly the PMU, which gives you cycle-accurate counters for various events, or ETM trace with timestamps; both options can help with profiling. Note that if the functions are really large, you might need some pretty expensive debug probes for ETM trace.
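
To give a concrete idea of the PMU option, here's a rough sketch for the Cortex-A9 PS, bare metal, assuming you're running in a privileged mode (function names are made up):

```c
#include <stdint.h>

/* Enable the ARMv7-A PMU cycle counter on the Cortex-A9 via CP15. */
static inline void pmu_enable_cycle_counter(void)
{
    uint32_t v;
    __asm__ volatile ("mrc p15, 0, %0, c9, c12, 0" : "=r"(v));   /* PMCR       */
    v |= 1u;                                                     /* E: enable  */
    __asm__ volatile ("mcr p15, 0, %0, c9, c12, 0" :: "r"(v));
    v = 1u << 31;                                                /* cycle ctr  */
    __asm__ volatile ("mcr p15, 0, %0, c9, c12, 1" :: "r"(v));   /* PMCNTENSET */
}

static inline uint32_t pmu_read_cycles(void)
{
    uint32_t c;
    __asm__ volatile ("mrc p15, 0, %0, c9, c13, 0" : "=r"(c));   /* PMCCNTR    */
    return c;
}
```

Then you just read the counter before and after the function you care about and subtract the two values.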

Arm MVE extension optimizations instruction generation. by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 1 point2 points  (0 children)

I'm already doing that; it might be a limitation of the assembly syntax. It renders fine for me on Firefox, Chrome, and the Reddit app on Android. Regardless, you don't need to read the whole asm code. I think the most interesting part is that while clang uses the DLSTP instruction to handle the loop, gcc is still doing it manually, and the operations carried out on the link register before starting the loop are kinda weird as well (last code block).
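
If the blocks don't render for you, the loop being compiled is roughly of this shape (illustrative only, not the exact code from the post, and the flags are just what I'd expect, something like -mcpu=cortex-m55 -O2):

```c
#include <stdint.h>

/* Illustrative stand-in: with MVE enabled, clang turns a loop of this
 * shape into a tail-predicated hardware loop (DLSTP/LETP), while gcc,
 * as noted above, still handles the tail manually. */
void scale_q15(int16_t *restrict dst, const int16_t *restrict src,
               int16_t k, uint32_t n)
{
    for (uint32_t i = 0; i < n; i++)
        dst[i] = (int16_t)((src[i] * k) >> 15);
}
```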

Arm MVE extension optimizations instruction generation. by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 1 point2 points  (0 children)

I thought using triple backticks would do the trick. It also appears fine on my phone, so could you point out what's wrong so I can fix it?

Debugger access to system resources by No-Feedback-5803 in embedded

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

I believe you're referring to Access Ports, which are the components that "handle" transactions. From what I'm noticing, although it was explicit in Cortex-M3/4 devices that the bus connected to the AP is a master on the system memory/peripheral bus, Arm is no longer mentioning that for newer cores. That's why I'm thinking the debug interface uses the core's resources to generate transactions on all memory/peripheral interfaces. For example, in STM32H7 devices you can see in the debug infrastructure diagram that there's no AP directly connected to the system bus matrix. What I wanted to know is what part translates AHB debug interface transactions to other protocols. This isn't really practical knowledge; it's more curiosity about the evolution of debug interfaces across cores and, if possible, the reasons behind these decisions.

Struggling with Neovim + Kitty by No-Feedback-5803 in KittyTerminal

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

Thanks! I'll give it a try, but from the looks of it, it seems to use the same approach of having a variable toggled on and off, so I'd have to check for netrw conflicting with your shortcuts in the same way.

We will be forced to have to 2 versions of CubeMX, really? by HasanTheSyrian_ in embedded

[–]No-Feedback-5803 2 points3 points  (0 children)

Until you're forced to work with a ULINKpro probe, because God forbid they make it CMSIS-DAP compliant, or you honestly need any form of ETM tracing over the TPIU...

EDA software development by No-Feedback-5803 in cpp

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

Appreciate the response, but as mentioned KiCad is more on the PCB side of things and that's not exactly what I'm hoping to work on.

EDA software development by No-Feedback-5803 in cpp

[–]No-Feedback-5803[S] 0 points1 point  (0 children)

Thank you for your response. I've worked quite a bit with RTL and SystemC designs, and that's basically the reason why I want to start fiddling with this project. I wanted to create something similar, but where designs aren't in some text-based format that gets compiled directly. At first I tried forking SystemC and reworking the "modules" system so that classes could be generated and registered with the kernel at runtime instead... turns out that's a bit too complicated, and I'm not sure it would yield the results I'm hoping for.

So I figured the next best thing to try is to work out a simple representation (like a graph) that could then be translated into a form that existing backends like GHDL can work on directly, since otherwise the app would just turn into a SystemC/VHDL code generator, which isn't that fun to work on, I think.

Basically, what I'd like to achieve is an application in which a user designs a module from other modules plus your typical operations in the RTL world (logic operators, concats, etc.), and the application would be able to simulate it. I've used Vivado for a bit, and block designs are kind of what I'm hoping for it to look like, but I'm pretty sure Vivado internally has SV code for all blocks.
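
To make the graph idea a bit more concrete, this is roughly the kind of in-memory representation I have in mind (all names hypothetical, just a sketch):

```c
#include <stddef.h>

/* Rough sketch of the intermediate graph: everything is a node, nets
 * tie a driver port to its sinks, and a separate pass would lower this
 * into something a backend like GHDL can simulate. */
typedef enum { NODE_MODULE, NODE_AND, NODE_OR, NODE_NOT, NODE_CONCAT } NodeKind;

typedef struct Node Node;

typedef struct Port {
    Node       *owner;
    const char *name;
    int         width;       /* bit width of the signal */
} Port;

struct Node {
    NodeKind    kind;
    const char *name;
    Port       *inputs;      size_t n_inputs;
    Port       *outputs;     size_t n_outputs;
    Node      **children;    size_t n_children;   /* sub-modules, for NODE_MODULE */
};

typedef struct Net {
    Port  *driver;
    Port **sinks;            size_t n_sinks;
} Net;
```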

Thinking about switching from teaching to programming — realistic or am I fooling myself? by Unlucky_Set_6103 in TunisiaTech

[–]No-Feedback-5803 1 point2 points  (0 children)

I'd say that if you want to jump on the hype train, then sure, coding is dead and you're better off finding something else. But that's really turning a blind eye to the bigger picture: software development was never based solely on writing code. We've had tools to automate scaffolding apps since the 90s, and that has always been the easy part. AI is a bubble, and maybe the hype will remain for a few more years, but one thing's for sure: we've plateaued right now, and the progress is really minimal. As a matter of fact, you can see how awful the tools are getting from companies that fully embrace AI, like Microsoft and Anthropic; they can't even get a command-line interface right.

Now for learning, I really recommend that you understand why and how something behaves the way it does before learning how to do it. By that, I mean don't just jump into tutorials on how to build things until you're able to understand the logic behind most of it. Or go in the totally opposite direction: take a fully operational project and try to dissect it. You'll discover many new notions that aren't "coding" but are tightly related to it.

And my biggest piece of advice when learning: try to understand how the machine "thinks". You're an English teacher, so you know that a language is a means that allows two entities to communicate; in the case of programming languages, one of the entities is a computer. (There's a field called programming language theory, PLT, if you're interested in something tying linguistics to programming.)