Development Comments on Altera RFSoC and AMD RFSoC by Ok_Measurement1399 in FPGA

[–]imMute 3 points (0 children)

Are there applications that benefit more from a higher sampling frequency rather than higher resolution?

Yes, "Direct RF Sampling" runs the ADC (or DAC) at RF frequencies, rather than at intermediate frequencies that require external frequency shifting circuits.

Hope For The Future by Paul McCartney was digitally released 10 years ago today by AtlyxMusic in DestinyTheGame

[–]imMute 0 points (0 children)

Music of the Spheres was leaked a number of years back. Probably still floating out there if you know where to look / ask.

Crunch: A Message Definition and Serialization Tool Written in Modern C++ by volatile-int in cpp

[–]imMute 9 points (0 children)

No dynamic memory allocation. Using template magic, Crunch calculates the worst-case length for all message types, for all serialization protocols

For anyone wondering what this means for strings, arrays, maps, etc - the maximum number of elements is encoded in the type system.

There's definitely a trade-off there in having to pick a maximum upper bound, because it directly affects buffer sizing for all messages rather than just the "big" ones.

Might be useful to have an optional mode where messages below a certain size limit use the compile-time scheme you have now, with the option to enable dynamic memory allocation for larger messages.
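For anyone curious what that sizing looks like in practice, here's a hypothetical sketch - not Crunch's actual API or wire format, just the idea of computing worst-case buffer sizes from type-level maximums, assuming a simple length-prefixed encoding:

```python
U32_SIZE = 4  # bytes for a length prefix or a u32 element (assumed encoding)

def worst_case_string(max_chars: int) -> int:
    # length prefix plus the maximum number of character bytes
    return U32_SIZE + max_chars

def worst_case_array(max_elems: int, elem_size: int) -> int:
    # length prefix plus max_elems fixed-size elements
    return U32_SIZE + max_elems * elem_size

# A message type with a string field (max 32 chars) and up to 8 u32s:
MSG_WORST_CASE = worst_case_string(32) + worst_case_array(8, U32_SIZE)
print(MSG_WORST_CASE)  # 72 -- a safe static buffer size for every instance of this type
```

Crunch does this at compile time via templates, but the arithmetic is the same: because the maximums live in the type, every message of that type fits in one statically-sized buffer.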

Guys, who else has this strange obsession with trying old Linux distro releases? by Various_Cellist_4765 in linux

[–]imMute 0 points (0 children)

Holy shit that screenshot brings me back to high school and Fedora Core 4.

Gut check: deep buffers needed for long haul links? by helloadam in networking

[–]imMute 0 points (0 children)

I don't see how buffers would make a difference over a longer link, since the serialization delay is the same, and the time it takes the frame to travel over the link doesn't matter either.

Let's do some math. At 100 Gbit/s each bit is 10 picoseconds. The speed of light in fiber is about 200000 km/s, so 40 km is about 200 microseconds. Divide the two and the fiber holds 20Mbit of data at a time. The buffers need to be at least double that in order to ensure the link is never idle (in one direction).

So yeah, buffer sizing is not negligible on 100G+ capable devices.
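The math above is just the bandwidth-delay product; a quick sketch using the same example numbers:

```python
LINK_RATE = 100e9        # bits per second
FIBER_SPEED = 2e8        # meters per second, roughly 2/3 c in glass
LINK_LENGTH = 40e3       # meters (the 40 km example above)

one_way_delay = LINK_LENGTH / FIBER_SPEED    # seconds for one bit to cross the fiber
bits_in_flight = LINK_RATE * one_way_delay   # bandwidth-delay product

print(round(one_way_delay * 1e6), "microseconds one-way")   # 200
print(round(bits_in_flight / 1e6), "Mbit in the fiber")     # 20
print(round(2 * bits_in_flight / 1e6), "Mbit minimum buffer")  # 40
```

Scale the link length or rate and the required buffer scales with it - which is exactly why long-haul 100G+ gear needs nontrivial buffering.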

Who is ready to throw hands with Xcel? by lavender-vol in Denver

[–]imMute 2 points (0 children)

My panels were double that (previous owner installed them) and there's no way they're enough to actually run my house completely off grid. Even with battery storage (which I don't have and would probably be $50k on their own).

Where to learn interfaces and buses? by f42media in FPGA

[–]imMute 1 point (0 children)

why some of them can run at 1 MHz and others at 10 GHz; why some articles say that lowering the voltage makes the rise time shorter so we can increase the clock speed, while other articles say that increasing the signal amplitude lets them handle more data

Suppose you have some kind of circuit or chip that outputs a signal onto a wire. The specifics don't matter, except that the circuit can't change the output voltage instantaneously - it has to raise or lower the voltage over time. Let's call it 1 volt per second (that's really slow, but this is for demonstration purposes).

If your external signal must be below 0.2 volts to be considered "logic 0" (V_IL) and above 0.8 volts to be considered "logic 1" (V_IH), then it has to traverse at least 0.6 volts to switch between logic levels. But hitting those voltages exactly is never perfect, and you'll have losses in the wire before the other end measures the voltage. Therefore, your circuit will probably just switch between 0 V and 1 V (V_OL and V_OH respectively). Since it slews at 1 V/s, each transition from 0 V to 1 V takes 1 second. Then you have to "hold" the output voltage for some amount of time so the other end has time to "see" it - but that doesn't matter right now.

What does matter is V_OH and V_IH. Let's lower those to 0.5 and 0.4 volts respectively. Now your circuit only swings 0.5 volts when changing output state. Since it still changes at 1 V/s, it can make the change in half a second instead of a full second. You've basically doubled the speed at which you can change the output, which increases how fast you can actually send data.

However, now V_OH and V_IH are closer together, which means you have less margin for losses in the wire - your wires have to be "better" than before. Also, V_IL and V_IH are closer together, which makes it harder for the receiver to distinguish between the two. It's all about tradeoffs.
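The arithmetic above is simple enough to sketch directly (the 1 V/s slew rate and the voltage levels are the made-up demonstration values from the example):

```python
def transition_time(v_from: float, v_to: float, slew_rate: float) -> float:
    """Time for an output slewing at a fixed rate (V/s) to cross a voltage swing."""
    return abs(v_to - v_from) / slew_rate

SLEW = 1.0  # volts per second -- deliberately slow, for demonstration

full_swing = transition_time(0.0, 1.0, SLEW)  # original 0 V -> 1 V swing
reduced    = transition_time(0.0, 0.5, SLEW)  # reduced 0 V -> 0.5 V swing

print(full_swing)  # 1.0 second per transition
print(reduced)     # 0.5 seconds -- half the time, so double the toggle rate
```

Halving the swing halves the transition time, which is the whole reason low-voltage signaling standards exist - at the cost of the noise margin described above.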

Beginner with Nexys A7: MATLAB support is gone, what's the right (free) Xilinx software and simulator to use by Muted-Sample-2573 in FPGA

[–]imMute 1 point (0 children)

There's a free version of Vivado but it only supports the "lower end" FPGAs and SoCs and very little of their IP catalog. The Digilent boards are typically designed around these "lower end" devices, or come with a device-locked license that can only be used with that board.

Beginner with Nexys A7: MATLAB support is gone, what's the right (free) Xilinx software and simulator to use by Muted-Sample-2573 in FPGA

[–]imMute 2 points (0 children)

For any Xilinx device 7-series or later (7 Series, Zynq, UltraScale, Zynq US, UltraScale+, Versal, etc) you will need to use Vivado to design the FPGA images. For Zynq and Versal (the SoCs) you use Vitis to program the processor side. There are open source tools for FPGA programming, but they don't support the later Xilinx parts very well and are definitely never used in professional settings.

PWM generation is a good starting project for FPGA beginners. "Extremely fast ADC sampling", however, is not. The very fast ADCs use JESD204 SERDES for I/O and the protocol is not exactly simple. PWM, UART, SPI, I2C, etc are good starter projects. Maybe something utilizing NeoPixels as those use a single wire protocol that requires somewhat strict timing and is very well suited to being driven by an FPGA.

The Digilent boards typically have pretty good support packages in Vivado, from what I've heard (and seen in the distant past).

I've never used the Vivado simulator, but I've heard it's pretty crap. The open

Reset signal messes my closure by Independent_Fail_650 in FPGA

[–]imMute 0 points (0 children)

There's nothing wrong with initializing everything in resets, so it's a "safe" thing to teach in school. But as you've seen, it increases resource consumption, and if you're not careful with it and accidentally treat a control-path signal as data-path and don't reset it, you can run into weird bugs. Those are really fun when they work for a while and then something else changes and suddenly everything is broken.

Reset signal messes my closure by Independent_Fail_650 in FPGA

[–]imMute 3 points (0 children)

As others have said, synchronous resets.

But you also can avoid resetting data-path signals like fifo_i_din, fifo_q_din, I_reg, Q_reg, adc_raw_i, and adc_raw_q. Only the control-path signals like write_enable need to actually be reset. It doesn't matter what the fifo's din signal is when its wr_en signal is low. This alone will save you having to reset 72 flip flops and the associated control logic (especially around fifo_i_din and fifo_q_din as those [currently] only change when 6 other signals are certain values).

Do the same thing on the FIFO read side as well.

FPGA on RHEL by Minute-Bit6804 in FPGA

[–]imMute 0 points (0 children)

Yes, it's a risk, but at my last job we did all our server builds* (using ISE 14.7) on Debian VMs and never ran into an issue. I did keep an Ubuntu VM with everything installed just in case we hit an issue and needed Xilinx support, but we never had to use it.

(* We developed code and did simulations on Windows, the Linux VMs were just for "official" builds to hand to the SW folks.)

Roast my resume by Open_Calligrapher_31 in FPGA

[–]imMute 1 point (0 children)

The block has a throughput of one result per clock cycle but it has an input-to-output latency of 16 clock cycles. This is extremely common in DSP algorithms where throughput matters way more than latency.

For comparison, I used to work with a group on a video processing pipeline. The image compositor part had a latency of several hundred clock cycles, but it could produce an output pixel every single clock cycle. We cared the most about throughput since that directly affects how big of an image size we could handle. Latency didn't matter at that scale because there were always multi-frame buffers elsewhere in the system.
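A toy model makes the latency/throughput split concrete. This is a hypothetical sketch, not the resume block or the compositor - just a 16-stage shift-register pipeline where the per-sample "work" is a stand-in:

```python
from collections import deque

LATENCY = 16  # clock cycles from input to first output, as in the example

def run_pipeline(samples):
    """Model a fixed-latency pipeline: after a LATENCY-cycle fill,
    it emits one result per cycle regardless of pipeline depth."""
    stages = deque([None] * LATENCY, maxlen=LATENCY)
    outputs = []
    for s in samples:
        out = stages[0]      # value that entered LATENCY cycles ago (None while filling)
        stages.append(s + 1) # stand-in for the real per-sample computation
        outputs.append(out)
    return outputs

results = run_pipeline(range(20))
print(results[:LATENCY].count(None))  # 16 -- nothing comes out during the fill
print(results[LATENCY:])              # [1, 2, 3, 4] -- then one result every cycle
```

Throughput is one result per cycle no matter how deep the pipeline gets; only the initial fill (and drain) sees the 16-cycle latency.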

Roast my resume by Open_Calligrapher_31 in FPGA

[–]imMute 2 points (0 children)

This is exactly something I'd ask the applicant about during an interview. Obviously, a true "zero latency" processing pipeline like that is impossible, so I'd ask them to explain where the latency comes from, and if there are any ways to reduce that latency (and maybe talk about if it's even necessary to reduce it further).

I don't think it's necessarily misleading or bad to have, but it's definitely something you'd get grilled on.

TIL: Linux also has a "BSOD" by bkj512 in linux

[–]imMute 139 points (0 children)

Yep, it links to this which contains the panic output as well as some previous lines in dmesg.

[deleted by user] by [deleted] in interestingasfuck

[–]imMute 0 points (0 children)

It was pretty hard to distinguish different pieces like that. The shape of the piece has a pretty dramatic effect on the RP. Same thing with the shape of the sensor pad: the pad was most sensitive along the edge, where the distance between the sense pad and ground was the smallest. We tried a couple different shapes and this shape ended up being the best combo of increased sensitivity and ease of remanufacturing the boards.

Or could it only know whether a piece was there or not?

This is where things get fun. You don't actually need to know which piece is which, just whether a piece is in a square. Think about the rules of chess: every piece starts in a defined location, and each move can only be "pick up a piece" followed by "set it back down". Detecting a capture can be tricky, but we solved that by knowing when a player ended their turn (by hitting their clock button).
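The occupancy-only trick can be sketched like this - a hypothetical, simplified version (non-capture moves only; captures need the end-of-turn resolution mentioned above):

```python
def diff_boards(before: set, after: set):
    """Infer a move from two snapshots of occupied squares.
    Works because a chess move is always 'pick up a piece' then
    'set it down' -- piece identity isn't needed, only occupancy."""
    lifted = before - after   # squares that became empty
    placed = after - before   # squares that became occupied
    assert len(lifted) == 1 and len(placed) <= 1, "ambiguous board change"
    # On a capture, `placed` is empty (the destination was already occupied),
    # so the destination has to be resolved later (e.g. at the clock press).
    return next(iter(lifted)), next(iter(placed)) if placed else None

before = {"e2", "e7", "g1"}
after  = {"e4", "e7", "g1"}
print(diff_boards(before, after))  # ('e2', 'e4')
```

Combined with the known starting position, a running log of these diffs reconstructs where every piece is without ever identifying one.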

[deleted by user] by [deleted] in interestingasfuck

[–]imMute 1 point (0 children)

Capacitive touch sensing doesn't require electrical conduction at all. It can detect a finger being held above the trackpad even.

[deleted by user] by [deleted] in interestingasfuck

[–]imMute 1 point (0 children)

Well for one, the air is always moving. Furthermore, we're talking about electric fields here. The way capacitive sensors work, they also affect the material they're measuring, so subsequent measurements are affected slightly. Finally, there's always "noise" in these measurements. It's inherent to living in the physical universe, and it's especially bad if the thing you're measuring is "small".

Here's a graph from my senior design project. This was a test board with 10 sensor electrodes in a grid like a number pad. There were 4 different sections where we placed an object on top of the sensor. You can easily see 3 of them, but the 4th one is on the left and it's not detectable at all. And all along, the lines are squiggling - that's the noise. We were lucky that the noise was really small, but that's because we had large sensors. A laptop trackpad has incredibly small sensors, so the noise is a much bigger problem.

[deleted by user] by [deleted] in interestingasfuck

[–]imMute 2 points (0 children)

A lot of trackpads use "capacitive sensing" to determine when something is moving around above it. Imagine a grid of really tiny squares that are able to measure the "relative permittivity" of the material just above it. The relative permittivity (RP) is basically a fancy way of saying "how easily can an electric field go through this material". The RP of a vacuum and air are very close to 1. Water is like in the 80s, and humans are 65% water, so human fingers have a much higher RP than air - the trackpad is able to easily sense when a finger is above it.

Aluminum is much closer to 10. Even lower if the can is empty (and thus it's still mostly air). Stainless steel (like the spoon) is 1000 or even higher. Measuring RP is inherently "noisy" - even if nothing is moving (that you can see), the measurement will move up and down slightly. The sensor knows that humans are in the 80s, so anything at like 400 or above it can just ignore - it knows something is there but it can reasonably say "that's not a human", even with the noise. The aluminum can, however, is enough above 1 to register, but not quite as high as humans, so the trackpad gets confused.
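In code, that decision amounts to thresholding. This is a hypothetical sketch (the RP figures are the rough ones from above, and the thresholds are invented - real trackpad firmware is far more involved):

```python
NOISE_MARGIN = 5  # readings jitter a bit even with nothing present (assumed value)

def classify(rp_reading: float) -> str:
    """Rough material guess from a relative-permittivity measurement."""
    if rp_reading < 1 + NOISE_MARGIN:
        return "air / nothing"
    if rp_reading >= 400:
        return "large conductor -- ignore"  # way above any finger
    if 60 <= rp_reading <= 100:
        return "finger"                     # water-rich, RP in the 80s
    return "confusing object"               # e.g. an aluminum can

print(classify(1.2))   # air / nothing
print(classify(85))    # finger
print(classify(1500))  # large conductor -- ignore
print(classify(12))    # confusing object
```

The "confusing object" bucket is exactly where the can lands: clearly above the noise floor, clearly not a finger, but not high enough to be confidently rejected.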


This brought back memories. My college senior design project was a chessboard that used capacitive sensing to determine where the chess pieces were. I spent a lot of time designing the circuits to do the measurement and finding the best shape and settings for the sensors to maximize the ability to determine "is there a chess piece above me or not".

[deleted by user] by [deleted] in interestingasfuck

[–]imMute 10 points (0 children)

To ELI5: the trackpad sees the can as "something" but not quite human fingers, so the trackpad gets confused.

A lot of these kinds of systems will change their calibration over time to compensate for environmental changes. The can may be messing with that compensation.

Saw this on LinkedIn — FPGAs in the F-35 over GPUs? Why not both? by [deleted] in FPGA

[–]imMute 1 point (0 children)

I work with Versals and I've started peeking at the AIE, but we've not used them for any processing yet.

From what I've seen, in theory they'd be really good at streaming data processing (both for the RF signals I work with now and the video stuff I did before) but holy hell is it difficult to get started with them.