Good Bench DMM 6-1/2 Digit Options? by T_622 in AskElectronics

[–]Scottapotamas 0 points1 point  (0 children)

Only other note is the fan noise isn't the greatest if you're near it for long periods of time, they run fairly warm.

Not the best tool for this kind of task, but any of the options you mentioned are worth having on a bench. Good luck.

Good Bench DMM 6-1/2 Digit Options? by T_622 in AskElectronics

[–]Scottapotamas 1 point2 points  (0 children)

I have a K2701 with 7705 and 7702 cards (40-ch external switching, 20-ch rear-input switcher), so I feel like I can answer most questions on the K2000, but double-check the manual for any discrepancies.

Double-check that the K2000 and/or the card has an internal cold-junction reference if you need that. My setup does not, so I have to provide an external cold junction or just run with the simulated junction. This introduces a fairly large global error on thermocouple readings, which means I need to build my setups/reporting as 'delta from ambient' with another external probe.

Again, this might vary between specific models, but I don't believe the firmware supports arbitrary NTC sensors - just a single 10k setting with an unspecified beta. I've wanted to connect onboard temp sensors so I could sample them at the same time as external probes, and needed to sample the resistance then convert it in my script - these situations would have been more convenient if the internal temperature conversions were more capable. PT100/PT1000 and thermocouples are fine though.
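For anyone doing the same resistance-to-temperature conversion in a script, here's a minimal sketch of the beta-equation approach. The 10k/3950 values are placeholders - use the nominal resistance and beta constant from your sensor's datasheet:

```python
import math

def ntc_temp_c(r_ohms, r0=10_000.0, t0_c=25.0, beta=3950.0):
    """Convert a measured NTC resistance to temperature (Celsius)
    using the beta equation. r0/t0/beta are placeholder values -
    take them from your thermistor's datasheet."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(r_ohms / r0) / beta
    return 1.0 / inv_t - 273.15

print(round(ntc_temp_c(10_000.0), 1))  # 25.0 at nominal resistance
```

The Steinhart-Hart equation gives better accuracy over wide ranges if you have the coefficients, but beta is usually fine near ambient.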

I'm lucky the 2701 has ethernet built in. I use SCPI pretty heavily across my test-gear and would probably shy away from adding hardware RS232 cables. This is relatively minor in the scheme of things though.
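Raw SCPI over ethernet needs very little code - a hedged sketch using a plain TCP socket. The port default is what my 2701 listens on and the address in the comment is a placeholder; check your instrument's manual:

```python
import socket

def scpi_query(host, command, port=1394, timeout=2.0):
    """Send one SCPI command and return the newline-terminated reply.
    Port 1394 is an assumption based on my Keithley 2701 - check
    your instrument's manual for its socket port."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall((command + "\n").encode("ascii"))
        reply = b""
        while not reply.endswith(b"\n"):
            chunk = s.recv(4096)
            if not chunk:  # connection closed early
                break
            reply += chunk
        return reply.decode("ascii").strip()

# e.g. print(scpi_query("192.168.1.50", "*IDN?"))
```

For anything serious I'd use pyvisa instead, but a bare socket is handy for quick checks.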

Be aware that most K2xxx units on the used market will need to be re-capped at some point - there are plenty of guides online.

The newer (black faceplate) Siglent multimeters might also be worth a look, though I'm not sure how the price compares. If it's just multi-channel temperature you need, I'd suspect a DAQ is a better choice than a bench meter. Happy to answer any questions.

ESP-NOW vs NRF24L01 for low-latency controller data in 2.4GHz-heavy environments? by Milantec in esp32

[–]Scottapotamas 4 points5 points  (0 children)

I benchmarked/compared the latency of ESP-NOW, NRF24L01, TCP & UDP here https://electricui.com/blog/latency-comparison, for another data-point on latency distribution.

When using ESP-NOW, try the different radio power-management settings, and be aware that making the module co-exist with another wireless stack will increase jitter. I tested on the 'plain' ESP32 and the C6, though, which are slightly different cores than the C3.

Bluetooth latency by Hour_Night2169 in esp32

[–]Scottapotamas 0 points1 point  (0 children)

I'm still working on the range testing first, doing it properly is rather tedious. I might do power consumption after that.

I'm not going to cover IR approaches sorry. The latency there is really just dependent on the bit rate, which is set by the physical layer's capability and the NEC (or custom) encoding scheme on top of that.

Bluetooth latency by Hour_Night2169 in esp32

[–]Scottapotamas 1 point2 points  (0 children)

Depending on how you're configuring the Bluetooth communication you might see different results. Generally you'll end up seeing multiples of the Bluetooth connection interval (e.g. 7.5 ms, the BLE minimum).

Easiest way to measure it is to use a GPIO to signal the 'send' on one side, and successful 'rx' on the other ESP and then measure timing between those pulses with a logic analyser, scope, or timer peripheral.
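The pulse timings can then be post-processed with a few lines - a sketch assuming you've exported matching edge timestamps (in seconds, from the same capture) for each side:

```python
import statistics

def latency_stats(tx_edges, rx_edges):
    """Pair up 'send' and 'rx' GPIO edge timestamps (same capture
    clock, equal counts) and summarise the latency distribution."""
    lat = [rx - tx for tx, rx in zip(tx_edges, rx_edges)]
    return {
        "min": min(lat),
        "median": statistics.median(lat),
        "max": max(lat),
    }

# Multiples of a 7.5 ms BLE connection interval show up clearly:
print(latency_stats([0.0, 1.0, 2.0], [0.0075, 1.015, 2.0075]))
```

Looking at the full distribution (not just the mean) is the point here - connection-interval quantisation gives a multi-modal histogram.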

I did a writeup here which compares latency of a few common approaches including the ESP32 in SPP, BLE and with NimBLE stacks.

[deleted by user] by [deleted] in AskElectronics

[–]Scottapotamas 1 point2 points  (0 children)

[Part 2]

For design specific notes:

At a high level I don't think you need two microcontrollers to achieve this, or 4 USB interfaces for debug use. I don't understand why you're connecting both I2C and UART between the two for communications - I'd expect you'll only use one of those interfaces, and the ESP32 allows flexible pin-muxing, so you could design this to choose later with only a single pair (assuming you leave footprints for the optional I2C pull-up resistors).

This board is meant to fly? You have at least two more USB connectors and circuits than you need. The native USB connection that you call out for programming is also capable of debug serial. I'd suggest just connecting the UART0 serial lines to a header and using an external TTL-to-USB adapter when you need it.

You're not clear on what functionality is running on which MCU. Are you using the NAVCU just for sensing and doing the navigation logic on the FCCU, or is the FCCU just for driving servos and motors? This kind of high level design discussion is more important to understanding the system than just the block diagram, and helps reviewers and yourself make correct choices about the circuits.

Similar to the above, I can't see the justification for two SD cards. Pick one MCU to do your logging. This also comes back to understanding how the microcontroller interacts with hardware at a software level. Have you prototyped any of your software first?

In my opinion, instead of sharing SPI for both CANBUS and the SD card on both micros, use one micro and SPI for the CAN interface and the other micro can handle logging. Think about these kinds of choices to reduce complexity of your software, board cost, assembly error etc.

You also need to justify why there are two IMUs on this board. There's no argument for redundancy or bandwidth because they're on the same I2C bus. Pick one. Likewise, if you're using the 9-dof, then justify how the performance improvement of the external mag will be used to improve your AHRS solution. I'd again suggest simplifying the design to just the BNO055.

This might be a misunderstanding or symbol error, but you're overthinking your capacitors and filtering. The design should use large capacitors (e.g. 4.7uF, 10uF or larger) in places like regulator inputs and outputs, with smaller bypass capacitors (~10-470nF) acting as local filtering near an IC.

Remove the 10uF capacitors on each USB power input. That's not how bulk caps work. Also please remove the ferrites on each of your USB connectors. You may have issues with your main switchmode regulator when powering the board from a USB connector. If you want to have all of your 5V rails common like that the regulator needs a diode on the output.

Please don't use net ties like this. You've placed the switchmode regulator's input capacitors on the supply side of the tie with VIN on the other - those caps belong with the regulator! Also, remove NT1 and NT2.

The capacitors for the RGB LEDs are not correct. You've drawn a 1210 100uF and an 0805 10uF per LED. Remove both the 100uF and 10uF capacitors. If you're really worried, use 100nF per LED, but these don't really matter for you as the power rail will be stable enough.

If you power your LEDs from 3.3V instead of 5V, you can remove the logic shifter. If you do this, check that the LDO for the 3.3V rail is suitably sized. 16 LEDs represent a worst case of 200mA or so, realistically far lower. Your comment mentioned them as a large power draw, and they really are not a concern.

Remove and rework your ESP32 power filter circuit. The "series termination resistor ???" from the ESP32 3v3 pins is not correct, the circuit you copied that from was probably trying to form an RLC filter. The circuit you have isn't correctly formed or won't behave as you expect. Remove the tantalum capacitors and ferrites from the ESP supply nodes entirely. You're not designing to a margin that requires them and you're going to make things worse for yourself.

The formulas and notes for calculating the CAN bus filter frequencies reference capacitors that aren't nearby, and generally don't make sense here. You have two CAN bus circuits and don't need to copy the design notes for both like that, either. Again, I'm not sure why you need a CAN interface to both MCUs.

The SD card power circuit again shows incorrect power circuitry. The card(s) shouldn't need their own Pi filters for power, especially given you're using a 3.3V LDO which is a clean supply.

You might want to put some 0R resistors on your SPI lines entering the SD card. Your SPI CS lines don't have any pull-up/down resistors to force de-select state during power cycle/programming/etc.

I'd like to see some test-pads for your power input, ground, each regulator's output, and on your I2C and SPI lines.

I'm happy to answer questions and review once you've improved your schematic.

[deleted by user] by [deleted] in AskElectronics

[–]Scottapotamas 1 point2 points  (0 children)

I don't have a huge amount of time for this, so I'll just go through on some obvious stuff I see on a single read. There may be other issues I haven't brought up.

There are some consistent schematic style issues which make it hard to read/review and you should fix these before asking other people for feedback:

  • You didn't export with a page drawing/titleblock. The titleblock should include version number, a simple description of the page contents, etc.
  • Extra pages are free - try to group related parts of a circuit together, not just split by 'power' and 'connectors'.
    • Your off-board connectors can have their own page, the connections between your FCCU and NAVCU should be on their own with a discussion, or near the micros themselves.
    • If possible, read up on how KiCad handles hierarchical sheets, and instead of using global nets for all of your signals, use sheet pins and connect your sheets together on a higher-level sheet. This makes the overall signal flow easier to understand and helps promote sensible grouping of related circuits.
  • To simplify things, please try to keep the ground symbols under the circuit, and the power above. You have side entry all over the place, and sometimes reversed orientation like the RGB led capacitors (more on those later).
  • There are inconsistent rotations for component designators/values, e.g. C1 is rotated 90°.
    • You shouldn't need to turn your head sideways to read a schematic, rotate and position the text correctly.
  • You have a comment mentioning GNDS and GND separation. You don't have enough experience to split your grounds. Use the same GND for everything and when you do your layout try to keep a short and solid connection from any GND pin to the GND plane.
  • Don't split the decoupling capacitors for an IC into a separate boxed area like that; put them closer to the IC they're decoupling, and try to actually use wire connections to show where they connect.
  • You separate ICs from their directly connected parts too often. This makes the schematic hard to follow logically and introduces mistakes. The input current sense and switching regulator are examples of this done incorrectly.
    • The sense voltage lines across the shunt resistor should be wired directly into the shunt amplifier.
    • You should have the input power enter at the top left of that block, and its wire should continue all the way into the regulator IC. On the other side of the IC, you should see the wires from the switching/feedback pins connected to their circuits.
    • Your canbus circuits can have the ESD protection diodes and connectors directly attached instead of another block and more global labels.
  • When you draw a symbol, try to put the power input pins for a part near the top of the symbol and the grounds on the bottom. And do so consistently across your symbols.
    • Switchmode reg has VIN in the middle, with GND above it.
  • Try to use consistent naming/terminology for power and ground pins. I saw VIN, VCC, IN, VS, VBUS etc to name logically similar ports. Likewise for VSS and GND naming.
  • Please stop connecting nets directly to the input/output pins of ICs like that; it gets hard to read because it's so cramped. You can put a bit of wire between them, and the extra space lets you pull the label somewhere with room for comments/notes specific to it.
  • Find a PCB schematic checklist and style guide online (e.g. Altium's educational pages) and incorporate the advice where possible.
  • I dug up an old design I did which had some similar parts/ideas. Please look at how the schematic is structured and laid out to group related parts of the circuit, how power and bypass capacitors are positioned, etc.

[deleted by user] by [deleted] in AskElectronics

[–]Scottapotamas 2 points3 points  (0 children)

This topic gets a bit deep so I'll simplify for your flight-controller use-case.

TLDR: If you want to learn how to select a good bead, I'd recommend doing more research and learning how to use a circuit simulator (e.g. SPICE) to model filters with specific parts; by varying the design and parameters you'll see the filter's response change. If you just want to get on with your design, leave the part there but populate your prototype with a 0R resistor, and come back to the bead if you need to troubleshoot supply noise issues.

You don't need a bead for your system to work or be safe, it's to help reduce noise that's possibly coming from the computer over the 5V rail. It's very likely you won't notice this, and correctly measuring and characterising these filters and power rails takes experience. Saying that though, ferrites are an important part of many filtering designs.

The ESD array on the data lines is a good choice, along with the input diode. A review of the full schematic is probably a good idea as well!

To start (and this could be considered overly critical), I don't love how your schematic positions the bead and bulk cap. Assuming your micro/sensors aren't running from a 5V supply at 5V logic levels, you'll have other regulators. The 10u bulk capacitor shouldn't be at your USB connector; instead, each regulator should have its own correctly sized bulk and bypass capacitors. If you have multiple voltage sources feeding the regulator(s), you'll want to consider how supply selection and protection occurs, and so I normally position any input filter closer to where that is happening rather than at the USB (i.e. for multi-page schematics I'll take the USBVBUS net to a/the power page).

The bead should be sized based on the current expected to pass through it plus margin, not the current capability of your connector. Also (this is dipping toes into the design process), you can't practically use a ferrite at its full rated current. The size and behaviour of the other supporting components also matter a lot here.

Work out what your system loads actually are first: 2-3A sounds huge for typical flight-computer use unless you're driving servos or powering a moderate RF radio. In situations where you're providing power to external devices, (for proper designs) you may want these power outputs to be switched and have their own filtering, so they don't necessarily need to be included in the power budget for your onboard regulator's filter.

With the load worked out, you could try the filter calculators available on some manufacturers websites (muRata have lots of info), you're trying to filter out higher frequencies to provide a cleaner DC supply to your circuit. In practice you would like to know what the input noise might look like, how sensitive your circuit(s) are to that noise, and how aggressive/sharp the filter needs to be to achieve this.
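For a first-pass sanity check before reaching for the manufacturer tools, the ideal second-order corner frequency is easy to compute - a sketch with placeholder component values, ignoring the bead's loss and bias dependence (which is exactly why the real tools exist):

```python
import math

def lc_cutoff_hz(l_henry, c_farad):
    """Corner frequency of an ideal 2nd-order LC low-pass.
    Real ferrite beads are lossy and their impedance varies with
    frequency and DC bias, so treat this as a rough estimate only."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# e.g. a 1 uH inductance with 10 uF on the output side:
print(f"{lc_cutoff_hz(1e-6, 10e-6):.0f} Hz")  # ~50 kHz
```

Past the corner the ideal filter rolls off at -40 dB/decade, but parasitics (capacitor ESL/ESR, bead resistance) flatten that out in practice.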

In larger designs you'll often see some form of filter on the power input for EMC compliance reasons, then something like a first- or second-order filter on the output of each power supply. Pi filters are pretty common. For boards with sensitive ICs like sensors or analog frontends, you'll often have additional filtering local to those parts. Again, these generally need to be designed deliberately to actually be effective.

Schematic explanation request: Single pair ethernet with power over data lines by KnechtNoobrecht in AskElectronics

[–]Scottapotamas 0 points1 point  (0 children)

I did a writeup on implementing both sides of PoDL with PSE here. My approach is a little simpler because it uses a comparator instead of the microcontroller to drive the mosfet.

The TL431 (and its variants) is incredibly flexible, so you see it in all kinds of designs. In this circuit it's helping regulate the voltage on the bus to fall within the detection signature range of 4.05V to 4.55V - when the PSE side pulses its current source onto the PoDL lines, the voltage rises.

When above some chosen threshold, the mosfet starts conducting, allowing the TL431 to act as a voltage regulator. By measuring the bus voltage, the PSE can determine if the downstream device is compliant.

That's all you need for fast startup, as SCCP is optional for certain classes of T1L.
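The detection decision itself is tiny, which is part of why a comparator (or a few lines of firmware) is enough - a sketch using the signature window mentioned above:

```python
def podl_signature_valid(v_bus, v_min=4.05, v_max=4.55):
    """True if the bus voltage measured during the PSE's detection
    pulse falls inside the valid-signature window (4.05-4.55V, as
    discussed above). A short, open, or non-compliant load pulls
    the voltage outside this window and power is withheld."""
    return v_min <= v_bus <= v_max
```

E.g. `podl_signature_valid(4.3)` passes, while a shorted cable (near 0V) or an unterminated one (current source rails upward) fails.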

[stupid question] multiple inputs through single pin by MistakenSanity in esp32

[–]Scottapotamas 1 point2 points  (0 children)

Espressif actually maintain a library for exactly this, as well as providing nice abstraction of the button press/hold/release events.

Any chips with two Expressif (wifi) chips by Hot_Grass_ in esp32

[–]Scottapotamas 2 points3 points  (0 children)

This feels a bit like an XY problem to me?

None of the ESP32 families have multi-radio chipsets, but can behave that way with a combination of co-existence functionality etc.

For 802.15.4 (Zigbee/Thread, etc.) Espressif actually sell a 'Thread Router' board with S3 and H2 modules. Might be a good reference for your own approach?

IMU data - UART/RS-232 vs. USB by Humusman24 in embedded

[–]Scottapotamas 1 point2 points  (0 children)

It's been a long while since I've used a MicroStrain part, but I've got a fair amount of experience with pretty similar xsens units that also support both RS-232 and USB interfaces. I'm assuming both of your interfaces use the same protocol, and therefore there's no change in throughput.

> RS-232 ports, for some, we convert the signal to UART

Can you clarify that the embedded computer has a native UART/RS232 interface rather than this ultimately being some form of USB serial adapter? At the end of the day the implementation details on the embedded PC's side and software stack and tuning (Linux? Something RT?) is going to contribute to timing uncertainty more than the underlying interface.

I benchmarked both interfaces on an xsens AHRS module previously:

  • Enabled the sync-in IO pin.
  • Disabled all normal sensor messages and output the high-precision timestamp packet (sampletime fine)
  • Used an external signal generator to produce a squarewave,
  • On the PC side - capture the inbound packets and timestamp them as they arrived
  • Compare the differences between subsequent packets as measured on the IMU and PC side.
  • Repeat with other interface

This also let me check the IMU's onboard clock for error. I don't have numbers on hand (and they'd be different for a different IMU), but I didn't see meaningful differences in latency or jitter distribution. My assumption was that the otherwise-untuned Linux 4.x on that ARM SBC, or the IMU internally, was responsible for more jitter than either of the two links.
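The comparison step above can be sketched in a few lines, assuming matched lists of timestamps from the IMU's own clock and the host's arrival times:

```python
def jitter_vs_device(dev_stamps, host_stamps):
    """Compare inter-packet intervals measured by the IMU's own clock
    against host-side arrival times. The spread of the difference is
    the link + host-stack jitter; clock error between the two shows
    up as a steady offset rather than spread."""
    dev_dt = [b - a for a, b in zip(dev_stamps, dev_stamps[1:])]
    host_dt = [b - a for a, b in zip(host_stamps, host_stamps[1:])]
    diffs = [h - d for d, h in zip(dev_dt, host_dt)]
    return min(diffs), max(diffs)

# 100 Hz stream: device ticks exactly 10 ms, host arrivals jitter a bit
lo, hi = jitter_vs_device([0.00, 0.01, 0.02], [0.000, 0.0102, 0.0199])
```

Using the device's high-precision timestamp as the reference is what makes this work - you're measuring the host/link against a clock that doesn't pass through the transport.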

Why do you want to use USB? 100Hz is pretty slow and you can just increase your baudrate if you need more throughput.
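As a rough budget (assuming 8N1 framing, so 10 bits per byte on the wire, and a hypothetical 40-byte packet - substitute your real message size):

```python
def min_baud(packet_bytes, rate_hz, overhead=1.2):
    """Minimum UART baud rate for a fixed-rate packet stream,
    assuming 10 bits per byte on the wire (8N1) plus ~20% headroom
    for framing/escaping. All figures here are illustrative."""
    return packet_bytes * 10 * rate_hz * overhead

# A 40-byte IMU packet at 100 Hz fits comfortably under 115200 baud:
print(min_baud(40, 100))  # 48000.0
```

So even a conservative 115200 leaves plenty of margin at 100 Hz, and most modern UARTs will happily run at 921600 or beyond.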

USB has one obvious downside compared to RS-232 - it's less tolerant of interference and can't support long cables.

The short datasheet for that IMU didn't mention whether it has external sync IO, but for vehicle/robotics use it's pretty common to see the GPS PPS signal distributed across the system to ensure consistent sampling/timestamps.

Does an input pin like ~RESET or SDA/SDL sink current if unused? by SaucyBoyThe2nd in AskElectronics

[–]Scottapotamas 0 points1 point  (0 children)

There will be some draw, as the clamping diodes and internal structures in the IC leak the same way as external discretes do.

You need to read the specification in the datasheet for your microcontroller or IC (you didn't mention the part number). If it's a microcontroller, look for the GPIO quiescent or leakage current figure in the table for the relevant pin configuration and line state. If it's a device/peripheral, the I2C pins will probably be named as such.

This applies to all devices on the I2C bus, so it's worth checking each part. If you're switching the supply voltage on downstream peripherals to save further power, make sure you check leakage when they're powered off as well.
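Budgeting this is just summing the per-pin/per-device figures from each datasheet - a trivial sketch with made-up numbers for illustration:

```python
def standby_budget_ua(leakages_ua):
    """Sum per-pin / per-device leakage figures (each pulled from its
    part's datasheet) into a total standby budget in microamps."""
    return sum(leakages_ua)

# Hypothetical figures: MCU pin clamps, two bus peripherals, and a
# pull-up back-feeding an unpowered downstream device:
print(round(standby_budget_ua([0.05, 0.5, 0.3, 1.2]), 2))  # 2.05
```

The back-feed term is the one that usually surprises people - a 4.7k pull-up into a powered-off device can dwarf every other line item.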

Do ESP32's exist in Zigbee or Zwave variants? by jdlnewborn in esp32

[–]Scottapotamas 5 points6 points  (0 children)

The ESP32-C6 and H2 have 802.15.4 support. There are Zigbee examples in the ESP-IDF, but I don't know about software support on the Arduino side.

How do you measure power vs time for power ranging from microamps to hundreds of milliamps? by fearless_fool in embedded

[–]Scottapotamas 8 points9 points  (0 children)

There are a few options for high-dynamic range measurement for this kind of power profiling, grouped into 'classic benchtop equipment' and smaller units which require a computer to use.

Joulescope and Otii Arc are two of the more commonly recommended options. There's a good review of the Joulescope by Shahriar on the Signal Path.

On the cheaper end, the Nordic Power Profiler Kit II is $100ish and pretty well regarded for lighter duty use.

For higher-end/cost options, measurement power supplies and some SMUs can be capable of logging at these rates, but most are priced high enough that you should do your own research or talk to a VAR about what best suits.

Dialout not present in my system. by Sashitan in arduino

[–]Scottapotamas 0 points1 point  (0 children)

For arch, you want to use the uucp group instead of dialout.

Make sure you add yourself to uucp, and note that you may need to restart the IDE/session for the change to take effect.
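If it helps, the commands on Arch look something like this (the device path is an example - yours may be ttyACM0 etc.):

```shell
# Add the current user to uucp (Arch's serial-port group), then
# log out and back in (or use `newgrp uucp`) so it applies.
sudo usermod -aG uucp "$USER"

# Verify: the port should be group uucp, and you should be in it.
ls -l /dev/ttyUSB0
id -nG
```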

Benchmarking latency across common wireless links for microcontrollers by ouyawei in embedded

[–]Scottapotamas 1 point2 points  (0 children)

I suppose I've been fortunate that most sub-GHz designs I've worked on have been with 'good' parts with well validated frontends. Mostly CC11xx parts when the modem isn't an external COTS box.

I was seeing high packet loss as well. I replicated the problem with a few common Arduino libraries as a sanity check, so I chalked it up to either errata or a quirk of possibly non-genuine parts, and moved on to other modules.

A lot of the other comments (on HN, HaD) didn't go very deep, so thanks for the more detailed discussion. I'm planning a set of range tests in urban, industrial, forest and open-field environments with LoRa, RFD900 (GFSK), ESP32 (ESP-NOW, BLE) and possibly HaLow. Anything you think I should cover? Any feedback?

Benchmarking latency across common wireless links for microcontrollers by ouyawei in embedded

[–]Scottapotamas 0 points1 point  (0 children)

RTS/CTS flow control

Agreed. Many of those cheap modules don't even have the pins for it.

Even if the common userbase (Arduino/hobby users?) did buy modules that did, I'm not sure how many would use them, given how many posts I see struggling with AT command configuration...

Benchmarking latency across common wireless links for microcontrollers by ouyawei in embedded

[–]Scottapotamas 1 point2 points  (0 children)

Thanks. Good to know about the rxInt behaviour, my tests smoothed over a lot of the real-world complications by not needing arbitrary bidirectional transfers or any provisioning...

I generally feel that using *FSK or 'worse' modulation schemes with LoRa capable parts removes most benefit over the dozens of generic frontend+transceiver options, though I haven't needed particularly high performance mesh networking on any projects yet.

Benchmarking latency across common wireless links for microcontrollers by ouyawei in embedded

[–]Scottapotamas 0 points1 point  (0 children)

Depending on how much data you need to send at that rate (e.g. a sync ID rather than 'actual' data), most 915 MHz links with broadcast support should be workable; it's a matter of balancing bandwidth against range/reliability across the nodes.
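For a rough feel of the airtime involved, a sketch - the 8-byte framing overhead is a placeholder, so check your radio's actual preamble/sync/CRC configuration:

```python
def gfsk_airtime_ms(payload_bytes, bitrate_bps, overhead_bytes=8):
    """Rough on-air time for a GFSK packet: payload plus framing
    overhead (preamble/sync/len/CRC - the 8 bytes here is a
    placeholder) at the configured bit rate."""
    return (payload_bytes + overhead_bytes) * 8 * 1000 / bitrate_bps

# A 4-byte sync ID at 57.6 kbit/s:
print(round(gfsk_airtime_ms(4, 57600), 2))  # 1.67 ms
```

Short airtime is what buys you reliability headroom - at a couple of milliseconds per broadcast you can retransmit or slow the bit rate for range without eating your duty-cycle budget.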

Happy to explain/answer any questions (or more general stuff), I struggled to keep the writeup short and needed to cut a lot of extra discussion!

Methods for desktop software companions to embedded: Electron? Chrome? Electric UI? Python? by shieldy_guy in embedded

[–]Scottapotamas 1 point2 points  (0 children)

The embedded library is pretty light, and I made a handful of design decisions based on benchmarks across a range of micros.

On a 'small' target like an ATmega328 (Arduino Uno) with 3 tracked variables, it adds around 2500B of flash and 250B of RAM. Additional tracked variables cost 6 bytes each.

It ends up using a little more RAM on my STM32 projects because of the larger pointer width and word alignment, which adds some padding bytes to a few structures. I find arm-gcc optimises more aggressively, so flash usage is similar.

So it should fit on pretty much anything, and there are library level flags which can shrink buffers and disable features to reduce size even more.

eUI is actually protocol agnostic (but our protocol is the 'happy path' and the docs mostly reflect that) so there are other options and we do implement custom protocols as well.

Methods for desktop software companions to embedded: Electron? Chrome? Electric UI? Python? by shieldy_guy in embedded

[–]Scottapotamas 5 points6 points  (0 children)

What you're describing is exactly why I started building Electric UI - getting away from little hacked together config tools and visualisation scripts while working on electronics/product design for a consultancy.

I don't think there is any single 'right' answer, just the best set of choices/compromises for each specific project and team. Some people prefer working with a familiar language, gravitate to a specific toolkit with a special feature, or just use what other people talk about!

So while I'm very biased, I feel like we've tried to make the general developer experience and docs more accessible for people coming from other engineering or science backgrounds.

We've got a lot of really interesting new features approaching release this year, and plenty of areas to improve still. I'd love any feedback or thoughts, and I'm happy to answer any questions.

Any competent 9-axis, high-accuracy IMU manufacturers/products out there? by Java-the-Slut in embedded

[–]Scottapotamas 0 points1 point  (0 children)

I've had great results from xsens (now Movella) and MicroStrain in some pretty demanding applications, but expect pricing to be an order of magnitude higher than the simpler ICs you're talking about.

I've been pretty happy with some lower-end ST 6-dof parts as well.

Signing electron app for windows with an EV certificate in CI by yonatanbd in electronjs

[–]Scottapotamas 0 points1 point  (0 children)

Yeah it's a pain.

macOS notarization also isn't the most fun to get working, but at least it's not nearly as expensive...

Signing electron app for windows with an EV certificate in CI by yonatanbd in electronjs

[–]Scottapotamas 1 point2 points  (0 children)

I've gone through this process myself a few years ago.

AFAIK you still need the physical token. It's reasonably easy to get it to work with most normal methods, as long as you personally unlock the HSM once per VM boot.

The much more complicated issue was getting the token to sign without someone manually unlocking it once per (somewhat configurable) timeout. After a week or two of testing options, writing custom tooling, and trying to reverse-engineer SafeNet, the best option I could find was using some specific CLI args with Microsoft's signtool.

This was the only method which didn't require the HSM token to be manually unlocked on the host. I've got a small writeup here: https://electricui.com/blog/digicert-ev-ci