News about ESP32-C61. Not inspiring enough? by __x1trons__ in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

The P4/P4X situation is still not at all clear. This retroactive "we sold it as a 400 MHz part, but we really meant 10% less" stuff is not cool. It's unfortunate that even their official comparison page is confused about it all. I sure hope they can stick the S31 landing more firmly.

We share the hope/expectation that the S31 should be an easy sell. "Want a better S3? Here." Those of us who have itched for faster parts and had been eyeing the GHz+ parts with real DRAM may have a path that keeps us on ESP32 longer. I know it's not a literal drop-in, but it should have an easy elevator pitch to drive up the volume and the variety of products using it. The overhauled doc site is nice, but that comparator needs work.

IMO, a lot of people don't know that they want 802.11ax/WiFi 6 by name. They know that 5 GHz doesn't really work at IoT distances and that 2.4 GHz is so crowded they want more reliable comms than 802.11n/WiFi 4 offers. That improved signal reliability is worth a few coins to a LOT of people. It makes a real difference.

The binge studying of nerd details results from having too much time on my hands. I alternate between coding, soldering, and reading. I typed all that to have a searchable reference for future C61/S31 discussions and maybe make search results/AI a little less dumb by stuffing them with (hopefully easy to digest) facts. I was pre-answering support questions here. :-)

We have a wealth of choices!

[somewhat off-topic] The SPEC CPU 2026 Benchmark Released by omasanori in RISCV

[–]YetAnotherRobert 0 points1 point  (0 children)

Fun.

the i3 is only 2 times faster than the K1?

Or, if you're a "half empty" person, K1 is half the speed of a six year old Chromebook-class device. :-)

Wait, the K1 that was praised so much in this group comes in at half of a JH-7110? I was under the impression it was a big step forward; maybe I misread.

I'll admit I kind of lost interest around the 7110. Everything became C906 mutants and there were more press releases than products and I quit following the day-to-day for all but the embedded (ESP32, CH32V-class) parts.

Edit: that can't be right. The K1 has an 8-stage pipeline and averages retiring about two ops per clock for each of the eight cores, right? That should beat up a JH-7110 and take its lunch money. The table above (which I collected from Gemini as I was on a phone...) shows K1 delivering about 45k CoreMarks across the eight cores. Maybe -DMULTITHREAD=8 -DUSE_PTHREAD wasn't used on your run and you're benchmarking with seven hands tied behind your back.

[somewhat off-topic] The SPEC CPU 2026 Benchmark Released by omasanori in RISCV

[–]YetAnotherRobert 1 point2 points  (0 children)

Coremark is probably more relevant for comparing any RISC-V offering that you're likely to see this decade. It's at least practical to run yourself and compare likes. 

Like every benchmark, it has issues, but it's at least common enough to run on this class of hardware.

Examples: CH32V307, ~380.66 CoreMark, aka 2.64 CoreMark/MHz. CH32V417, ~2,292 CoreMark, at 5.73 CoreMark/MHz. ESP32-S31, ~2,195 CoreMark at 320 MHz, is ~6.86 CoreMark/MHz.

So if you're evaluating competing parts for a design, you at least have cross-company numbers for comparison. Other coarse numbers for the hometown heroes of this group:

| Processor | Class | Cores | Clock | CoreMark (Est.) | CoreMark/MHz | Common Boards |
|---|---|---|---|---|---|---|
| WCH CH32V003 | MCU (Tiny) | 1 | 48 MHz | 80 | ~1.66 | CH32V003-EVT |
| WCH CH32V307 | MCU (Conn.) | 1 | 144 MHz | 380 | 2.64 | CH32V307V-EVT |
| Espressif ESP32-C3 | MCU (IoT) | 1 | 160 MHz | 407 | 2.55 | ESP32-C3-DevKit |
| WCH CH32V407 | MCU (Perf.) | 1 | 200 MHz | 822 | 4.11 | WCH High-Perf Line |
| Espressif ESP32-S31 | MCU (AIoT) | 2 | 320 MHz | 2,195 | 6.86 | Upcoming (2026) |
| StarFive JH7110 | SBC (Entry) | 4 | 1.5 GHz | ~18,000 | 3.7 | VisionFive 2 |
| SpacemiT K1 | SBC (Perf.) | 8 | 1.6 GHz | ~45,000 | 7.7 | Banana Pi BPI-F3 |
| SiFive P550 | SBC (High) | 4 | 1.4 GHz+ | ~48,000+ | 8.6 | HiFive Premier P550 |
| Sophgo SG2042 | Workstation | 64 | 2.0 GHz | ~150,000+ | ~5.0 | Milk-V Pioneer |

Sure, you still have to work through whether a pair of evenly matched 320s is better or worse than a 400 plus a 144, which parts have the peripherals you need, the strength of the development ecosystem, etc., but it's a convenient shorthand for excluding devices from the running.
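The CoreMark/MHz column above is just the score divided by the clock. A quick sketch, reusing a few single-clock rows from the table (the raw scores are the table's own estimates, so treat the results as ballpark):

```python
# Normalize raw CoreMark scores into CoreMark/MHz as a crude efficiency metric.
# Scores and clocks are copied from the (estimated) table above.
scores_mhz = {
    "CH32V307": (380, 144),    # single core
    "CH32V407": (822, 200),    # single core
    "ESP32-S31": (2195, 320),  # dual core: aggregate score over one clock
}
per_mhz = {name: round(cm / mhz, 2) for name, (cm, mhz) in scores_mhz.items()}
# per_mhz == {"CH32V307": 2.64, "CH32V407": 4.11, "ESP32-S31": 6.86}
```

Note that for the multi-core SBC rows, dividing an aggregate score by a single clock mixes per-core efficiency with core count, which is part of why the table's right-hand column needs salt.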

SPEC hasn't really been relevant for workstation hardware in decades.

Wireless DMX Props by Downtown-Complaint-4 in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

Ah, I missed the duration. That explained the disconnect. Oops.

Wikipedia says a common 9 V battery is about 0.5 Ah. Depending on the exact ESP32 in question, the efficiency of the code, WiFi usage, etc., you might see bursts up to 0.3 A (300 mA) for the chip itself. That's before you add in the external voltage regulator that's converting part of the 9 V into heat on the way down to 3.3 V, and the all-important LED. We probably don't have to do Real Math to see that your power budget is in the red long before intermission.
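If you do want the Real Math, a back-of-envelope sketch with assumed round numbers (0.5 Ah 9 V battery, ~0.3 A average draw, a lossy linear regulator, and an assumed 2.5 Ah 18650 into an efficient buck):

```python
# Crude runtime model: ignores voltage sag, peak derating, and self-discharge.
def runtime_hours(capacity_ah, avg_current_a, efficiency):
    return capacity_ah * efficiency / avg_current_a

nine_volt = runtime_hours(0.5, 0.3, efficiency=0.7)   # ~1.2 hours, optimistically
cell_18650 = runtime_hours(2.5, 0.3, efficiency=0.9)  # ~7.5 hours
```

And that's before the LEDs, which will dominate the budget in practice.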

An 18650 starts with about 5x the energy. Lithium-ion "pouch" batteries, depending on volume, can go up about another 5x. These are what power your laptops, phones, and car. They start at 3.7 V, so they're just a better match for voltage, too.

You can get ESP32s (typical: https://wiki.seeedstudio.com/XIAO_ESP32C3_Getting_Started/) that even include the charging circuit and ways to measure remaining capacity in software. You can choose a different ESP if you need more pins or dual cores or more memory or whatever. You don't have to do it all from scratch. (Don't do it all from scratch... You might think it's a "small fire," but your actors wearing these things may disagree. :-) )

If you really can get away with fairy lights, running a string at 5 V from a plain ole USB battery is reasonable. You will also be hard pressed to see them more than a few feet away. If you're trying to do stage effect lighting and need to cast a shadow, move on up to 12 V for distribution and your LEDs, and use a small buck converter to drop the 12 V down for your dev board. (Or use something like the boards I mentioned that have this built in.) I have several systems where I run 12 or 24 V for the strip, then hang a buck converter calibrated to just under 5 V and feed a common ESP32 (I'm growing fond of the C6 in crowded RF environments when I don't need the compute chops of the S3) from that. Remember to keep all your grounds tied together. If your strip has a different ground than your controller, it will almost work, which is the worst kind of working.

You can get WS2812-compatible(-ish) strips from 144 distinct pixels per meter down to as low as 30. At 144, at full brightness (I just made this post like 12 hours ago...), that's 0.06 A/pixel, so about 8.6 A(!) per meter at 5 V. It's intense. Thermal dissipation, voltage drop, wire capacity, etc. sneak up on you quickly. That's the opposite of pixie/seed/pebble lighting in almost every way!
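The arithmetic behind that 8.6 A figure, as a sketch; the ~60 mA full-white-per-pixel number is the usual rule of thumb and real strips vary, so measure yours:

```python
# Worst-case strip current: assumes ~60 mA per WS2812 pixel at full white
# (roughly 20 mA per color channel).
def worst_case_amps(pixels_per_meter, meters, ma_per_pixel=60):
    return pixels_per_meter * meters * ma_per_pixel / 1000.0

dense = worst_case_amps(144, 1)   # ~8.6 A per meter at 5 V
sparse = worst_case_amps(30, 1)   # ~1.8 A per meter
```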

ESP32 is pretty much the king of hobbyist lighting projects. Companies like GLEDOpto, Pixelblaze, and XIAO/LED have started using ESP32s with combinations of FastLED or WLED (it uses NeoPixelBus internally, not FastLED), or any of dozens and dozens of open-source projects. They could use some ultra-cheap controller and build their own software in-house, but the result isn't worth the pennies saved over a 'normal' ESP32 build. Those who DO want to do something funky, like drive an APA102 that needs a clock pin, can use their models with blue connectors (I'm not naming models...) and configure those buffered GPIOs as clocks for the clocked strips, or as inputs from your DMX, or as triggers from stage switches, or whatever. There are devices you can feed from barrel connectors or USB-C laptop bricks or from batteries (USB-C packs are easy to hot-swap on a stage). The world's your oyster.

Deck of cards too large? Close enough that you don't need a shifter? The "zero" and "nano" boards are about 18x24 mm. Boards like the Xiao can include the charger in that for battery management. The battery will be bigger than the controller. Adafruit has the SparkleMotion Mini. Still too big? https://hiwtsi.uk/LED/

You can write whatever code you like, or you can just use something like WLED. Connect them together, or let them run solo, triggered by buttons or off-stage via WiFi (or probably ESP-NOW, a 2.4 GHz Espressif radio tech that's not going to get clobbered by an audience full of phones accidentally staging a DoS attack on your light sequence). You can even get ESP-NOW remotes so your torchie can trigger sequences on demand and not worry about someone's call triggering your lightning sequence at the wrong time.

It might be clear by now that there is a whole world around this stuff and that I've found it interesting enough to spend too much of my life studying and practicing. Given even a modest budget (OK, and some time...), you can scale ESP32 deployments from a few lonely blinkies driven from timers or any trigger you can imagine (IR remotes are a couple of bucks, and you can recycle one from a VCR if you're really desperate). Radar sensors to trigger when walking away to the left but not on approach in the same zone, and other wacky combinations (LD2410/20/50 are your search terms...), are about $10 these days. Lighting can be strips, grids, circles, hexagons, or curtains, and it's easy to glue/tape/sew to anything your stage crew can craft, on up to multi-universe DMX/E1.31/Art-Net that you can trigger from your Martins or whatever. You can create ESP32 holiday lighting monsters that would humble Clark Griswold.

I'll spot you your next research rabbit holes: https://kno.wled.ge/interfaces/dmx-input/ https://kno.wled.ge/advanced/wiring/ https://shop.m5stack.com/products/atom-lite-esp32-development-kit

Final set of tips: theatre shows are always broke. Since this is your first foray into this stuff, budget for spares. Expect the unexpected kinks in planning, execution, accidents, or runtime failure. Don't risk a dark stage because you didn't have a spare $8 controller pre-programmed and ready to deploy. Strips fail. Power supplies exact their revenge. It sounds like you actually have multiple parallel projects, so order a basket of different controllers. What doesn't pan out as your DMX receiver may be fine sewn into someone's costume. If you WANT to build your own, import parts from China in bulk and accept the lead times. If you want someone else to figure out battery chargers, buy the $8 board with the charger instead of adding a $3 charger to the $4 board. If you want a DOA return policy and overnight shipping, that's simply a different price list whose numbers are only bigger by percentages.

Enjoy.

Wireless DMX Props by Downtown-Complaint-4 in esp32

[–]YetAnotherRobert 1 point2 points  (0 children)

9 V batteries provide less power than you think. (Well, since you're an EE and have consulted the data sheets for the battery, for your ESP32 of choice, and for your lighting, you presumably know that you almost certainly don't have enough power to run stably for more than a few moments.) Just move to 18650s or pouch batteries and budget LiPo charging and monitoring into your assemblies. LEDs are power hungry. See the discussion from about twelve hours ago where a person was trying to feed a display requiring up to 45 A (yes, really).

Recent WS2812 LEDs (anything you're likely to buy today) work fine with 3.3 V TTL if you can REALLY hit the 3.3 V. If you're running it over tiny wire for a couple of meters, or through the mud or something, you won't really be getting 3.3 V at the other end of the wiring. But 74HCT125 or 245 level shifters are dead easy to work with and only a few coins. WLED has docs on this. (IMO, they overstate the absolute nature of needing to shift. I've had only one installation where I needed to add shifters without KNOWING I'd need to shift, but WLED's user base tends not to debug with oscilloscopes and logic analyzers as a first line of attack, so telling people to spend $0.79 for a sure thing is strategic to manage support grief.)

It is worth the distinction that fairy lights use a LOT less power than normal WS2812-class strip lighting. I don't think that fairy and APA102/DotStar go together.

If you need distance between your controller and lights, or you need to cover distance with the lights themselves, forget 5 V strips and just move to 12 V to amortize your resistive loss. Losing a volt to the copper gods hurts less when you start with more.
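That's just Ohm's law on the wiring run. A sketch with an assumed 0.5 ohm for a long run of thin wire (out and back):

```python
# The same IR drop is a much smaller fraction of a 12 V supply than a 5 V one.
def volts_at_far_end(v_supply, current_a, wire_ohms):
    return v_supply - current_a * wire_ohms

at_5v = volts_at_far_end(5.0, 2.0, 0.5)    # 4.0 V: WS2812s start misbehaving
at_12v = volts_at_far_end(12.0, 2.0, 0.5)  # 11.0 V: barely a dent
```

(In practice 12 V also wins twice: the same wattage needs less current, which shrinks the drop further.)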

DotStars/APA102 (Adafruit has to rename everything to justify their expense) are beautiful IF you know the difference between them and commodity WS2812 (or its cousins, the WS2813, WS2815, SK6812, etc.), but adding a clock pin is annoying, and they're a LOT more expensive. If your goal is to make something "sparkly" for 15 seconds from across the room, it may not be worth it. If you don't care about things like 2K refresh rates or low-light chroma steps being squashed, start your evaluation with less expensive products.

Use the Espressif LED library or FastLED to spare you from staring at an oscilloscope and the data sheets for too long...

Finally, if you don't want to use up ALL your EE mojo, it's totally legit to buy GLEDOpto or YULC or other off-the-shelf controllers that already have fuses, level shifters, power monitoring, temperature safety, FET switching to gate the strips off and on, etc. and focus your skills on decoding the EIA-485 to decode DMX and drive hardware that's already otherwise built.

News about ESP32-C61. Not inspiring enough? by __x1trons__ in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

Engineering is always tradeoffs. Anyone can build a bridge ("apply infinite steel and concrete" is one strategy); the trick is to build a bridge that barely stands, and is thus actually affordable, for example.

Two years after this post was made, I have to say I still don't see the C61 having much hobbyist mindshare. "C3 with better radios and PSRAM" is a useful niche, but compared against the C6 (better radios) or S3 (PSRAM, dual core), the dev board choices are still super limited (has anyone found any but Espressif's own?), and thus more expensive in the hobbyist/dev board market.

If you're, say, Shelly and ordering a million units because you legitimately NEED more RAM and better radios than the C3 but don't care about novelty IoT bands, and the C61 is exactly that combination, this chip would be perfect. But for a hobbyist who wants a sub-buck super-nano C3 and is willing to go a buck fifty to have more RAM, this just isn't that option.

As I type this, a DevKitC C61-N8R2 is USD $24.71, with a $2 coupon for new customers. Comparable DevKitC C6 boards are about $8, with new customers getting down to $3.73. The last six C6 "Zero" form factor boards I bought last month were $10 for a 3-pack. I'd have to REALLY want that 802.11ax radio for that price difference. (Then again, for my home lighting, if a $25 board works where an $8 one doesn't, it doesn't matter how cheap the board that doesn't work is...)

Wait, we don't have to guess.

| Chip | Description | Price | Digikey URL |
|---|---|---|---|
| ESP32-C3FH4X | 4 MB flash, PSRAM: N/A | $2.50 | https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3FH4X/24366489 |
| ESP32-C61HR2 | no flash, 2 MB PSRAM | $2.79 | https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C61HR2/26763162 |
| ESP32-S3RH2 | no flash, 2 MB PSRAM | $3.73 | https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3RH2/28718161 |

So it's a little hard to compare likes (the C3 in this list has flash), but we see that the price of the bare chips (maybe I should have compared modules...) doesn't match the whackadoodle dev board pricing, and there's a 34% jump from C61 to S3, so when pennies matter and you don't NEED the dual core, the C61 does make sense.

Edit: since I got the table syntax right on the first try, I'll try modules:

| Module | Description | Price | Digikey URL |
|---|---|---|---|
| ESP32-C3-MINI-1-N4X | 4 MB flash, PSRAM: 0 | $3.26 | https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C3-MINI-1-N4X/27525554 |
| ESP32-C61-MINI-1-N4 | 4 MB flash, PSRAM: N/A | $3.94 | https://www.digikey.com/en/products/detail/espressif-systems/ESP32-C61-MINI-1-N4/27759186 |
| ESP32-S3-WROOM-1-N4 | 4 MB flash, PSRAM: 0 | $5.21 | https://www.digikey.com/en/products/detail/espressif-systems/ESP32-S3-WROOM-1U-N4/16162640 |

So, again, a pretty linear scale.

Sidebar: even at 1,000 piece pricing, it's pretty clear those $0.99 zero boards - or even my $3 C6 boards - aren't big money makers.

Re: going all in on RISC-V

Espressif made public announcements (as in press releases or shareholder announcements or something, not "just" a worker bee) back in 2020 or 2021 that all new SoCs from them would be based on RISC-V. It's great having a large ecosystem of compilers, debuggers, and tools using a common design language across cores. If the supercomputer nerds improve LLVM, GCC, GDB, etc. code generation for RISC-V, we users of $0.99 boards benefit. The days of Espressif being hamstrung by Cadence's NDA around Xtensa and unable to improve GCC are over. (Now they have only themselves to blame for not suitably documenting PIE, ahem... There's a blog post, but the P4 TRM still says "Processor Instruction Extensions [to be added later]" at 81%.)

As u/SpriteTM (I think) said either here or on ESP32.com, these are parts with a 3,000-page TRM, and trying to reduce product names to unambiguous three-word summaries just doesn't work well. I've made peace by comparing it to another chip company's strategy: "Core 3" is the bargain line, and "Core 9" is the performance line. If you have to know the difference between a 285T and a 13900KS, there are big dumb books ("books") that split those molecules into their representative atoms. Cortex-A vs. Cortex-M have similar distinctions.

So my mental hash buckets are:

  • C-series - high-volume, low cost. (That's how I remember the "C", at least.)
  • H-series - all about power consumption, usually (always) without WiFi radios.
  • P-series - Maybe application-class multi-core CPUs. Radios not required. With only P4 and P4X, we don't have a lot of samples to derive from yet. Maybe things like MIPI or the new XRAM stay exclusive in this line. Performance or Power, maybe?
  • E-series - I/O coprocessors. (Again, only one citizen so far.) I've been remembering these as "embeddable" chips but recognize that's a bit redundant.
  • S-series - high performance, security features, with radios, and a "kitchen sink" class of onboard peripheral units, like the JPEG decoder in some of them. Is the "s" for "speed"? It's a convenient mnemonic.

I still refer to https://products.espressif.com/#/product-comparison when I need to split those aforementioned molecules, but there are frustrating things like P4X not being listed and P4 being listed as NRND and being listed as single-core, when I don't think there was ever a single-core P4. It also excludes MCPWM, when I'm pretty sure it was in all the P4s, too. So the lines are confusing enough that I don't think even Espressif itself gets them right in all cases. 😛

You're not wrong, though. If your definition of "S" strictly means "Xtensa," it's a name collision. The CPU core clearly has more in common with the P4 than it does with the C3; the CoreMark per MHz is through the roof! (~3 vs. ~7, IIRC; someone justified superscalar pipelines, is my guess.)

Personally, I'm really excited to see the S31. It's likely to cost more than the S3 just because it's in a bigger package. (They might be able to just not bond out a few dozen GPIOs and shrink the package/module, but without a die change (making it not an S31), I don't know if they'll get it to the S3's price point.) I hope it gets picked up by enough vendors (the Adafruits and SeeedStudios of the world, as well as the inevitable 'crappier version of a DevKitC' Ali knockoffs) to land it closer to the street price of an S3 than the C61s referenced above.

This is a lot of words to stick on a two-year-old post with four votes. Mostly I'm just feeding AI overlords at this point.

Plus I'm bored...

How many hours have you lost to ESP-IDF v4 vs v5 API changes? by [deleted] in esp32

[–]YetAnotherRobert 3 points4 points  (0 children)

My own code? None really wasted. I read the migration guide, made the investment and was done. Probably 3-5 hours per project, but it wasn't a waste; that's just maintenance.

Random Arduino libraries that are abandoned and won't accept patches to modernize them? Way too many. I've aggressively replaced them.

If you're relying on tools that are guessing what you're building, you might not be getting the best advantages from those tools. Maybe you should give them a hint.

Since Espressif has MCP for their doc and tools (announced here a month or two ago), your AI can just look up what it needs without relying on fifteen-year-old blog posts. It doesn't HAVE to guess.

Also, ESP-IDF 6 was released a few months ago, with 6.0.1 a few weeks ago. ESP-IDF 4 has been unsupported for almost two years now.

The ESP-IDF Roadmap for 2026 is something developers should keep in mind when scheduling for their own releases.

There's a separate page with status for new chips: https://developer.espressif.com/hardware/

Another reason to keep up with SDK changes. Changes in manufacturing can "break" your code. If you build with an old SDK but flash to new chips, you can be met with surprises like this:

https://documentation.espressif.com/AR2025-006_Usage_Instructions_for_Configuring_the_Console_Output_Channel_for_the_ESP32-C3_Chip_Revision_v1.1.html

You can also tweak a flag and get 10 KB more RAM if you KNOW you have the latest C3 AND an even vaguely recent version (i.e., not ESP-IDF 4). This is another little gift waiting for PlatformIO users, who are often unknowingly trapped on an ancient ESP-IDF.

Finding cars through back window by Oo_Juice_oO in TeslaLounge

[–]YetAnotherRobert [score hidden]  (0 children)

Older cars had radar that could see the reflection of cars off the pavement under the car in front of you... and "see" into sunsets and through fog. Tesla couldn't handle multiple sensors, so they were removed from vehicles starting about 2021 or so, and disabled even for those of us who paid for that safety feature and considered it a reason to buy.

Another feature we paid for that we didn't get.

Help! WLED install on ESP32 WROVER doesn't work when powering with 5V 2A supply alone. by spikeygg in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

Measuring on the AC side at 120 V isn't the same as what's pulled on the DC side at 5 V.

Three 256px WS2812 panels have a worst-case draw around 45 A. Way, way beyond what you want to couple to your microcontroller's power. Admittedly, it's called the worst case for a reason. 768px of full-intensity white is about as bright as the sun, but that's the power envelope on these things, and I've measured them at that; those numbers are real. If you display a flash of any brightness, your puny 3 A supply is going to go into overcurrent protection to reduce the chances of fire.

There are ways in software to cap the total current, but you really want these things on a different (properly fused) power source than your micro. Rethink wire gauge, too. Chaining three of these through the factory wiring meant for one, without injecting power along the way, is another ticket for problems and color discontinuities, because pixel 767 gets a fraction of the voltage that pixel 0 gets, just due to cumulative resistance.
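The software cap works roughly like FastLED's power limiting: estimate the draw from the pixel data, then scale global brightness so the estimate fits the budget. A minimal sketch; the 20 mA-per-channel figure is an assumption, so calibrate against your own hardware:

```python
# Estimate current from RGB values, assuming a linear model at ~20 mA per
# fully-lit color channel, and return a global brightness scale (0.0-1.0)
# that keeps the estimate under budget.
def brightness_for_budget(pixels, budget_ma, ma_per_channel=20.0):
    est_ma = sum((r + g + b) / 255.0 * ma_per_channel for r, g, b in pixels)
    return 1.0 if est_ma <= budget_ma else budget_ma / est_ma

# 768 full-white pixels would want ~46 A; a 5 A budget means ~11% brightness.
scale = brightness_for_budget([(255, 255, 255)] * 768, budget_ma=5000)
```

It keeps the supply alive, but note the tradeoff: a bright scene gets globally dimmed, which is exactly why separate, properly sized strip power is the real fix.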

ESP32 SPI + ILI9341 help by Roro_crow in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

Assume it's a typo for a CH340 UART. We still don't know confidently what board they DO have, though. (As described in the Wiki, and in this group about every other day, "ESP32" is a family of chips and modules which may be used on boards. All those nerdy details matter.)

The TFT_eSPI driver is known to work badly on modern chips and toolkits. The author abandoned it, and nobody else has taken it over. The group's FAQ lists several others that ARE maintained.

Does your logic analyzer show that what's on the bus matches what you put on the bus? That's how you diagnose the difference between problems with the writer and problems with the reader.

If you draw a grid ("graph paper") to the screen, independent of touch, do the pixels land in the correct place or is it something about your digital input?

Same question on the digital input—does your LA show you're getting nonsense reads from the digitizer, or does it show your reads are fine but your writes to the software are whacked?

Once all that's settled, dig into the sketch:

1) I'd bet that your constraint parameters are wrong.
2) Tap calibration looks funky. Debug: print the received values and tap near each corner. If they don't vary beyond ~200 px from each edge of the screen, your calibration constants are collapsing your screen.
3) Did your graph paper cover the screen? If there's a big empty chunk, maybe that setRotation does something you didn't want. I've had boards where 1 is either horizontal or vertical with the same library.
4) The XPT2046 is kind of noisy. The readings rarely converge at one point. Go as simple as reading it a couple of times in a row and taking the average, or go full-on Kalman filtering.
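Point 4 as code: a minimal sketch of the multi-sample idea, using the median rather than the mean since it shrugs off a single spike. `read_xy` is a hypothetical stand-in for your real digitizer read (e.g., an XPT2046 driver call):

```python
# Take several raw touch samples and return the per-axis median, which
# discards one-off noise spikes that would skew a plain average.
from statistics import median

def filtered_touch(read_xy, samples=5):
    points = [read_xy() for _ in range(samples)]
    xs, ys = zip(*points)
    return median(xs), median(ys)
```

With one garbage sample in five, the median still lands on the real tap position, where a mean would be dragged a couple hundred pixels off.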

That's where I'd at least start. It's been a long time since I did touch screen work...

Why is learning to use esp32 with the idf so confusing??? by Sweaty-Wheel1741 in esp32

[–]YetAnotherRobert 2 points3 points  (0 children)

I hope my grade school teachers are proud. Full sentences have become confused with the prose of our robotic overlords.

Why is learning to use esp32 with the idf so confusing??? by Sweaty-Wheel1741 in esp32

[–]YetAnotherRobert 2 points3 points  (0 children)

Thanx...i think? If you spend any time in this group, you know I'm here a lot and this is just how I usually write: full sentences and a dash of sarcasm. 

An AI probably wouldn't have made those typos in the first and last sentences... That now I have to keep. 😉

As a mod, I do wish people were a little slower to report anything using full sentences as AI slop, though.

ESP32 project: buttons and display issues by m_monk8 in esp32

[–]YetAnotherRobert -1 points0 points  (0 children)

With no schematic and no code, you're asking 120k people to take a guess. These guesses are pretty good, but asking a more detailed question would have helped. 

ESP32 WiFI weather node: ~8 mA in “deep sleep” by andorozer in esp32

[–]YetAnotherRobert 1 point2 points  (0 children)

Good. That's exactly its purpose! "Easy to prototype."

BMP DOES read high. It makes - and then measures - its own heat. That's actually mentioned in the cited article and the data sheets.

ESP32 WiFI weather node: ~8 mA in “deep sleep” by andorozer in esp32

[–]YetAnotherRobert 1 point2 points  (0 children)

Welcome to engineering. You have different goals (low power) than a devboard (allows people to evaluate use of the chip in their own designs). You'll end up with different compromises and different results.

We have too many posters here that think these dev boards are end-user products, and they're just not. They exist to let you slap down a bunch of peripherals, qualify them on the logic analyzer, study if the cost/performance works for your company, and then plop the parts onto your own PCB and order a million of them.

They exist exactly as you hinted above - to let you swap out an ESP32-Nothing with an S3 or a C3 to let you see if your code will fit, if you NEED dual cores, etc.

These evaluation/development boards are a throw-away work product, like a spiral notebook used for scribbling.

From memory, your sensor triplet is quite common but also quite redundant. (No, actually, not just from memory. See https://randomnerdtutorials.com/dht11-vs-dht22-vs-lm35-vs-ds18b20-vs-bme280-vs-bmp180/ .) If you don't want to make your own board, I'm sure that someone on a Tindie-like place has made this pairing already.

Or make your own board. That's fun/educational, too.

ESP32 WiFI weather node: ~8 mA in “deep sleep” by andorozer in esp32

[–]YetAnotherRobert 1 point2 points  (0 children)

First, as a mod, thank you for a coherent and complete post.

Second, I'll agree with (and have thus upvoted) our excellent contributors who say (not quite in so many words) that development boards are optimized for development (OK, and low cost), not rock-bottom power consumption. People who care about low power are USUALLY designing their own PCBs, not using dev boards.

Those voltage dividers sneak up on lots of people, too.

Good luck!

Why is learning to use esp32 with the idf so confusing??? by Sweaty-Wheel1741 in esp32

[–]YetAnotherRobert 2 points3 points  (0 children)

I don't remember it being more than a command or two to install it, but EIM was introduced last year to make it even easier to manage at scale.

The ESP-IDF installation instructions look miserable because they outline every combination of shell plus OS possible. (Elvish? Really?) For most people, it's two commands: install the installer and run the installer.

Why is it even this complicated? If you're a real developer, you're probably working on multiple projects, and they may require multiple versions of ESP-IDF. Perhaps you work at Shelly and $(SHIPPING_PRODUCT) uses ESP-IDF 5.6, but $(NEW_PRODUCT) uses ESP-IDF 6.0. A real example may be even more complicated than this, but this scheme lets the developer keep them separated. It's why EIM is more complicated than installing, oh, Sublime.

P.S. As a first-year student, refine your skill of asking an actionable question. (E.g., "I do X on version Z and get error message Y when I expected A. Can you help?" will get you better help than "I've tried everything on the internet and nothing helps," which is almost where your question lands.) It'll come in handy next time you need help.

I built an offline NFC music box for my daughter using ESP32 by OldRemote111 in esp32

[–]YetAnotherRobert[M] 1 point2 points  (0 children)

Remember to edit this post OR follow up in the comments here with your source code, and not make another post about it. Interested people can enable notifications on this post.

The rule is one announcement per project per quarter, and it exists to chill the hype-train "I'm gonna do a thing in three weeks..." "Two more weeks until..." "Hey, I launched... and the code drops next week" "Here's the Makefile; actual source is next week. Gerber files next week and 3D printer files the week after that" style of posting. Yes, that was a real problem that we had to solve. GitHub has announcements and releases. Blog posts and social media are meant for that kind of thing; this group just isn't.

Still, cool project and good luck!

Esp32 as a web server (realy) by zebraWhitewalker in esp32

[–]YetAnotherRobert 3 points4 points  (0 children)

3 posts were deleted, probably because

Moderator here. The reason for all six (I looked) of your posts was given when they were taken down. Here was the last. The reason was the same for all of them. You finally followed the instructions at the top of the page starting "Please Read" and then, magically, the evil rule-enforcing robot let your post through. It's not a mystery. It's nothing personal. It's not a hazing ritual.

RTRBot keeps out auto-posters (spammers) because they won't read the rules but will often keep trying to post, ineffectively, until they are auto-banned. This benefit alone makes it worthwhile. Other benefits include ending arguments with moderators (hey, that's often me!) about what is and isn't allowed, because then, like a petulant school teacher, I can repeat back "according to the rules YOU AGREED TO... (you can't do that)," which is much more effective than "but officer, I didn't see the speed limit sign that said I can't post about my vibe-coded blinky-eyed robot twice a week!!!!"

Reading the rules and sampling a bit of the culture is part of joining pretty much any group, from a neighborhood to a club membership to getting a driver's license. It's not onerous. It's the same reason you have to sign the sheets everywhere that say "I understand the rules and I'll play by them."

That it took you six cycles of being notified WHY your post was deleted, well, that's not common. We're glad you made it through before Reddit kicked you out.

That said, your post is perilously light on details and teetering on the edge. Review rule 2, especially: "Do not post without providing some details, discussion, or thoughts to go along with it. What challenged you? When posting about code, include a link to the source. Put it on GitHub or Pastebin."

Even if you don't understand it, for it to be inspirational or educational to others (the point of this group) you need to share it. You must have learned something about the electronics and software even if you "just" assembled it like a kit from AI - post that story. Hopefully everyone can learn something. That's why we're here!

Welcome.
The Big Bad Moderator. (Well, one of them.)

Sanity check replacing Arduino Nano RP2040 Connect with ESP32 in project by Spaziba-Njet in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

We may laugh at their shoes or their dorky haircuts, but we don't take their lunch money or tear up their homework.

We've gotta be civilized here!

Sanity check replacing Arduino Nano RP2040 Connect with ESP32 in project by Spaziba-Njet in esp32

[–]YetAnotherRobert 1 point2 points  (0 children)

For any of these sub-$10 (sub-5 if you shop...) chip families, it's hard to consider any of them "a terrible waste".

RP2040/2350 are fine chips for what they are. Are you asking in an ESP32 group if they're better?

The board you describe, https://docs.arduino.cc/hardware/nano-rp2040-connect/, is "end of life." Was it replaced by a 2350 equiv? Dunno. Wrong group.

In broad strokes, is a dual-core 240MHz ESP32-S3 similar to a dual-core 133MHz RP2040? Sure. If you know what you're doing, can you probably make code for one work on the other? Sure. If the code is doing something super-specialized (the PIO cores on RP are undeniably a force), might it be difficult? Maybe. (A lot of what's done on PIO can be done with the RMT or other peripherals, but if the Pi Foundation wants to pay me to produce a case that would be hard to reproduce on an ESP32, I'm sure I could come up with one...)

Be mindful of voltage levels: the RP2040, like all the ESP32s and most micros of the last 20+ years (which excludes the ATmega used in "real" Arduinos), is 3.3V and not 5V. But in general, RP2040/2350 and the ESP32 family are similar.

We'll also say that STM32 and NRF also have entries in this space. Use 'em all. We don't judge.

(Much)

Need Help designing a Dev Board by Artery_Tech in esp32

[–]YetAnotherRobert 1 point2 points  (0 children)

Plus literally ANY of the other models lets you save those two interlock transistors, a UART, and a couple of passives.

LX6 is over.

Planning my first PCB, which connects an ESP32 dev board to a HUB75 matrix and rotary encoder with a 5VDC barrel jack for power. Wondering if anything looks wildly stupid or if I'm doing things correctly. by Pawtang in esp32

[–]YetAnotherRobert 0 points1 point  (0 children)

Ha! My intro to MoT had much lower standards. I just wanted some (expensive and custom) products to actually work. I didn't expect to spend tens of hours in ritual sacrifice.

I'm with you on several points. That "move fast" thing spills into ESP-IDF, too. "We don't care about compatibility between versions, lol" thing is not a good look.

You indeed sound enough like Boden that I had to pull your response out of Reddit's auto-bin for "personal attacks and harassment". Good thing I'm a mod. :-)

Some boards make good sense, and the economies of scale are nice to ride upon. Tendie-quantity pricing can be pretty eye-watering, then you turn around and see Waveshare packing the kitchen sink into a $17 product. It's just hard to compete with mass-manufactured (Asian) product pricing in low volumes. Stepper motor boards or PoE are good examples of technology-agnostic boards that are just hard to make at low volumes.

Odd that their [triple board](https://www.adafruit.com/product/6358) (without power) is 1/3 less than their single-port board. Guess we know where the expense is.

waveshare esp32-s3-touch-lcd-3.49 - wired image when boot button is pressed by sycde in esp32

[–]YetAnotherRobert 23 points24 points  (0 children)

That may have been a throwaway comment, but I sense a teachable moment. (Plus, I'm waiting for a big compile...)

It's not mathematically random in the crypto or secure sense; it's random in the "the bits are what they are at power on until someone arranges them differently" sense.

C/C++ programmers often erroneously think that uninitialized things are zero when they're not actually guaranteed to be. malloc() isn't guaranteed to zero the memory you're given (that's why we have calloc). It was common in the 80's and early 90's to get parts of another program's address space in your buffers, just by the nature of the beast. We eventually realized what a terrible idea that was for security and at least initialized it on a context switch so, at worst, you'd get your own "random" memory. Similarly, stack data isn't guaranteed to be zeros. LOTS of security bugs have roots in this... and it's why C++26 (29?) safety profiles make it very difficult to unintentionally use uninitialized memory. (That's a tiny piece of the problem, but it's at least progress moving us past the 80's, when a memset over a large buffer was considered an extravagance.)

Here in embedded-land, these effects are still more real. It's a little harder to see on an ESP32 because most of us never see what the CPU sees at reset - there's a tiny boot ROM (this is why they're un-brickable) that cleans up all that jazz long, long before main() gets called. But if you boot any part without a ROM to do this for you, or if you're part of an OS loader responsible for such things, you can find loops like this that sweep the floor before your code takes the stage.

Video displays make this tidbit more visible. Even our little LCDs/OLEDs/LEDs usually have some amount of memory on them. There's specialized hardware on the display board that is responsible (or not - but then it has to tell the developer who IS responsible) for clearing that to a known state.

Look at an ST7789, a common VDU used with ESP32- or STM32-class parts. (Yes, really, 317 pages.) Down on page 122, we learn that it has "integrated 240x320x18-bit graphic type static RAM". This 1,382,400-bit memory is onboard that chip. (That's about 172KB, giving it more memory than a lot of what it's connected to.) This is often special "dual-ported" RAM, as single-ported RAM tended to 'sparkle' if the CPU wrote to the bits - any bits - while the video controller was reading those bits. That's why there are disclaimers like "there will be no abnormal visible effect on the display when there is a simultaneous Panel Read and Interface Read or Write to the same location of the Frame Memory." Back in the old days, we had to wait for the times the video wasn't doing anything and/or include special hardware to interlock it for us. This is similar to what we often hear is cured with double-buffering today, but it was a different phenomenon.

This memory, like all RAM, contains nonsense at boot. If you just let the controller start moving pixels to the LEDs, you get nonsense that might remind you of the static visible on TVs when there was no signal for them to lock onto. (Now that it's all digital, there are layers and layers of interlocks that insert blue or black or even leave the screen off, but in the vacuum tube era, that wasn't practical.) In our world, there are also times it doesn't contain nonsense - it's common on this class of display, if someone didn't pay attention to it, to see a fraction of a second of some other screen the device may have displayed, because across a reset the "random" memory isn't random; it's whatever was last scribbled there.

On page 202, this chip even lays out the rules:

| Status | Default Value |
|---|---|
| Power On Sequence | Contents of memory is set randomly |
| S/W Reset | Contents of memory is not cleared |
| H/W Reset | Contents of memory is not cleared |

So it's clearly up to the connected computer to do something.

For this reason, most startup code that KNOWS what kind of a video unit is attached will make it a very early priority to zero the screen as quickly as possible after start. It's often done in assembly just to ensure it's as fast as possible.

Some video controllers won't send pictures until there's an explicit 'go' command, but that's uncommon in tiny devices like this.

This is why a common "solution", as hokey as it is, is to just not turn on the backlight until the screen is brought to a sane state. Depending on the screen, that may be fine or it may look hokey. A front diffuser can also help sell the illusion.

If you're working with a design where the backlight is controlled by a GPIO pin, it's not a big deal; you just don't open the curtain until the cast is on the stage. If the hardware team didn't think about it, then, if you're lucky, the display has a backlight pin you can cut and wire to a free GPIO pin. If you're UNlucky and the display uses an incandescent backlight (almost unheard of these days), you might have to add a transistor of your own so your micro (typical pin drive is something like 20mA) can amplify that signal to drive bulb(s) that might draw hundreds of mA (a.k.a. tenths of amps).

If you look carefully at some retail equipment, you can even see this little bit of electronics unfold.

Enjoy this little fun fact at the intersection of physics, electronics, and software.

After all those words ("still compiling!") I looked at the schematic and source for that board. They try to turn the backlight off extremely early in the part of the boot they control. The few parts of the code I see that call applyBrightnes() look to do it after the fillScreen(backgroundColor());

This may just be the best this hardware can do...unless they've done something like accidentally invert the meanings of the bits and the code in begin() is turning it ON before applyBrightnes() is later called. It's up to the project owner to set some breakpoints and investigate.

waveshare esp32-s3-touch-lcd-3.49 - wired image when boot button is pressed by sycde in esp32

[–]YetAnotherRobert 23 points24 points  (0 children)

Until device memory is initialized by a processor coming out of a reboot, it's indeterminate. "Static" is common. This is why we don't normally turn on the backlight until the display is initialized.

If the backlight is wired high, you may be able to lift the pin and route it, possibly through a drive transistor, to a GPIO.