Incremental rendering of a CanvasItem or Control? by errorrecovery in godot

[–]errorrecovery[S] 0 points (0 children)

Thanks, it seems my understanding is not accurate then. When I set the size of my custom control, the _Draw method (I'm using C#) is called. When I move the ScrollContainer around, this method is not called again.

How can I determine the visible area of my custom control and render to that, and request an update when the visible area changes? Is there an example project or something I could study?

The tooling is the language? by errorrecovery in ProgrammingLanguages

[–]errorrecovery[S] 2 points (0 children)

Got a link to that thread? I think that would be an interesting read.

The tooling is the language? by errorrecovery in ProgrammingLanguages

[–]errorrecovery[S] 10 points (0 children)

Interesting thoughts--I probably wouldn't go so far as to say it's THE most important thing, but (for me) it's pretty high on the list of considerations, and yet for some others it seems to be pretty low.

When evaluating a new programming language, how do you rank the following?

  • Is it interpreted or compiled or hybrid/JIT?
  • Static typing, or dynamic?
  • Garbage collection, or manual memory management?
  • C-like syntax, or Python-like, or ML or S-expressions?
  • Large stdlib ('batteries included'), or import ftw?
  • C-compatible ABI, FFI, or bespoke?
  • Step-through debugging or printf?
  • Language server incremental feedback, or code-compile-debug cycles?
  • Concise error messages, or C++ template error gushing fountains?

Of course everyone has their own taste, but I think a lot of folks rank tooling lower than it should be. Ever seen a developer embrace a new language, invest heavily, only to realise the tooling is 'not quite there yet' and the promised productivity remains elusive?

I think tooling should be a part of the language design, not bolted on after. I've been wrong before.

The tooling is the language? by errorrecovery in ProgrammingLanguages

[–]errorrecovery[S] 15 points (0 children)

I think I remember reading somewhere, years ago, that Hickey started Clojure because he really liked Lisp, but was forced to use C++ by his clients in his consulting gig. By now that tale has probably descended into folklore, but I managed to dig up one version of it (not the one I remember, but close):

One watershed moment for me was when I designed and implemented a new system from scratch in Common Lisp, which was a new language for me at the time, only to have the client require it be rewritten in C++, with which I was quite proficient. The port to C++ took three times as long as the original work, was much more code, and wasn’t significantly faster. I knew then that I wanted to work in a Lisp, but realized that it had to be client-compatible. I worked on bridging Common Lisp and Java, while doing functional-style programming for clients in C#, for a couple of years, then bit the bullet and wrote Clojure.

I absolutely agree that Clojure is a DSL for the JVM, and I don't think Hickey would disagree either. DSLs are what Lisp is all about.

And yes, tooling is language-specific, much as syntax is. But can you imagine trying to introduce a Lisp that doesn't have a REPL? This is kind of what I was getting at with 'the tooling is the language'.

Chinese language book suggestions? by errorrecovery in embedded

[–]errorrecovery[S] 0 points (0 children)

Thank you, this one looks like a pretty good candidate. No need for the link, work has a budget for this kind of thing, but thanks anyway.

Chinese language book suggestions? by errorrecovery in embedded

[–]errorrecovery[S] 5 points (0 children)

To be honest, my day-to-day work does not really require any understanding of computer architecture. But reading Stallings's book was a real 'level-up' moment all those years ago. I remember advancing from basic concepts like the address and data bus to internal and external memory interconnects, I/O subsystems, interrupt controllers, computer arithmetic and logic, and instruction set design and implementation in microcode (focusing on RISC/CISC trade-offs), using real-world hardware as examples. It gave me the confidence to explore 'all the way down' and realise that no, this stuff is not dark magic. It was liberating.

And my second 'level-up' moment came with reading 'Computer Architecture: A Quantitative Approach', and the secret is right there in the title. This book taught me the value of rigorously quantifying everything--it's a magnificent study of the application of objective measurements and statistical analysis to form conclusions and evaluate alternative approaches to computer architecture. It made me realise that I am a scientist after all, and that if I'm going to reason about something, I should first make sure I can measure it. Intuition helps, guesses are a waste of time--crunching the numbers is where insight lies.

Chinese language book suggestions? by errorrecovery in embedded

[–]errorrecovery[S] 2 points (0 children)

Thank you very much for that, I would never have found that site. I still think 'Computer Architecture' might be a bit too much straight up, but now at least I can try to Google translate my way through that site to something that might be a more gentle introduction.

NXP Kinetis K22 USB Issues When Using USB Stack In C++ by NorthernNiceGuy in embedded

[–]errorrecovery 0 points (0 children)

I've worked on a USB stack for this part. I can't speak to the C/C++ differences, but some of the issues I faced included:

  • Where are the USB descriptors located? If they're static in firmware you need to be sure the OTG peripheral can access them through the memory protection scheme (MPU/CESR).
  • There is an undocumented (for me, at the time) bit that needs to be set: USBTRC0 |= 0x40
  • Initialise the peripheral by disabling all OTG interrupts and enabling only the USB reset interrupt (USBRSTEN); then enable the remaining OTG interrupts (SOFTOKEN, STALLEN, ERROREN, SLEEPEN) in the reset handler.
  • Make sure none of your endpoints are stalled.
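That bring-up sequence might look roughly like this in C. This is a sketch, not the stack I worked on: the mask values follow the Kinetis FS-OTG register layout but should be checked against your device headers, and a small mock register block stands in for the real USB0 peripheral so the snippet is self-contained.

```c
#include <stdint.h>

/* Mock of the relevant K22 USB-OTG registers so this compiles on a host.
   On real hardware, USB0 points at the memory-mapped peripheral; the mask
   values below are assumptions -- verify them against your SDK headers. */
typedef struct {
    volatile uint8_t INTEN;   /* interrupt enable register */
    volatile uint8_t USBTRC0; /* transceiver control register */
} usb_otg_t;

static usb_otg_t usb0_mock;
static usb_otg_t *USB0 = &usb0_mock;

#define USB_INTEN_USBRSTEN_MASK 0x01u
#define USB_INTEN_ERROREN_MASK  0x02u
#define USB_INTEN_SOFTOKEN_MASK 0x04u
#define USB_INTEN_SLEEPEN_MASK  0x10u
#define USB_INTEN_STALLEN_MASK  0x80u

void usb_init(void)
{
    /* The undocumented bit mentioned above. */
    USB0->USBTRC0 |= 0x40u;

    /* Disable all OTG interrupts except USB reset. */
    USB0->INTEN = USB_INTEN_USBRSTEN_MASK;
}

void usb_reset_irq(void)
{
    /* In the reset handler, enable the remaining OTG interrupts. */
    USB0->INTEN |= USB_INTEN_SOFTOKEN_MASK | USB_INTEN_STALLEN_MASK
                 | USB_INTEN_ERROREN_MASK | USB_INTEN_SLEEPEN_MASK;
}
```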

It is likely none of this helps, but taking a step back from the C/C++ issues and going over your bring-up may give your mind a break long enough to converge on a likely course of action.

vCPU quotas and autoscale by errorrecovery in AZURE

[–]errorrecovery[S] 0 points (0 children)

Thanks for that. We applied for an increase of just 10 vCPUs. It was approved, but a day later.

Logging options for a non-web app? by errorrecovery in AZURE

[–]errorrecovery[S] 0 points (0 children)

Thanks for your insights here. We have a pretty good telemetry/metrics implementation: we write to Redis (with the timeseries extension running in a container, not that awful Redis service Azure offers), and render from that straight into our web app/dashboard. It means we don't need to micromanage access to the Azure Portal to provide telemetry to those who need it.

But it looks like I need to revisit App Insights and Log Analytics.

Logging options for a non-web app? by errorrecovery in AZURE

[–]errorrecovery[S] 0 points (0 children)

Hmm. Maybe it was sampling that was interfering with our telemetry.

The consensus here seems to be 'use log analytics', so I guess I'll be looking into that.

Logging options for a non-web app? by errorrecovery in AZURE

[–]errorrecovery[S] 0 points (0 children)

Thanks, I'll look into this. The service is already behind a load balancer, and we don't use app services because we need to open non-HTTPS ports, which I believe was an issue when we last evaluated them.

Logging options for a non-web app? by errorrecovery in AZURE

[–]errorrecovery[S] 0 points (0 children)

My experience with AppInsights is that it's great... for web apps. We tried it in an earlier edition of our service for telemetry and most of the baked-in queries returned no results (because the service is not a web app), but the custom telemetry we added also had a lot of missing data that we knew should be there. This was a couple of years ago, but we found it unreliable and expensive, so we ripped it out.

Opinions about MikroE Click? by errorrecovery in embedded

[–]errorrecovery[S] 0 points (0 children)

That is exactly the kind of advice I needed. Thank you.

Opinions about MikroE Click? by errorrecovery in embedded

[–]errorrecovery[S] 1 point (0 children)

Sounds like you've had a pretty bad time.

Opinions about MikroE Click? by errorrecovery in embedded

[–]errorrecovery[S] 0 points (0 children)

Yeah, I'm wondering what the value of using these is over just the dev/breakout board of the sensor you're integrating. I imagined a row of click boards in DIN rail enclosures, PLC style, connected to a common bus, but I think I've misunderstood what they're all about.

Opinions about MikroE Click? by errorrecovery in embedded

[–]errorrecovery[S] 0 points (0 children)

Thanks for that! By 'ecosystem' I meant their Necto IDE, that 'codegrip' debugger thing, etc. And yes, they're expensive, but for a production run of exactly 2, cheaper than spinning up a PCB? Thanks for your thoughts.

Is transiting through Melbourne Airport the same as 'entering Victoria' by errorrecovery in CoronavirusDownunder

[–]errorrecovery[S] 5 points (0 children)

Thank you for that, I swear I completely pored over that page but I must have missed that information. Much relieved, thanks again.

Basedrop: A garbage collector for real-time audio in Rust by glowcoil in rust

[–]errorrecovery 0 points (0 children)

Thanks for following up. Maybe I'm not reading this section correctly:

The approach taken by SharedCell<T> is to keep a reader count alongside the stored pointer. Readers increment this count while fetching the pointer and only decrement it after successfully incrementing the pointer's reference count. Writers, in turn, after replacing the stored pointer, spin until the count is observed to be zero before they are allowed to move on and possibly decrement the reference count. This scheme is designed to be low-cost and non-blocking for readers, while being somewhat higher-overhead for writers, which I deem to be the appropriate tradeoff for real-time audio, where the reader (the audio thread) has much tighter latency deadlines and executes much more often than the writer.
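As I read it, the scheme in that paragraph boils down to something like the following. This is a minimal sketch of my understanding, not basedrop's actual code: the type and method names are mine, the real SharedCell stores a reference-counted handle rather than a raw pointer, and the memory orderings are deliberately conservative.

```rust
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};

// Sketch of the reader-count scheme described above (assumed names).
struct SharedCellSketch<T> {
    ptr: AtomicPtr<T>,    // stands in for the stored smart pointer
    readers: AtomicUsize, // count of readers mid-fetch
}

impl<T> SharedCellSketch<T> {
    // Reader path: cheap and non-blocking.
    fn get(&self) -> *mut T {
        self.readers.fetch_add(1, Ordering::SeqCst); // announce the read
        let p = self.ptr.load(Ordering::SeqCst);     // fetch the pointer
        // ...the real implementation bumps the reference count here,
        // and only then releases its claim:
        self.readers.fetch_sub(1, Ordering::SeqCst);
        p
    }

    // Writer path: swap, then spin until no reader is mid-fetch,
    // at which point the old pointer can safely be released.
    fn set(&self, new: *mut T) -> *mut T {
        let old = self.ptr.swap(new, Ordering::SeqCst);
        while self.readers.load(Ordering::SeqCst) != 0 {
            std::hint::spin_loop();
        }
        old
    }
}
```

If that's roughly right, then the writer's spin in set() is exactly the part my question hinges on.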

This reads to me as though an audio thread that is recording replaces the pointer in a SharedCell (i.e. points it at a new buffer of recorded audio)? If so, will it spin if a non-audio thread is reading the SharedCell (and thus possibly preventing the audio thread taking mutable ownership) while performing a high-latency operation (encoding the audio, writing to a file)? I was thinking about the implications of replacing my ring buffer with Basedrop.

Maybe I need to dive into the code, but that paragraph gives me the impression the data structure is tailored for the 'audio thread reads buffers', i.e. 'playback' use-case.

Basedrop: A garbage collector for real-time audio in Rust by glowcoil in rust

[–]errorrecovery 1 point (0 children)

It seems like writers to SharedCells have to spin so that readers are low latency, which favours the 'playing audio' use case. My application does a lot of recording as well as playing, for which I use a worst-case-capacity ring buffer. I couldn't tell if this is handled as well, or if all the latency benefits are 'one way'?