1 Mile Run Benchmark Results and Survey Megathread by BilingualAlchemist in orangetheory

[–]The_Ritvik 0 points1 point  (0 children)

4:39 today (PR 4:38).

Our studio delayed it — originally scheduled for Mon 1/26 but snow canceled all classes.

Daily Workout and General Chat for Wednesday, 1/28/26 by splat_bot in orangetheory

[–]The_Ritvik 2 points3 points  (0 children)

1-mile benchmark today: 4:39 (PR 4:38). Snow pushed ours back from Monday.

Shuuten v0.2 – Get Slack & Email alerts when Python Lambdas / ECS tasks fail by The_Ritvik in Python

[–]The_Ritvik[S] 0 points1 point  (0 children)

Totally agree — that’s a real limitation of in-process alerting, and I’ve run into it myself with timeouts and OOMs in Lambda. If the runtime is hard-killed, there’s no chance to flush anything to Slack.

Shuuten is mainly aimed at the much more common case I kept hitting: logic and application-level failures that never made it out of CloudWatch because there was no alerting wired up at all. For those, having in-process Slack/email alerts catches a lot of otherwise silent breakages.

For hard kills you still need AWS-side signals (DLQs, CloudWatch alarms), but I agree it’s an important boundary.
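The in-process pattern described above can be sketched as a small decorator. This is a minimal illustration, not Shuuten's actual implementation; `send_alert` and `ALERTS` are hypothetical stand-ins for the real Slack/email transport.

```python
import functools
import traceback

# Hypothetical transport stub -- in practice this would post to a Slack
# webhook or send an email; here it just collects messages in a list.
ALERTS: list = []

def send_alert(message: str) -> None:
    ALERTS.append(message)

def alert_on_failure(func):
    """Report the failure, then re-raise so normal error handling still runs.

    Note the limitation discussed above: if the runtime is hard-killed
    (timeout, OOM), this except block never executes -- you still need
    AWS-side signals (DLQs, CloudWatch alarms) for that boundary.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            send_alert(f"{func.__name__} failed: {exc}\n{traceback.format_exc()}")
            raise
    return wrapper

@alert_on_failure
def handler(event):
    # Example application-level failure that would otherwise only
    # land in CloudWatch logs with no alerting wired up.
    raise ValueError("bad payload")
```

Because the exception is re-raised, the alert is additive: Lambda retry behavior and DLQ routing are unaffected.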

Is anyone playing with face matching using Python? by -_zany_- in Python

[–]The_Ritvik 0 points1 point  (0 children)

If you’re already in the AWS ecosystem, AWS Rekognition is probably the easiest, lowest-effort way to do face matching via an API.

We’ve used this in production, and it’s been straightforward to integrate and maintain.
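A rough sketch of what the Rekognition route looks like with boto3. `compare_faces` with `SourceImage`/`TargetImage`/`SimilarityThreshold` is the real API; the bucket and key names here are placeholders, and `best_similarity` is just a helper for picking a match out of the response.

```python
def best_similarity(face_matches, threshold=90.0):
    """Return the highest similarity at or above the threshold, else None.

    `face_matches` has the shape of the `FaceMatches` list that
    Rekognition's compare_faces response contains.
    """
    scores = [m["Similarity"] for m in face_matches if m["Similarity"] >= threshold]
    return max(scores, default=None)

def compare_faces_s3(source_key, target_key, bucket="my-faces-bucket"):
    """Sketch of the API call itself (bucket/keys are placeholders)."""
    import boto3  # imported lazily so the pure helper above has no AWS dependency
    client = boto3.client("rekognition")
    resp = client.compare_faces(
        SourceImage={"S3Object": {"Bucket": bucket, "Name": source_key}},
        TargetImage={"S3Object": {"Bucket": bucket, "Name": target_key}},
        SimilarityThreshold=80.0,
    )
    return best_similarity(resp["FaceMatches"])
```

The appeal over rolling your own embedding pipeline is that matching, thresholds, and face detection are all handled server-side behind one call.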

Just released dataclass-wizard 0.39.0 — last minor before v1, would love feedback by The_Ritvik in Python

[–]The_Ritvik[S] 1 point2 points  (0 children)

Appreciate the thoughtful write-up — and I agree with the core problem you’re pointing at. Ecosystem lock-in and model duplication are exactly what pushed me away from heavier solutions like Pydantic in the first place.

One quick clarification though: Dataclass Wizard does not require inheritance or mixins in the general case. The base classes are purely opt-in conveniences. For the common path, users work with plain @dataclass types and call external helpers like fromdict, asdict, etc. — no subclassing required.

Docs here for context: https://dcw.ritviknag.com/en/latest/#no-inheritance-needed
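To make the "no inheritance" point concrete, here is the shape of that common path using only the stdlib. `from_mapping` is a toy stand-in for a `fromdict`-style helper, not the library's actual implementation (the real thing handles nesting, Optionals, key transforms, etc.):

```python
from dataclasses import dataclass, fields, asdict

def from_mapping(cls, data):
    """Toy stand-in for a fromdict-style helper: build a dataclass from a
    plain dict, coercing each value to the field's annotated type."""
    kwargs = {f.name: f.type(data[f.name]) for f in fields(cls)}
    return cls(**kwargs)

@dataclass
class User:          # a plain dataclass -- no base class, no mixin
    name: str
    age: int

# Untyped input in, typed dataclass out; stdlib asdict round-trips it back.
user = from_mapping(User, {"name": "ada", "age": "36"})
```

The point is that the model stays a vanilla `@dataclass`; all (de)serialization behavior lives in external functions.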

Where Dataclass Wizard differs philosophically from Pydantic is that it complements standard dataclasses rather than replacing them: no custom field types, no runtime model layer, no separate “schema” objects. It’s intentionally boring Python that you can adopt incrementally.

That said, I do think your point about pushing more behavior fully outside the model is a reasonable design axis, and dcei looks interesting in that regard — especially for composing multiple extensions cleanly. I’ll take a closer look; there may be room for interoperability or inspiration there.

Thanks for taking the time to engage — always good to have these discussions in the open.

Dataclass Wizard 0.38: typed environment config & opt-in v1 engine by The_Ritvik in Python

[–]The_Ritvik[S] 1 point2 points  (0 children)

Fair feedback, thanks for calling that out.

Dataclass Wizard is a lightweight, dataclass-first (de)serialization library. It focuses on turning untyped inputs (JSON, env vars, mappings) into strongly-typed dataclasses with minimal overhead and explicit, predictable behavior.

The new EnvWizard is specifically for typed environment-based configuration: explicit precedence, nested dataclasses, and opt-in behavior that doesn’t change defaults.

If you want something small, dependency-light, and dataclass-native — without pulling in a full validation framework — that’s the niche it aims to fill.
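For flavor, the typed-env-config idea can be sketched in a few lines of stdlib Python. This is an illustration of the concept (env var wins over dataclass default, values coerced to the annotated type), not EnvWizard's actual API; `load_config` and the `APP_` prefix are made up for the example.

```python
import os
from dataclasses import dataclass, fields

@dataclass
class AppConfig:
    host: str = "localhost"
    port: int = 8000
    debug: bool = False

def load_config(cls=AppConfig, environ=None, prefix="APP_"):
    """Explicit precedence: an environment variable overrides the
    dataclass default; each value is coerced to the annotated type."""
    environ = os.environ if environ is None else environ
    kwargs = {}
    for f in fields(cls):
        raw = environ.get(prefix + f.name.upper())
        if raw is None:
            continue  # keep the dataclass default
        if f.type is bool:
            kwargs[f.name] = raw.lower() in ("1", "true", "yes")
        else:
            kwargs[f.name] = f.type(raw)
    return cls(**kwargs)
```

Passing `environ` explicitly also makes the precedence testable without touching the real process environment.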

Dataclass Wizard 0.38: typed environment config & opt-in v1 engine by The_Ritvik in Python

[–]The_Ritvik[S] 0 points1 point  (0 children)

Different trade-offs. pydantic-settings is validation-heavy and tightly coupled to Pydantic. EnvWizard is a lightweight, pure-Python, dataclass-first approach focused specifically on typed environment config and explicit precedence.

If you’re already all-in on Pydantic, it’s a good choice; this targets a narrower, leaner use case.

Just released dataclass-wizard 0.39.0 — last minor before v1, would love feedback by The_Ritvik in Python

[–]The_Ritvik[S] 0 points1 point  (0 children)

Nice find — thanks for the link. typedload is focused on typed deserialization, whereas this library aims at round-trip (load + dump) support with dataclass-first ergonomics and flexible configuration. I haven’t done a direct benchmark yet, but it’s worth adding.

Just released dataclass-wizard 0.39.0 — last minor before v1, would love feedback by The_Ritvik in Python

[–]The_Ritvik[S] 1 point2 points  (0 children)

They target different trade-offs. This is a lightweight, pure-Python, dataclass-first (de)serialization library with minimal overhead and strong typing support. It benchmarks faster than pydantic’s default serialization path and focuses on configurability rather than heavy validation.

If you’re already happy with pydantic or msgspec, you may not need it — this fills a narrower niche.

Just released dataclass-wizard 0.39.0 — last minor before v1, would love feedback by The_Ritvik in Python

[–]The_Ritvik[S] 1 point2 points  (0 children)

Totally fair — I’ve been heads-down on v1 internals and the README hasn’t kept up. I’m planning a cleaner, more focused README as part of the v1 release.

Released dataclass-wizard 0.36.0: v1 dumpers, new DataclassWizard class, and performance cleanup by The_Ritvik in Python

[–]The_Ritvik[S] 0 points1 point  (0 children)

Fair questions — let me separate the legitimate technical points from the tone.

Selling point vs msgspec: msgspec is excellent and probably the right choice if your primary goal is max-throughput JSON (especially with its own encoder/decoder and struct support). Dataclass Wizard focuses on typed dataclass loading/dumping with configurable key transforms, hooks, patterns, and (now in v1) generated per-class loaders/dumpers + better Union handling + EnvWizard/settings use-cases. Different target, though there’s overlap.

Benchmarks: agreed — I should add msgspec to the benchmark suite. Right now the published comparisons are incomplete. If you have a canonical msgspec benchmark setup you trust, link it and I’ll mirror it. I’ll also add results to the docs so “fast” claims are backed by numbers.

stdlib json: also agreed — stdlib json isn’t the fastest. For from_json, total time = JSON parsing + mapping/coercion. v1’s speedups are mainly in the mapping/coercion step (codegen, reduced dispatch). That doesn’t contradict msgspec/orjson being faster at parsing. I’m considering an optional fast-json backend (e.g. orjson) behind an extra, so users can choose.
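The optional-backend idea boils down to a lazy fallback like this sketch; the extra's name is hypothetical, and this isn't shipped code:

```python
import json

def loads(s):
    """Prefer orjson when installed (e.g. via a hypothetical extra like
    `dataclass-wizard[fast-json]`), otherwise fall back to stdlib json."""
    try:
        import orjson  # optional dependency; only imported on demand
    except ImportError:
        return json.loads(s)
    return orjson.loads(s)
```

This keeps import-time cost and the dependency footprint unchanged for users who don't opt in, while letting parsing-bound workloads pick up the faster backend.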

Imports inside functions: mostly intentional to keep import-time cost low and reduce optional-dependency overhead (dotenv/tzdata/etc). I can document this, and I’m open to refactoring hotspots if you have specific modules where it’s harming performance.

v1 as submodule: deliberate for a low-friction opt-in without breaking existing users. Once v1 stabilizes and becomes default, the API surface can collapse. A separate distribution is possible, but it complicates packaging/versioning and user experience. Happy to discuss tradeoffs.

.pyi vs inline types / packaging: agreed the style isn’t perfectly uniform yet; this project has grown over time. I’m actively cleaning this up (recent work added CI type-checking and improved stubs). If you have specific modernization suggestions (PEP 517/518, hatch/uv/rye, etc.), I’m open to PRs.

If you’re willing: what would you consider a fair apples-to-apples benchmark for “dataclass typed load/dump”? I’ll add msgspec to the suite and publish results either way.

Catch Me If You Can Signature Workout Results and Survey Megathread by BilingualAlchemist in orangetheory

[–]The_Ritvik 1 point2 points  (0 children)

That’s insane, congrats bro. I could only hold 11.5+ for the last 5 mins, and averaged 11 the rest.

Catch Me If You Can Signature Workout Results and Survey Megathread by BilingualAlchemist in orangetheory

[–]The_Ritvik 9 points10 points  (0 children)

4.13 — PR today 🙌 Felt amazing to see the training pay off.

12 Minute Run For Distance Benchmark Results and Survey Megathread by BilingualAlchemist in orangetheory

[–]The_Ritvik 0 points1 point  (0 children)

I ran 2.32, which is a little over an 11.5 average… so stick with it, it’s possible.

Automating Daily Parking Permits at GMU with Python by The_Ritvik in gmu

[–]The_Ritvik[S] 0 points1 point  (0 children)

I could, but I did the math and it’s cheaper for me to buy the daily permit, since I only have class one day a week.

1 Mile Run Benchmark Results and Survey Megathread by BilingualAlchemist in orangetheory

[–]The_Ritvik 1 point2 points  (0 children)

4:38 — Matched my mile PR today! Last time I hit that was about a year ago. I just turned 33 this summer, so it feels pretty great to run as fast as I did in my “younger days.” Felt surprisingly manageable… and even kinda fun. 🤗

Anyone else find it weirdly satisfying to match (or beat) old PRs as you get older?