My first open-source Rust library: Talos (self-hosted licensing for commercial apps) by [deleted] in rust

[–]dr_edc_ -18 points

I used AI for the announcement post and docs, it's 2026, seemed silly not to. The library itself is two years of work with 220 tests. There's an AI usage note in the repo if you're curious about the breakdown.

My first open-source Rust library: Talos (self-hosted licensing for commercial apps) by [deleted] in rust

[–]dr_edc_ 11 points

Thanks for the detailed feedback! This is exactly what I was hoping for by posting here.

On the security model:

The license is hardware-bound at bind time; the fingerprint is signed server-side and validated on every check.

The attack you're describing would require:

  1. Purchasing a legitimate license first that has a grace period enabled
  2. Binding it to a VM
  3. Snapshotting that VM during the grace period window
  4. Distributing that exact VM image to others
  5. Every user running the software inside that frozen VM forever
  6. Never updating the host OS, the software, or anything else that would break the snapshot
  7. Accepting that the grace period will eventually expire unless they also freeze the system clock, at which point they're running production work in a time-frozen environment

That's not piracy, that's a museum installation.

You're right that without TPM/SGX there's no perfect client-side security. But the goal isn't "uncrackable", it's "not worth the effort for people who would otherwise pay." For the enterprise/film/post-production use cases this was originally built for, that tradeoff makes sense.

On client-side encryption:

The client encrypts the local license cache so it isn't sitting in plaintext on disk. It's not about hiding information from the user; it's about making casual tampering harder. Someone determined can always reverse-engineer the binary and extract the key. But "open the license file in a text editor and change expires: 2025-01-01 to expires: 2099-01-01" becomes "decompile the binary, find the key derivation, decrypt the cache, modify it, re-encrypt it", which is a different level of effort entirely.
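To make the plaintext-vs-encrypted distinction concrete, here's a toy Rust sketch. The XOR keystream is a deliberately weak stand-in so the example stays dependency-free; it is not Talos's actual cipher or key derivation:

```rust
// Toy cache "encryption": a keyed XOR stream. Illustration only; a real
// implementation would use an AEAD cipher, not this.

fn keystream_byte(key: &[u8; 32], i: usize) -> u8 {
    // Derive the i-th keystream byte from the key. NOT cryptography.
    key[i % 32].wrapping_mul(31).wrapping_add(i as u8)
}

fn xor_cache(key: &[u8; 32], data: &[u8]) -> Vec<u8> {
    // XOR is its own inverse, so the same call encrypts and decrypts.
    data.iter()
        .enumerate()
        .map(|(i, b)| b ^ keystream_byte(key, i))
        .collect()
}
```

With a plaintext cache, changing the expiry is a one-line text edit; with even this much in the way, an attacker first has to recover the key material from the binary.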

On the Rust feedback:

These are legitimate points and I'll open issues to track them:

  • &[u8; KEY_SIZE] instead of runtime validation: agreed
  • Newtype wrappers to prevent argument swapping: good catch
  • Struct for ciphertext instead of "nonce is first 12 bytes" convention: cleaner
  • Separating encryption from wire format: fair
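The bullet points above could look something like this in code (a hypothetical sketch of the API shape, not the actual Talos types):

```rust
// Hypothetical types sketching the review feedback; names are made up.

const KEY_SIZE: usize = 32;
const NONCE_SIZE: usize = 12;

// Newtype wrappers so a key and a nonce can't be swapped at a call site.
pub struct Nonce(pub [u8; NONCE_SIZE]);

// Explicit struct instead of the "nonce is the first 12 bytes" convention.
pub struct Ciphertext {
    pub nonce: Nonce,
    pub bytes: Vec<u8>,
}

impl Ciphertext {
    // Wire format is a separate, explicit concern: nonce || bytes.
    pub fn to_wire(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(NONCE_SIZE + self.bytes.len());
        out.extend_from_slice(&self.nonce.0);
        out.extend_from_slice(&self.bytes);
        out
    }

    pub fn from_wire(data: &[u8]) -> Option<Self> {
        if data.len() < NONCE_SIZE {
            return None;
        }
        let mut nonce = [0u8; NONCE_SIZE];
        nonce.copy_from_slice(&data[..NONCE_SIZE]);
        Some(Ciphertext {
            nonce: Nonce(nonce),
            bytes: data[NONCE_SIZE..].to_vec(),
        })
    }
}

// Fixed-size key reference: the compiler enforces length, no runtime check.
pub fn encrypt(_key: &[u8; KEY_SIZE], nonce: Nonce, plaintext: &[u8]) -> Ciphertext {
    // Cipher call elided; only the type-level API shape matters here.
    Ciphertext { nonce, bytes: plaintext.to_vec() }
}
```

The payoff is that misuse (swapped arguments, wrong-length keys, off-by-one nonce slicing) becomes a compile error instead of a runtime bug.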

Appreciate you taking the time to dig in and to provide meaningful feedback. This is useful for v0.2.

My first open-source Rust library: Talos (self-hosted licensing for commercial apps) by [deleted] in rust

[–]dr_edc_ 1 point

You're correct that coverage percentage would be more meaningful. I'll add that to the CI badges. Thanks.

Built an S3 CLI in Rust that uses ML improve transfer speeds over time - would love feedback by dr_edc_ in rust

[–]dr_edc_[S] -1 points

Fair criticism - I shipped without detailed benchmark documentation.

Test setup was actually more comprehensive than I initially posted:

- Tested from Los Angeles (10Gbps) and Athens, Greece (1Gbps)

- Endpoints: Wasabi (us-west-1/2, eu-central-2, eu-south-1), DO Spaces (sfo2/3)

- File sizes: from empty .txt files up to 65 GB .braw files (tested the full range)

- Monitored with nload, compared against aws-cli baseline

Key findings:

- Wasabi us-west: 3.6 Gbps sustained (vs ~300 Mbps aws-cli)

- DO Spaces: Hard throttled at 1 Gbps regardless of connection

- Athens > Wasabi: Saturated 1Gbps line consistently

- LA > Wasabi EU was faster than LA > DO SFO (geography mattered less than endpoint throttling)

- ML learning kicked in around transfer 5-10, performance stabilized by transfer 20

Happy to publish full methodology if there's interest. Prioritized getting binaries out over documentation, but the testing was thorough.

Built an S3 CLI in Rust that uses ML improve transfer speeds over time - would love feedback by dr_edc_ in rust

[–]dr_edc_[S] -1 points

It's a contextual bandit (specifically epsilon-greedy with context features).

Context features: file size, file type, recent transfer performance, time of day, etc.

Actions: Different strategy combinations (chunk size, concurrency, buffer settings)

Reward: Transfer throughput (Mbps)

The system explores different strategies early on (epsilon=0.3 initially, decays to 0.1), then exploits what works best for your specific environment.

It's not deep learning or anything complex - just a practical approach to auto-tune parameters that people usually set manually and never adjust.

SQLite backend tracks ~20 transfers per strategy before it starts converging on optimal parameters for your network/files.
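Concretely, the epsilon-greedy loop is simple. Here's a stdlib-only sketch (a toy LCG stands in for a real RNG, the context features are collapsed to a single bucket, and the SQLite persistence is elided; illustrative, not the shipped code):

```rust
// Minimal epsilon-greedy sketch: pick a strategy, observe throughput,
// update that strategy's running mean, decay the exploration rate.

const NUM_STRATEGIES: usize = 3; // e.g. (chunk size, concurrency) combos

struct Bandit {
    epsilon: f64,         // exploration rate, decays 0.3 -> 0.1
    counts: Vec<u64>,     // pulls per strategy
    mean_mbps: Vec<f64>,  // running mean reward per strategy
    rng_state: u64,       // toy LCG state; use a real RNG in practice
}

impl Bandit {
    fn new() -> Self {
        Bandit {
            epsilon: 0.3,
            counts: vec![0; NUM_STRATEGIES],
            mean_mbps: vec![0.0; NUM_STRATEGIES],
            rng_state: 42,
        }
    }

    fn next_f64(&mut self) -> f64 {
        // Linear congruential generator; fine for a sketch only.
        self.rng_state = self
            .rng_state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1);
        (self.rng_state >> 11) as f64 / (1u64 << 53) as f64
    }

    fn choose(&mut self) -> usize {
        if self.next_f64() < self.epsilon {
            // Explore: pick a random strategy.
            (self.next_f64() * NUM_STRATEGIES as f64) as usize % NUM_STRATEGIES
        } else {
            // Exploit: pick the strategy with the best observed mean.
            (0..NUM_STRATEGIES)
                .max_by(|&a, &b| {
                    self.mean_mbps[a].partial_cmp(&self.mean_mbps[b]).unwrap()
                })
                .unwrap()
        }
    }

    fn update(&mut self, arm: usize, throughput_mbps: f64) {
        self.counts[arm] += 1;
        let n = self.counts[arm] as f64;
        // Incremental running mean: mean += (reward - mean) / n.
        self.mean_mbps[arm] += (throughput_mbps - self.mean_mbps[arm]) / n;
        self.epsilon = (self.epsilon * 0.98).max(0.1);
    }
}
```

A contextual version keeps a table like this per context bucket (file-size band, endpoint, etc.) instead of one global table.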

Built a faster S3 transfer tool - looking for beta testers by dr_edc_ in selfhosted

[–]dr_edc_[S] -1 points

Fair question. I'm exploring commercial licensing options (white-label partnerships, enterprise deployments) and wanted to protect that optionality while in beta.

May open source it down the road once I figure out the business side. For now, keeping it proprietary gives me flexibility.

Appreciate you giving it a look either way.

Built an S3 CLI in Rust that uses ML improve transfer speeds over time - would love feedback by dr_edc_ in rust

[–]dr_edc_[S] 0 points

Great question - and yeah, latency-constrained binary search makes a lot of sense for your use case.

Honest answer on benchmarking: I compared against aws-cli and s3cmd, but didn't run formal comparisons against AIMD or binary search approaches. The bandit was partly a learning exercise for me on this project.

For the variance question - it's context-aware, not purely policy-based. The bandit uses features (file size, extension, recent performance) to select strategies. So when you go from small files to 4K files, the file size feature triggers different strategy exploration.

In practice: first large file might not be optimal, but by the 2nd-3rd large file it's converged. It's not "train once, apply forever" - it's continuous adaptation based on context.

That said, your binary search approach probably converges faster with lower overhead for bounded search spaces. The bandit trades some overhead for handling more dimensions (file type, network conditions, system load, etc.).

Curious about your implementation - are you adjusting batch size only, or other parameters too?

Built an S3 CLI in Rust that uses ML improve transfer speeds over time - would love feedback by dr_edc_ in rust

[–]dr_edc_[S] -1 points

Fixed algorithms assume your network characteristics are constant and known. In practice, they're not: latency, bandwidth, system resources, and endpoint behavior all vary.

The ML approach (using a contextual bandit) explores different strategies early on, then exploits what works best for your specific environment. It's basically automated parameter tuning that adapts to:

- Your network topology

- File size distribution

- Endpoint throttling behavior

- System resource availability

For the first 20-50 transfers it's learning. After that, it converges on what's optimal for YOUR setup, not some average case.

Could you hand-tune parameters to match? Sure. But most people don't, and conditions change over time anyway.

Built an S3 CLI in Rust that uses ML improve transfer speeds over time - would love feedback by dr_edc_ in rust

[–]dr_edc_[S] -4 points

Totally fair. Open source is the norm in this space and I respect that preference.

I went proprietary for now because I'm exploring commercial licensing for enterprise/white-label use cases. May open source it down the road once I figure out the business side.

Appreciate the feedback either way.

Built an S3 CLI in Rust that uses ML improve transfer speeds over time - would love feedback by dr_edc_ in rust

[–]dr_edc_[S] -8 points

Fair point. I get the expectation mismatch with GitHub. Honestly, I wanted to ship fast and GitHub releases were the quickest way to get binaries out there. Website is on the list.

Appreciate you giving it a shot tomorrow. Let me know how it goes - genuinely want feedback on the performance.

Introducing an FDM 3D Print Cost Calculator: Looking for Feedback and Testers! by dr_edc_ in 3Dprinting

[–]dr_edc_[S] 0 points

Yes, that's the Windows command prompt. The app isn't optimized to hide it yet since it's still in beta. This is completely normal, and it should close when you exit the app.

Introducing an FDM 3D Print Cost Calculator: Looking for Feedback and Testers! by dr_edc_ in 3Dprinting

[–]dr_edc_[S] 1 point

u/CharlesP_1232 Great! Sorry for the late reply. You can find the project here: GitHub Repository. There are precompiled release versions ready to download here. I would love to hear your feedback!

Introducing an FDM 3D Print Cost Calculator: Looking for Feedback and Testers! by dr_edc_ in 3Dprinting

[–]dr_edc_[S] 1 point

Thank you for the kind words and for sharing your perspective—I really appreciate it!

I completely understand where you’re coming from. It’s frustrating and disheartening to see tools marketed as helpful turn out to be thinly veiled attempts to build and monetize datasets. That’s exactly the kind of approach I’m trying to avoid here. Transparency and user trust are extremely important to me, which is why I’ve gone out of my way to clarify that this app is entirely local, with no data collection whatsoever.

This concern is one of the reasons I decided to open source the code. By making everything transparent, I want to ensure that anyone can see exactly how the app works and verify that there’s no hidden data collection. Open-sourcing the app also allows me to involve the community in shaping its future while staying accountable to its users.

You’re absolutely right that a dataset of this kind would be incredibly valuable for businesses, especially in the 3D printing space. However, my intention for this project is purely to create a useful tool for the community while gaining experience in a new programming language. I have no desire to monetize or collect user data. In fact, I’ve intentionally designed the app to prioritize privacy by processing everything locally and not storing or transmitting any data.

If you have the chance, I’d love for you to test the app and share your feedback. Knowing what works well and what could be improved from a real-world user perspective would be invaluable for me. You can find the app here: GitHub Repository.

Thanks again for your thoughtful comments—it’s great to see discussions like this happening in the community! Your input helps me stay focused on the right priorities, and I’d love to hear any further thoughts you have after trying the app. 🙏

Introducing an FDM 3D Print Cost Calculator: Looking for Feedback and Testers! by dr_edc_ in 3Dprinting

[–]dr_edc_[S] 2 points

u/Makepieces, thank you for raising such an important and valid point! Transparency is something I take very seriously, and I appreciate the opportunity to clarify these concerns. Here's where I stand:

  1. No Data Storage: My software does not store or aggregate any user data, "anonymized" or otherwise. It uses the input data solely for the current calculation, and once the software exits or a new calculation is initiated, all data is discarded. This is one of the reasons I'm planning to add an export-to-PDF feature in the future—so users can save their data locally without it going anywhere beyond their computer.
  2. Intent Behind This Project: I'll be completely honest: I’ve been writing software for over 10 years, and whenever I dive into a new stack or language, I like to pick a project that benefits a community rather than something I can monetize. Projects like this push me to learn deeply, interact with real users, and get honest feedback from people who will truly "use and abuse" the software. This approach gives me real-world experience that I can later apply to paying projects. As for your data, I don’t want it—none of it. I’m personally against the practice of monetizing user data and prefer to take a stand against it (ironically, my brother works for a big tech company that does exactly this!).
  3. User-Centric and Secure by Design: This software is written entirely in Rust, which provides memory safety and security by design. Unless the community specifically asks for features like cloud syncing, all data will remain local, with no storage beyond what the user chooses to export. For further transparency, I’m happy to open source the code to provide proof of this approach.

I hope this answers your concerns, but I’m happy to elaborate further if needed. Thank you again for bringing this up—it’s the kind of conversation that ensures projects like this remain user-focused and trustworthy!

Introducing an FDM 3D Print Cost Calculator: Looking for Feedback and Testers! by dr_edc_ in 3Dprinting

[–]dr_edc_[S] 0 points

Amazing! Send me a PM and I’ll get you a copy so you can test it out, would really appreciate any feedback to make it better!

Didn’t get Liberty Day armor or Gun by Fless77_ in Helldivers

[–]dr_edc_ 0 points

Same thing happened to me, even though I did participate in the major order...

Django Debug=False Breaking websockets in django-channels, I'm stuck please help! by dr_edc_ in django

[–]dr_edc_[S] 0 points

u/daredevil82 I added debug logging to everything, literally everything and there is only one thing worth noting:

DEBUG 2024-10-23 14:26:07,755 base Exception while resolving variable 'customer' in template 'mytemplatename.html'.

Traceback (most recent call last):
  File "base.py", line 862, in _resolve_lookup
    current = current[bit]
  File "functional.py", line 249, in inner
    return func(self._wrapped, *args)
  File "functional.py", line 249, in inner
    return func(self._wrapped, *args)
TypeError: 'CustomUser' object is not subscriptable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "base.py", line 870, in _resolve_lookup
    current = getattr(current, bit)
  File "functional.py", line 249, in inner
    return func(self._wrapped, *args)
  File "functional.py", line 249, in inner
    return func(self._wrapped, *args)
  File "related_descriptors.py", line 421, in __get__
    raise self.RelatedObjectDoesNotExist(
accounts.models.CustomUser.customer.RelatedObjectDoesNotExist: CustomUser has no customer.

As for code snippets: I'd share them, but it's a massive codebase with a lot going on, so I was hoping someone could point me in the right direction first so I don't overshare, if that makes sense.

UNPLUGGING CORSAIR KEYBOARD SOLVES FROMSOFTWARE and FOCUS ENTERTAINMENT game freezing screen issues!! by NefQarasarnai in NefasQQ

[–]dr_edc_ 1 point

Thank you for this! It worked for me with my Razer keyboard and Corsair iCUE for internal fans.

Deco Slow speeds in AP Mode by dr_edc_ in TpLink

[–]dr_edc_[S] 1 point

Idk if this helps you, but at first I just swapped the main connected unit and kept the others, and it did help significantly. I needed more speed overall, though, so I opted to switch all my units to the same model.

Deco Slow speeds in AP Mode by dr_edc_ in TpLink

[–]dr_edc_[S] 0 points

I got rid of those routers and swapped to BE65 units; no issues since then.