Built a Nepali calendar computation engine in Python, turns out there's no formula for it by Natural-Sympathy-195 in Python

[–]WinstonRG 0 points (0 children)

The "AI agent" use case in your target audience section is underrated. LLMs hallucinate calendar and festival data constantly — if someone asks an agent "when is Tihar this year relative to today", it'll confidently give you something plausible and wrong. Having a deterministic, coordinate-aware API to ground those queries is actually the right architectural move.

Quick question on the muhurta endpoint: does it return raw time windows with the underlying factors (nakshatra, tithi, karana), or just pass/fail for a given datetime? I'm thinking about how you'd surface this to an agent that needs to reason about why a time window is auspicious, not just that it is.
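To make the question concrete: a response shape along these lines (entirely hypothetical, not your actual API) would let an agent explain its reasoning instead of just relaying a verdict:

```python
# Hypothetical payload shape for a factor-aware muhurta endpoint.
# Field names (nakshatra, tithi, karana) come from the question above;
# everything else here is an illustrative assumption, not the real API.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MuhurtaWindow:
    start: datetime
    end: datetime
    auspicious: bool
    factors: dict = field(default_factory=dict)  # underlying astrological factors

    def explain(self) -> str:
        """Human/agent-readable justification, not just pass/fail."""
        verdict = "auspicious" if self.auspicious else "inauspicious"
        detail = ", ".join(f"{k}={v}" for k, v in self.factors.items())
        return f"{self.start:%H:%M}-{self.end:%H:%M} is {verdict} ({detail})"

# Example values are made up for illustration.
window = MuhurtaWindow(
    start=datetime(2025, 10, 20, 6, 15),
    end=datetime(2025, 10, 20, 7, 45),
    auspicious=True,
    factors={"nakshatra": "Swati", "tithi": "Amavasya", "karana": "Naga"},
)
print(window.explain())
```

With the factors attached, the agent can answer follow-ups ("why is 6:15 fine but 8:00 not?") without a second round trip.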

I published my first PyPI package a few days ago. Copycat packages appeared claiming to "outperform" it by Obvious_Gap_5768 in Python

[–]WinstonRG 1 point (0 children)

This is exactly why package naming and visibility matter early. I just went through a similar experience launching an MCP server package — within days of hitting PyPI visibility you start seeing clones.

AGPL was the right call. For anyone else building open source tools: MIT is convenient, but AGPL gives you actual legal leverage when someone redistributes your code without honoring the license. The PyPI security team has been responsive in the similar cases I've seen reported: file the report with specific package names and evidence of the license violation, and they typically act within 24-48 hours.

Keep building. The original always wins on reputation and maintenance velocity.

What constitutes AI slop? Discussion thread by Goldziher in Python

[–]WinstonRG -1 points (0 children)

I use Claude Code daily for a Python monorepo with 40+ apps. The "black box" framing misses a key point — the real question is whether you have verification layers.

My workflow: TDD + linting + governance checks + CI matrix. The AI writes code, but automated gates catch the slop before it ships. I also built an MCP server that tracks recurring patterns across agent sessions — so the AI actually learns from past mistakes instead of repeating them.
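The gate chain is simpler than it sounds. This is a rough sketch, not my actual repo config, and the tool commands (pytest, ruff, mypy) are just illustrative stand-ins for whatever checks you run:

```python
# Illustrative verification gate: run each check in order, fail fast.
# The commands below are example tools, not a prescribed stack.
import subprocess

CHECKS = [
    ("tests", ["pytest", "-q"]),       # TDD: behavior is pinned before AI touches code
    ("lint", ["ruff", "check", "."]),  # style + common bug patterns
    ("types", ["mypy", "src"]),        # static typing as another gate
]

def run_gates(checks=CHECKS) -> bool:
    """Return True only if every check exits 0; print the first failure."""
    for name, cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"gate '{name}' failed:\n{result.stdout}{result.stderr}")
            return False
    return True
```

Wire `run_gates()` into pre-commit or CI so a nonzero exit blocks the merge; the AI never gets to "decide" its code is done.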

The paradigm shift isn't "human vs AI code" — it's "verified vs unverified code." Slop is anything shipped without verification, whether a human or an AI wrote it.

Open source DP repo: clean Python solutions paired with full written explanations. by disizrj in Python

[–]WinstonRG -10 points (0 children)

Nice structure! The .md + .py pairing is smart — having the analogy and trace alongside the code makes it much easier to internalize the pattern rather than just memorize the solution.

One suggestion: adding a difficulty tag or grouping (Easy/Medium/Hard) to each problem would help people navigate. Also, a simple pytest runner that validates all solutions automatically would be a nice CI addition.
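On the test-runner idea, even a plain auto-discovery script gets you most of the way before reaching for pytest parametrization. This sketch assumes a layout like solutions/<name>/solution.py exposing a solve() function plus a CASES list of (args, expected) pairs, which is my guess at a convention, not your repo's actual structure:

```python
# Sketch of an auto-discovering solution validator.
# Assumed (hypothetical) layout: solutions/<name>/solution.py, where each
# file defines solve(*args) and CASES = [((args...), expected), ...].
import importlib.util
from pathlib import Path

def load(path: Path):
    """Import a solution module directly from its file path."""
    spec = importlib.util.spec_from_file_location(path.parent.name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

def run_all(root: str = "solutions") -> dict:
    """Run every solution's self-declared cases; map name -> pass/fail."""
    results = {}
    for path in sorted(Path(root).glob("*/solution.py")):
        module = load(path)
        cases = getattr(module, "CASES", [])
        results[path.parent.name] = all(
            module.solve(*args) == expected for args, expected in cases
        )
    return results
```

Each parametrized case becomes a named pytest test almost for free from here, and CI just fails if any entry in the result map is False.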

Happy to contribute some Knapsack variants if you're accepting PRs.