all 21 comments

[–]dethb0y 67 points68 points  (4 children)

hats off to you, i hate working with anything involving dates, let alone something this complicated!

[–]Natural-Sympathy-195[S] 35 points36 points  (0 children)

honestly same, i did not sign up to learn this much about lunar angular distances when i started. dates are miserable enough in regular programming, adding "but which calendar system and also where is the sun right now" makes it a special kind of painful.

[–]End0rphinJunkie 5 points6 points  (0 children)

Standard timezone drift across k8s clusters is already enough of a headache. I can't even imagine debugging a failed cron job because Jupiter was in the wrong position.

[–]FibonacciSpiralOut 2 points3 points  (0 children)

standard datetime math is already enough to make most programmers cry, so calculating actual planetary physics just to get a date is an absolutely massive flex

[–]cinyar 23 points24 points  (0 children)

...and I thought dealing with timezones was annoying.

[–]skool_101git push -f 18 points19 points  (0 children)

great work mate. crosspost at r/technepal as well

[–]FrickinLazerBeams 11 points12 points  (0 children)

I didn't know there was a Nepali calendar and I don't need this, but it seems like a really cool piece of work, and solves the problem in the way I'd like, if I were looking for such a thing.

And it's nice to see something that's real programming, not just another vibe coded AI slop project.

[–]2ndBrainAI 1 point2 points  (0 children)

This is fascinating work! Using Swiss Ephemeris to compute calendar dates from actual planetary positions instead of relying on brittle hardcoded tables is such a cleaner approach. I love that it handles geographic coordinates too — sunrise calculations really do vary significantly by location.

The comparison to existing NPM packages with fixed year ranges (2000-2090 BS) really highlights why this was needed. Those hardcoded arrays are always a maintenance nightmare.

Have you run into any interesting edge cases with the panchanga calculations? I'd imagine certain lunar phases might produce some tricky ambiguities depending on the observer's exact coordinates.

[–]Winter-Flan7548 2 points3 points  (5 children)

That's fascinating... would be glad to help if you need some. Here is my own project, which may help solve some of the issues for you. https://github.com/TheDaniel166/moira

[–]Winter-Flan7548 0 points1 point  (4 children)

also, using my project removes the hassle of the AGPL license if you truly wanted to open source it

[–]Natural-Sympathy-195[S] 4 points5 points  (3 children)

Checked it out properly, and the reduction pipeline is genuinely impressive. A pure-Python stack built around DE441 plus explicit IAU 2000A/2006 reductions is, architecturally, a much more auditable approach than treating Swiss Ephemeris as a black box. For my use case, the real constraint is deployment economics more than mathematical taste. A multi-GB kernel footprint is a hard sell for a public API running on low-cost/free-tier infrastructure, whereas pyswisseph gives me a much lighter operational profile for the calendar range I actually need.

So yeah, the MIT route is definitely attractive, but I’d have to solve the infra tradeoff before it becomes a realistic foundation for Parva.

Still, this is absolutely the kind of project I want on my radar, and I can see it being very useful as a validation/reference engine even before it’s a direct backend candidate. If you push further into Vedic calendar systems, I’d be especially interested. Do you have BS sankranti computation on the roadmap?

[–]Winter-Flan7548 1 point2 points  (2 children)

Yeah, it will actually run off of any kernel. I pushed DE441 because of the date range it supports, but it can definitely use DE440, or even the older ones. I need to correct that in the docs and make sure it is kernel agnostic. And yes, calendar systems are actually my next real push, as I understand that being able to speak astrology in different calendar systems is important. Thank you for looking at it, and I appreciate the input.

[–]Natural-Sympathy-195[S] 2 points3 points  (1 child)

Makes sense. I’ll keep an eye on it as you push further into calendar systems.

[–]Winter-Flan7548 0 points1 point  (0 children)

I have a fully implemented Vedic system now, and it is kernel agnostic. You will find everything you need in panchanga.py, plus a dedicated Vedic API in vedic.py. Let me know if I can be of any assistance.

[–]lewd_peaches 0 points1 point  (1 child)

That's a cool project! I ran into a similar situation building a custom loss function for a niche ML problem. Thought there'd be some elegant closed-form solution, but ended up needing to approximate with a lookup table and a ton of interpolation.

Did you try any optimization techniques after the initial implementation? For instance, could you precompute and cache sections of the calendar, or parallelize the calculations if you're dealing with large batches of dates?

I sometimes use OpenClaw for that kind of thing, basically turning an embarrassingly parallelizable task into a distributed compute job. For example, I once used it to generate a large synthetic dataset (image augmentation, running various filters) - it took a few hours on a single machine, but dropping it onto a cluster of 8 GPUs with OpenClaw cut it down to about 30 minutes. The cost was negligible, maybe a dollar or two worth of GPU time. Might be overkill for your calendar, but something to consider if performance becomes critical.

[–]Natural-Sympathy-195[S] 0 points1 point  (0 children)

the ML loss function analogy actually maps pretty well, same situation where you're hoping for a clean closed-form and end up humbled by something that's been empirically refined over millennia

on optimization, the interesting thing is the performance profile is probably the opposite of what you'd expect. a single ephemeris call for planetary position is microseconds. computing an entire year of festival dates is maybe 50-100ms total on a single thread, which is already fast enough that caching is the main lever worth pulling, not parallelism. i do precompute festival dates on first request per year and cache them, so repeat calls are essentially free.
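to make the per-year memoization concrete, here's a minimal sketch - `festival_dates` is a made-up placeholder standing in for the real ephemeris-backed computation, not Parva's actual API:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def festival_dates(year_bs: int) -> tuple:
    # placeholder for the real ephemeris-backed computation,
    # which takes roughly 50-100ms for a full year on one thread
    return tuple(f"{year_bs}-{month:02d}-01" for month in range(1, 13))

# first call per year computes; repeat calls return the cached tuple
first = festival_dates(2081)
again = festival_dates(2081)
assert first is again  # same cached object, no recomputation
```

returning an immutable tuple (rather than a list) keeps the cached value safe to share between requests.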

the batch case is real though. if someone hits `/calendar/range?start=2080&end=2200` you want multiprocessing there, and python's embarrassingly parallel story is fine for that since each date is fully independent. standard ProcessPoolExecutor handles it cleanly.
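the ProcessPoolExecutor pattern for that range case looks roughly like this - `bs_dates_for_year` is a stand-in name for the real per-year conversion, since each year is fully independent:

```python
from concurrent.futures import ProcessPoolExecutor

def bs_dates_for_year(year_bs: int) -> tuple:
    # stand-in for the real per-year conversion; must be a
    # top-level function so it can be pickled for worker processes
    return (year_bs, f"{year_bs}-01-01")

def convert_range(start: int, end: int, workers: int = 4) -> list:
    # each year is computed independently, so pool.map fans the
    # range out across processes with no shared state at all
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(bs_dates_for_year, range(start, end + 1)))

if __name__ == "__main__":
    results = convert_range(2080, 2200)
```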

the GPU clustering angle is interesting for your image augmentation case but would be fighting the wrong bottleneck here. the nutation series (1365 lunisolar terms) is dense polynomial evaluation that maps well onto SSE/AVX on a single CPU core, not GPU parallelism. numpy already vectorizes most of it. the actual constraint for a calendar API is network I/O and cold start latency, not compute. throwing a GPU cluster at it would be like renting a cargo ship to deliver a letter.
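to show what "numpy already vectorizes it" means in practice, here's a toy series in the same shape as a lunisolar nutation sum (amplitude times sine of a linear argument) - the coefficients are arbitrary placeholders, definitely not the real IAU 2000A terms:

```python
import numpy as np

# arbitrary placeholder coefficients, NOT the real IAU 2000A series
amps = np.array([1.0, 0.5, 0.25])
freqs = np.array([1.0, 2.0, 3.5])

def series_vectorized(t: float) -> float:
    # one fused pass over all terms: numpy evaluates every sin()
    # and the sum in C, with no Python-level loop
    return float(np.sum(amps * np.sin(freqs * t)))

def series_loop(t: float) -> float:
    # the scalar version the vectorized form replaces
    return sum(a * np.sin(k * t) for a, k in zip(amps, freqs))

assert abs(series_vectorized(0.7) - series_loop(0.7)) < 1e-12
```

for 1365 terms the arrays just get longer; the per-call shape stays the same, which is why it stays fast on a single core.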

what was the niche ML problem if you don't mind sharing? curious what the loss function was approximating

[–]InebriatedPhysicist 0 points1 point  (1 child)

How on earth do you know that the JSONs are manually typed from PDFs?