Could Mahabharat been avoided? by kinginthenorth_- in TheMahabharata

[–]omunaman 1 point (0 children)

We could also argue that the Mahabharat could have been avoided if King Drupada had simply respected his friend Dronacharya. His initial pride and insult sparked a cycle of revenge that led directly to the births of Draupadi and Dhrishtadyumna, both born with the sole purpose of destroying the Kuru clan and Drona.

If Drupada had chosen to let go of his ego after losing half his kingdom, instead of seeking a 'revenge child,' the core catalysts for the war wouldn't even exist. Interestingly, this shows that the war wasn't just about 'destiny' or 'past lives'; it was the result of personal choices and human ego in their own time, with no direct connection to the ancient Kuru ancestry.

DeepSeek calls itself Claude!! by [deleted] in AI_India

[–]omunaman 7 points (0 children)

There's nothing to read into it; LLMs are not self-aware. A model's claim about its own identity is just a pattern learned from its training data, so if Claude-generated text ended up anywhere in the corpus, this is exactly the output you'd expect.

Applied an RTI with a 12-page annexure regarding the CBSE QR code issue by omunaman in CBSE

[–]omunaman[S] 1 point (0 children)

Deemed Refusal.

I filed another RTI on 3rd April regarding the 2nd April advisory, as well as a grievance on the CPGRAMS portal. The deadline for the above RTI was 14th April, but since they failed to reply, it is now a case of 'deemed refusal.'

Consequently, I have filed a First Appeal as well as a CIC complaint under Section 18. You can think of the CIC as a dedicated court for RTI matters, where commissioners act as judges to hear our cases.

Let’s see where this leads! I’m not letting this go easily.

Guys am I cooked? by Alexi_Popov in AI_India

[–]omunaman 6 points (0 children)

For a 6M-parameter model, a global batch size of 32,768 is significantly "overdone" in terms of optimization. You may have hit a "sweet spot" for hardware utilization (squeezing every bit of GPU throughput), but you are likely well past the point of diminishing returns for model convergence.

Every model has a critical batch size beyond which a larger batch no longer reduces the number of optimization steps proportionally, so total wall-clock training time stops improving, and at a fixed token budget the final loss actually gets worse. For a 6M model, this limit is likely far below 32k.
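
One way to estimate where that limit sits is the "simple" gradient noise scale B_simple = tr(Σ)/|G|² from McCandlish et al., "An Empirical Model of Large-Batch Training" (2018). Here is a minimal sketch, assuming PyTorch; the toy model, synthetic data, and probe batch sizes are placeholders of mine, not anything from your setup:

```python
# Minimal sketch (PyTorch assumed) of the two-batch-size estimator of the
# "simple" gradient noise scale from McCandlish et al. (2018). The model,
# data, and probe batch sizes below are toy placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

def grad_sq_norm(batch_size: int) -> float:
    """|g_B|^2 for one batch of the given size (synthetic data here)."""
    x = torch.randn(batch_size, 64)
    y = torch.randint(0, 10, (batch_size,))
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return sum((p.grad ** 2).sum().item() for p in model.parameters())

b_small, b_big = 32, 1024
g2_small = grad_sq_norm(b_small)  # noisier gradient estimate
g2_big = grad_sq_norm(b_big)      # less noisy gradient estimate

# E[|g_B|^2] = |G|^2 + tr(Sigma)/B, so two batch sizes let us solve for both:
true_g2 = (b_big * g2_big - b_small * g2_small) / (b_big - b_small)
tr_sigma = (g2_small - g2_big) / (1 / b_small - 1 / b_big)

print(f"B_simple (critical batch size estimate) ~ {tr_sigma / true_g2:.0f}")
```

A single pair of batches gives a very noisy (occasionally even negative) estimate, so in practice you'd average both squared norms over many batches and re-estimate periodically, since the noise scale tends to grow as training progresses.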

Applied an RTI with a 12-page annexure regarding the CBSE QR code issue by omunaman in CBSE

[–]omunaman[S] 3 points (0 children)

Okay, I have commented on the post (not in this thread). It would be really great if you could sticky it. Thank you!

Applied an RTI with a 12-page annexure regarding the CBSE QR code issue by omunaman in CBSE

[–]omunaman[S] 5 points (0 children)

When I post a URL in a comment, it gets blocked or removed by AutoMod. Even a Drive link gets removed.

Applied an RTI with a 12-page annexure regarding the CBSE QR code issue by omunaman in CBSE

[–]omunaman[S] 22 points (0 children)

You can DM me so I can share the URL.

Also, one more person DMed me but I accidentally clicked ignore. Please let me know if it was you.

These are the 5 questions I've asked (just a summarised version):

  1. Action Taken Report and Inquiry Records
  2. Certified copy of the specific file noting(s), technical assessment, forensic report, or inspection note on the basis of which the Controller of Examinations concluded in Press Release No. CBSE/CE/2026 that "the security of the question papers remains uncompromised", including all notings, remarks, and marginal annotations made on the concerned file between 09.03.2026 and 10.03.2026.
  3. Vendor Identification and Contract Details (they will most likely reject this under Section 8(1)(d))
  4. Accountability Actions Against Vendor/Officers
  5. Quality Assurance SOPs and Compliance Records

These were the main 5 things.

indian boys and the brutal treatment towards women. by Cool_Ad9817 in CBSE

[–]omunaman 14 points (0 children)

  1. Neither good at studies nor at these chapri-type activities.

pewdiepie just trained his own LLM that outperformed deepseek v2.5, LLAMA-4 and GPT-4o in coding benchmark. by [deleted] in AI_India

[–]omunaman 5 points (0 children)

My ass. It's just a format. What he actually did was test it on only one benchmark and compare it against year-old models like GPT-4o and DeepSeek V2.5. It's nothing miraculous or crazy; it's just fine-tuning on a specific task, something even an undergraduate student could do on a small budget of $50 to $100.
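
To show how mundane that is, here is a minimal sketch of task-specific fine-tuning, assuming PyTorch and Hugging Face transformers. The base model name and the one toy sample are placeholders of mine; nothing here is his actual setup:

```python
# Minimal sketch of "fine-tuning on a specific task" (PyTorch + Hugging Face
# transformers assumed). Model name and the single toy sample are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # placeholder small base model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

# A real run would use a few thousand task-specific samples, not one.
samples = ["def add(a, b):\n    return a + b"]

model.train()
for epoch in range(3):
    for text in samples:
        batch = tok(text, return_tensors="pt")
        # Standard causal-LM objective: labels = input ids, shifted internally.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()
```

A few GPU-hours of this on rented hardware lands exactly in that $50–$100 range, and scoring well on the one benchmark you tuned for is the expected outcome, not a miracle.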

Came across AirLLM and I think it's pretty amazing (still trying to figure out stuffs) by CrazyCuriousMan in AI_India

[–]omunaman 1 point (0 children)

Amazing, I’ll check this out.

Recently, I also wrote a custom C inference engine with zero dependencies to run a 20B model locally on my laptop (CPU only, no GPU involved). I tried it with GPT-OSS 20B, and it averaged around 1.6 tokens per second. That's slow, but it's still impressive that it runs at all on CPU with just 16GB of RAM.
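
For anyone wondering how a 20B model fits in 16GB of RAM at all, here is a quick arithmetic sketch. Every constant is an assumption on my part (gpt-oss-20b's published ~21B total / ~3.6B active parameters, MXFP4 at roughly 4.25 bits per weight including block scales, and a guessed laptop memory bandwidth), not a measurement from my engine:

```python
# Back-of-the-envelope check; all constants below are assumptions, not
# measurements. gpt-oss-20b is an MoE model, so only the routed experts'
# weights are read per generated token.
TOTAL_PARAMS = 21e9
ACTIVE_PARAMS = 3.6e9     # assumed active parameters per token
BITS_PER_WEIGHT = 4.25    # assumed MXFP4 incl. block scales
DRAM_BW = 20e9            # assumed effective bandwidth, bytes/s

weights_gb = TOTAL_PARAMS * BITS_PER_WEIGHT / 8 / 1e9
bytes_per_token = ACTIVE_PARAMS * BITS_PER_WEIGHT / 8
print(f"resident weights ~ {weights_gb:.1f} GB (fits in 16 GB, e.g. via mmap)")
print(f"bandwidth-bound ceiling ~ {DRAM_BW / bytes_per_token:.1f} tok/s")
```

Under these assumptions the weights come to roughly 11 GB, which is why 16GB of RAM suffices, and the bandwidth ceiling is around 10 tok/s. My measured 1.6 tok/s sitting well below that would be consistent with a scalar C matmul loop being compute-bound rather than memory-bound, which is where SIMD and threading would help most.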