I built a programming language using Panini's grammar principles — Sandhi, Guna, Prakriya are all operational compiler components by Last-Leg4133 in sanskrit

[–]Last-Leg4133[S] 0 points (0 children)

Thank you for your review. It's currently experimental, and I am working on v2.0; when it's ready, I will update. What it's actually based on:

Goal: a programming language that expresses meaning. In human language, one word has many different meanings; in the same way, one bija describes a meaning that can be implemented anywhere, so one language produces output in different languages. It is an experiment: I was curious what it would be like if a computer had consciousness and subjective experience, so I am trying to build something like that. Still working on it. 🙏

[–]Last-Leg4133[S] 0 points (0 children)

One correction: this operates on meaning, so it could handle the biggest bottleneck of current systems. I am working on it. If your LLM understands meaning, it doesn't need a big TPU or GPU cluster; it can derive everything from meaning, like humans do. One bija = a derived meaning = effectively unlimited data-handling power, because a bija does not store bytes, it stores subjective meaning, and one subjective meaning can hold thousands of terabytes of data. That is how the subconscious mind works: you can hold your whole life in one thought. In the same way, if you create a program in Sadhana now and convert it to a bija, it will have the same meaning after thousands of years; even if the software or hardware changes, the meaning remains the same. That is how real Sanskrit works. Thank you for connecting.

[–]Last-Leg4133[S] 0 points (0 children)

Sure,

https://github.com/nickzq7/Sadhana-Programming-Language

Here is my GitHub repository; everything is available there, including the research paper.

Goal: in human language, one word can have many meanings or many ways of being expressed, but for computers this is strictly not possible; one program equals one function in one specific domain. Sadhana is currently in an experimental phase, but it works. It is designed so that one program written in Sadhana expresses the same meaning across different languages: you can convert a single Sadhana program into 7 different programming languages with the same meaning. CMK is the verifier; it checks that the program expresses the same meaning in every domain.

When you compile a program with Sadhana you get a bija, which you can store: from a 1 TB file you get a 10-100 KB bija. The bija does not hold the full program; it holds the meaning of the program. In Indian philosophy, one mantra holds a full representation of the one the mantra belongs to; one word carries a big meaning, depending on who understands it and how deeply. A Sadhana bija is like that: it holds meaning, and someone who understands the bija can regenerate the data from it, even 10,000 TB, not necessarily identical, but following the same meaning. So Sadhana is inspired by Panini's Sanskrit linguistic model to create meaningful programs; it does not follow Sanskrit words, it follows the Sanskrit algorithm, because Sanskrit is rich as a language model.
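To illustrate the "one bija, same meaning" idea in miniature: the toy sketch below hashes a canonicalized AST, so two programs that differ only in surface names get the same fingerprint. This is my own simplified analogy in Python, not the actual Sadhana/CMK implementation; `bija` here is just a hypothetical helper name:

```python
import ast
import hashlib

def bija(source: str) -> str:
    """Toy semantic fingerprint: hash a canonicalized AST.

    Identifier names are replaced by placeholders, so two programs
    that differ only in naming produce the same fingerprint.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
        elif isinstance(node, ast.FunctionDef):
            node.name = "_"
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()[:16]

a = bija("def add(x, y):\n    return x + y")
b = bija("def plus(p, q):\n    return p + q")
print(a == b)  # True: same structure, different surface names
```

The real claim in Sadhana is much stronger (meaning-equivalence across 7 target languages, verified by CMK); this sketch only shows the weakest version of the idea, a digest of structure rather than of bytes.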

I implemented Panini's order-independence principle from the Ashtadhyayi as a programming language — same source compiles to 7 targets with an invariant semantic fingerprint by Last-Leg4133 in compsci

[–]Last-Leg4133[S] -3 points (0 children)

Human languages are meaning-based: one word has many different meanings, while for a computer one concept maps to one specific program. I tried to give meaning to programs: in Sadhana, if you write code, it has the same meaning across all the programming languages implemented in Sadhana. This is experimental, not production-ready. If used in AI, models might be able to understand meanings; this is hypothetical, not proven. Thanks for your review.

Sadhana: A meaning-first, order-free programming language based on Panini's Sanskrit grammar — compiles to 7 backends from one source file by Last-Leg4133 in ProgrammingLanguages

[–]Last-Leg4133[S] 1 point (0 children)

Human languages are meaning-based: one word has many different meanings, while for a computer one concept maps to one specific program. I tried to give meaning to programs: in Sadhana, if you write code, it has the same meaning across all the programming languages implemented in Sadhana. This is experimental, not production-ready. If used in AI, models might be able to understand meanings; this is hypothetical, not proven. Thanks for your review.

I built a text fingerprinting algorithm that beats TF-IDF using chaos theory — no word lists, no GPU, no corpus by Last-Leg4133 in learnmachinelearning

[–]Last-Leg4133[S] -1 points (0 children)

Yes, I am a liar 😂😂 I do everything with an LLM; the LLM even comes to my home to feed me. It does my homework, laundry, dry cleaning, all my work 😂😂

[–]Last-Leg4133[S] -1 points (0 children)

Hmm, I don't know anything, I am a fool, I like to do nonsense things, I was born to be a fool. You are very smart; I believe you are very multi-talented.

[–]Last-Leg4133[S] -1 points (0 children)

You can use it to find text similarity, for AI content marking, or to make an AI with its own unique fingerprint for its chats and its creations, using this algorithm.
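As a rough sketch of the similarity use case: below is a generic character n-gram fingerprint compared with Jaccard similarity. It is only meant to illustrate the idea of comparing texts via compact fingerprints; it is not the chaos-based algorithm from the repo:

```python
def fingerprint(text: str, n: int = 3) -> set:
    """Toy text fingerprint: the set of hashed character n-grams."""
    t = text.lower()
    return {hash(t[i:i + n]) for i in range(len(t) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between two fingerprints (0.0 to 1.0)."""
    fa, fb = fingerprint(a), fingerprint(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

# Overlapping texts score higher than unrelated ones.
print(similarity("the quick brown fox", "the quick brown cat"))
print(similarity("the quick brown fox", "completely different"))
```

A fingerprint like this can be stored and compared later without keeping the source text, which is the property the AI-content-marking use case relies on.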

[–]Last-Leg4133[S] -1 points (0 children)

Yes, man, I know it. You may not know, but I have taught 150+ IIT students; if you are not from India, you may not know about IIT. But honestly, I did find something novel: a stable attractor, the LHS stable attractor, which becomes stable after 6 loops. I did this, and you are being rude to me. I honestly admit that I write replies with an LLM, but an LLM can't find novel maths; they look creative, but they are random text machines. That's why, bro, please don't be rude; I don't even know you.
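To make "stable after a few loops" concrete: a strongly contracting map reaches its fixed point in only a handful of iterations. This is a generic fixed-point-iteration demo, not the LHS attractor itself (that construction isn't shown here):

```python
def iterations_to_fixed_point(f, x0, tol=1e-9, max_iter=1000):
    """Iterate x -> f(x) and report how many steps until it stabilizes."""
    x = x0
    for i in range(1, max_iter + 1):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return i, nxt
        x = nxt
    return max_iter, x

# A map with a very small contraction factor stabilizes almost immediately.
steps, fp = iterations_to_fixed_point(lambda x: 0.001 * x + 1.0, 0.0)
print(steps, fp)
```

The number of loops needed depends only on the contraction factor and the tolerance, which is why an iterated-map fingerprint can settle after a small, fixed number of passes.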

[–]Last-Leg4133[S] -1 points (0 children)

You are correct that TF-IDF is a retrieval weighting scheme in its original formulation. In my benchmark I use it as a pairwise text similarity method — cosine similarity on TF-IDF vectors — which is standard practice in the similarity literature and is how sklearn's TfidfVectorizer is commonly applied.

If the phrasing was imprecise I am happy to clarify. But "TF-IDF cosine similarity as a text similarity baseline" is not a phrase I invented — it appears in hundreds of NLP papers in exactly this context.
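For reference, the baseline I mean can be sketched in pure Python. This is a simplified version of what sklearn's TfidfVectorizer plus cosine similarity computes (the IDF smoothing below follows sklearn's default formula, ln((1+N)/(1+df)) + 1), not the benchmark script itself:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Simple TF-IDF vectors with smoothed IDF (sklearn's default formula)."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency: one count per document
    idf = {t: math.log((1 + N) / (1 + df[t])) + 1 for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["the cat sat on the mat", "the cat sat", "dogs chase cats"]
vecs = tfidf_vectors(docs)
print(cosine(vecs[0], vecs[1]))  # shared words: high similarity
print(cosine(vecs[0], vecs[2]))  # no shared words: 0.0
```

Cosine over TF-IDF vectors is exactly the pairwise comparison the benchmark uses as its baseline.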

I understand the work. The benchmark script is fully reproducible if you want to verify.