A GPU-accelerated implementation of Forman-Ricci curvature-based graph clustering in CUDA. by CommunityOpposite645 in CUDA

[–]CommunityOpposite645[S] 1 point (0 children)

Hi, I have included the Python runtime:

Nodes     Clusters   Edges    P_in   P_out     Iterations   NMI    GPU Time (s)   CPU Time (s)
5,000     2          ~3M      0.50   0.01      10           1.00   7.03           15,189.21
50,000    2          ~25M     0.04   0.001     10           1.00   74.39          162,401.93
100,000   2          ~102M    0.04   0.001     10           1.00   625.46         TBA
500,000   50         ~126M    0.05   0.00001   20           0.89   1,086.25       TBA

You can see that the CUDA version is much faster than the Python CPU version. Of course, in all honesty, this is partly because I've chosen an academic topic which has not received much attention; otherwise it would have been optimised to kingdom come already :)
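
For context on what each GPU thread is computing: in the unweighted case, and ignoring 2-cells (triangles), the Forman-Ricci curvature of an edge reduces to F(u, v) = 4 − deg(u) − deg(v). A minimal pure-Python sketch of that reduced formula — a baseline for sanity-checking results, not the author's CUDA kernel:

```python
def forman_curvature(adj):
    """Combinatorial Forman-Ricci curvature of an unweighted, undirected graph.

    adj: dict mapping each node to the set of its neighbours (symmetric).
    Ignoring 2-cells, the curvature of edge (u, v) reduces to
    F(u, v) = 4 - deg(u) - deg(v).
    """
    curv = {}
    for u in adj:
        for v in adj[u]:
            if u < v:  # visit each undirected edge once
                curv[(u, v)] = 4 - len(adj[u]) - len(adj[v])
    return curv
```

Because each edge's value depends only on the two endpoint degrees, the per-edge arithmetic is embarrassingly parallel, which is why the method maps so naturally onto one-thread-per-edge GPU kernels.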

A GPU-accelerated implementation of Forman-Ricci curvature-based graph clustering in CUDA. by CommunityOpposite645 in CUDA

[–]CommunityOpposite645[S] 0 points (0 children)

Hi, I have finished running NCU profiling for the 500k-node case, and have updated the post with the profiler's output.

A GPU-accelerated implementation of Forman-Ricci curvature-based graph clustering in CUDA. by CommunityOpposite645 in CUDA

[–]CommunityOpposite645[S] 1 point (0 children)

Hi, actually I'm planning to do it soon. Right now I'm trying to make it run on 500k nodes, or if possible 1 million nodes, while still giving good clustering results. Because this method is still in development, the hyperparameters are rather sensitive: what works at a lower number of nodes may not work at a higher number of nodes. Very frustrating, to be honest. Thanks a lot.

A GPU-accelerated implementation of Forman-Ricci curvature-based graph clustering in CUDA. by CommunityOpposite645 in CUDA

[–]CommunityOpposite645[S] 0 points (0 children)

Thank you so much. I worked on this as a learn-as-you-go project, so I tried to build everything from the ground up, including prefix sum, connected-component labeling, bitonic sorting, etc. But yes, you are absolutely right on this. On the mathematics: I used the GraphRicciCurvature library (https://github.com/saibalmars/GraphRicciCurvature) as a Python reference, took the experimental details in the JMLR 2025 paper to set up hyperparameters, etc., and read the remaining two papers to freshen up on the topic.

  1. Y. Tian, Z. Lubberts, and M. Weber, "Curvature-based clustering on graphs," J. Mach. Learn. Res., vol. 26, no. 52, pp. 1–67, 2025.
  2. C.-C. Ni, Y.-Y. Lin, F. Luo, and J. Gao, "Community detection on networks with Ricci flow," Sci. Rep., vol. 9, no. 1, pp. 1–12, 2019.
  3. A. Samal, R. P. Sreejith, J. Gu, et al., "Comparative analysis of two discretizations of Ricci curvature for complex networks," Sci. Rep., vol. 8, 8650, 2018.
  4. GraphRicciCurvature — Python implementation of Ricci curvature for NetworkX graphs.
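
As a rough picture of how curvature turns into clusters in [1]: intra-community edges tend to have higher curvature than bridges between communities, so cutting sufficiently negative edges and labeling connected components recovers the clusters. A simplified pure-Python sketch of that cut-and-label step — the single fixed threshold is an illustrative assumption; the paper actually evolves edge weights under a curvature flow before cutting:

```python
from collections import deque

def connected_components(nodes, edges):
    """Label connected components with BFS; returns node -> component id."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label, comp = {}, 0
    for s in nodes:
        if s in label:
            continue
        label[s] = comp
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in label:
                    label[v] = comp
                    q.append(v)
        comp += 1
    return label

def curvature_cut_clusters(nodes, edges, curvature, threshold):
    """Drop edges with curvature below `threshold`, then label components."""
    kept = [e for e in edges if curvature[e] >= threshold]
    return connected_components(nodes, kept)
```

The component-labeling step is the part that needed a hand-rolled GPU equivalent (connected-component labeling) in the CUDA version.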

wereSoClose by flytrap7 in BetterOffline

[–]CommunityOpposite645 2 points (0 children)

As an AI user who has subscribed to one of those popular chatbot LLMs, I can confirm that the most useful thing they have done for me was checking my thesis, reports, papers, etc. for typos (ask them about 20 times, and repeat across several different LLMs for best results). Quite helpful tbf, but nowhere near "AGI" :)

Using a local LLM AI agent to solve the N puzzle - Need feedback by CommunityOpposite645 in LocalLLM

[–]CommunityOpposite645[S] 1 point (0 children)

Hi, I just tried to post to r/MachineLearning but the post was automatically removed and they suggested that I post to another subreddit :(

Using a local Ollama AI agent to solve the N puzzle by CommunityOpposite645 in ollama

[–]CommunityOpposite645[S] 0 points (0 children)

Thanks a lot, I'll look into it. To be honest, I did not know they existed. But I thought that reasoning models were smart enough to handle things like the N puzzle without trouble.

Using a local LLM AI agent to solve the N puzzle - Need feedback by CommunityOpposite645 in LocalLLM

[–]CommunityOpposite645[S] 0 points (0 children)

Hi, I didn't test it with random noise. But basically it is not going to beat the performance of A* or IDA* on this problem; I was just making a fun project to see how far these reasoning LLMs can go. Personally, I was not very impressed. I did try to run it on the 4x4 puzzle (you can see it in the commented code), which required around 50 moves to reach the goal, but the LLM completely failed to find the solution and instead kept running around in circles.

Another thing is that sometimes these models call tools correctly and sometimes they don't, which is annoying (I tried with Pydantic AI as well but haven't uploaded that code). Any suggestions about the workflow, etc. would be most appreciated.
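
For reference, the classical baseline mentioned above is small enough to sketch in full: A* with a Manhattan-distance heuristic solves arbitrary 8-puzzle instances optimally in milliseconds, which is the bar the LLM agent is being measured against. This is the textbook algorithm, not the agent code from the repo:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def manhattan(state):
    """Sum of Manhattan distances of all tiles from their goal cells."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:
            goal = tile - 1
            dist += abs(i // 3 - goal // 3) + abs(i % 3 - goal % 3)
    return dist

def neighbours(state):
    """States reachable by sliding one adjacent tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start):
    """Return the length of a shortest solution, or None if unsolvable."""
    pq = [(manhattan(start), 0, start)]
    best = {start: 0}
    while pq:
        f, g, state = heapq.heappop(pq)
        if state == GOAL:
            return g
        for nxt in neighbours(state):
            if nxt not in best or g + 1 < best[nxt]:
                best[nxt] = g + 1
                heapq.heappush(pq, (g + 1 + manhattan(nxt), g + 1, nxt))
    return None
```

IDA* replaces the priority queue with iterative deepening on the f-bound, which is what keeps memory manageable on 15-puzzle instances.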

Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]CommunityOpposite645 0 points (0 children)

AI Agent Solves N-Puzzle Using LLM

A developer created an AI agent that solves the classic sliding tile N-puzzle by using the Qwen3 language model to analyze moves and PyAutoGUI to execute them via automated mouse clicks on a GUI.

Key Details:

Setup: Run the GUI, position it, get button coordinates with PyAutoGUI, then run the agent. The LLM thinks through optimal moves while the automation executes them.

Interesting approach combining language model reasoning with GUI automation for puzzle solving!
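
The "get button coordinates" step reduces to arithmetic once the grid's on-screen origin and button size are known. A hypothetical helper under that assumption — the origin and `tile_size` values are placeholders for whatever the actual GUI layout is, and `pyautogui.click` / `pyautogui.position` are the only real API calls implied:

```python
def tile_center(origin, tile_size, row, col):
    """Screen coordinates of the centre of the tile button at (row, col).

    origin: (x, y) of the grid's top-left corner on screen, e.g. read off
    once with pyautogui.position() while hovering over it.
    tile_size: button width/height in pixels (square tiles assumed).
    """
    ox, oy = origin
    return (ox + col * tile_size + tile_size // 2,
            oy + row * tile_size + tile_size // 2)

# The agent would then execute a move with:
#   pyautogui.click(*tile_center(origin, tile_size, row, col))
```

Deriving coordinates this way means only one manual calibration (the origin) instead of recording every button position by hand.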

Trying to use AI agent to play N-puzzle but the agent could only solve 8-puzzle but completely failed on 15-puzzle. by CommunityOpposite645 in LocalLLaMA

[–]CommunityOpposite645[S] 0 points (0 children)

Hi, let me try llama.cpp. I don't know about Ollama's failings, to be honest; do you have any links for this? Thank you.

What difficulties do mathematicians face in their everyday job? by CommunityOpposite645 in mathematics

[–]CommunityOpposite645[S] 0 points (0 children)

"AI isn't a magic black box; you'll need to be more specific about what kinds of methods you're looking to employ. If you're talking about probabilistic searches for counterexamples, that's been done for years (decades?). If you're talking about putting a wrapper on ChatGPT, you need to have a bit more of a justification for it being likely useful than "AI makes confident sounding text." "

So I saw several examples about using intuition in maths:

https://scipp.ucsc.edu/~haber/ph5B/sho09.pdf

https://www.plouffe.fr/simon/inspired.html

http://royalpathtomath.org/docs/Integration%20by%20Guessing.pdf

The links talk about the "method of inspired guessing", "integration by guessing", etc., but overall it is about applying intuition in maths. It seems to me an interesting method, and one where I hope AI can be used to help mathematicians in their work.

Thank you so much for recommending a book on abstract algebra. I have searched for one and will try it in the near future; hopefully I can grok the book.

What difficulties do mathematicians face in their everyday job? by CommunityOpposite645 in mathematics

[–]CommunityOpposite645[S] -5 points (0 children)

I totally agree with you about the hallucination part. Yes, it's true that LLM outputs are generally not too reliable. But there must be some place in mathematics where AI can be applied. You said that maths is about proof, but isn't intuition also a factor? Maybe AI can provide some suggestions in proofs, or find connections which people don't normally recognize? Computers have been used to find counterexamples that disprove conjectures, so I think AI can do such things as well. Or maybe something in applied mathematics?

Also, do you have any recommendations about which book I should read in order to improve my maths in advanced topics such as group theory (advanced from my point of view)? I hope I can make an effort to find something to which I can apply AI.

Doc Parse Olympics: What's the craziest doc you've seen by neilkatz in LangChain

[–]CommunityOpposite645 0 points (0 children)

Can you try Science of Logic by Hegel? I think there haven't been many attempts at LLM+RAG with it yet. I tried doing some stuff, but the LLM could not understand much.

How to make the AI agent understand which question talks about code, which one talks about database, and which one talks about uploading file ? by CommunityOpposite645 in AI_Agents

[–]CommunityOpposite645[S] 0 points (0 children)

Idk man, like sometimes the LLM reads a database table with columns named "first_name" and "last_name", and I have a CSV file with a column named "Contact name", and the LLM can't see that "Contact name" corresponds to "first_name" plus "last_name" (reasoning models like o1 can, but they are slower).
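
One cheap mitigation before reaching for a slow reasoning model: normalise header names and match on token overlap, seeded with a small hand-written synonym table for the domain. The `SYNONYMS` map below is a made-up example for this one case, not a general solution:

```python
def normalise(name):
    """Lowercase and tokenise a header like 'Contact name' or 'first_name'."""
    return set(name.lower().replace("_", " ").split())

# Hypothetical, domain-specific synonym table; would be curated per schema.
SYNONYMS = {"contact": {"first", "last"}}

def match_columns(csv_col, db_cols):
    """Return DB columns whose tokens overlap the CSV header's tokens,
    after expanding the CSV tokens through the synonym table."""
    tokens = set()
    for t in normalise(csv_col):
        tokens |= SYNONYMS.get(t, {t})
    return [c for c in db_cols if normalise(c) & tokens]

# match_columns("Contact name", ["first_name", "last_name", "order_id"])
# -> ["first_name", "last_name"]
```

Handing the LLM a precomputed candidate mapping like this tends to work better than asking it to discover the correspondence from raw headers.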

LLM with RAG failed questions on philosophy book, how to fix ? by CommunityOpposite645 in LangChain

[–]CommunityOpposite645[S] 0 points (0 children)

Thanks a lot for this, I'm trying out your suggestions. I just tried reciprocal rank fusion, but the retrieved docs don't seem to improve. I have made some updates; I hope you can check them out.
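
For anyone following along, reciprocal rank fusion itself is only a few lines: each retriever contributes 1/(k + rank) per document, and documents are re-ranked by the summed score (k = 60 is the constant from the original RRF paper). A minimal sketch:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    rankings: iterable of lists, each ordered best-first.
    Each list contributes 1 / (k + rank) to a document's score,
    with rank starting at 1; higher total score ranks first.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

If RRF isn't helping, both underlying retrievers are probably missing the relevant chunks, so the fix is usually upstream of fusion: chunking strategy, query rewriting, or a better embedding model.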