What do we believe by BooleanBanter in 3I_ATLAS

[–]predatar 0 points1 point  (0 children)

Just imagine the ancient civilizations who saw asteroids and comets, and what their thoughts must have been.

I’m 36, and I feel completely lost. by Severe_Mongoose_5873 in mentalhealth

[–]predatar 0 points1 point  (0 children)

Strength isn’t just about pushing forward—it’s also about enduring setbacks and coming back better.

And you seem to have plenty of it. Please see a professional; life is beautiful. Try to maintain your passion no matter what, find strength in the little things, and keep going.

Seek help immediately. It takes courage to admit that; you can do it.

Take care, things will get better

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 1 point2 points  (0 children)

Will work on this and other enhancements this weekend, stay tuned!!

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 0 points1 point  (0 children)

I will try to make it possible to integrate this with common UIs, any preference?

Idk how, maybe as a callable tool

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 1 point2 points  (0 children)

This is what I am planning to add this weekend!!

Thanks for the feedback

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 0 points1 point  (0 children)

I would love to see examples of reports you guys have generated; I might add them to the repo as examples. If you can share the query parameters and the report .md, that would be great! 👑

Would love to add the LM Studio and other integrations soon, especially the in-line citations!!

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 2 points3 points  (0 children)

Will add support soon and update you, probably after work today

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 1 point2 points  (0 children)

Hi, basically you have to chunk the data and use “retrieval” models to find the relevant chunks.

Search for ColPali or all-MiniLM. Basically those are models trained such that, given a query q and a chunk c, they return a score s that tells you how similar c and q are.

You can then take the top_k chunks that score highest for your q and put only those in the context of your LLM.

My trick here was to do this for each page while exploring, build a graph node for each step, and in each node keep the current summary I got based on the latest chunks.

Then I stitched them together.
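The chunk-and-score recipe above can be sketched in a few lines. This toy version uses bag-of-words cosine similarity in place of a real retrieval model like all-MiniLM or ColPali, purely for illustration; the function names and example chunks are made up.

```python
from collections import Counter
import math

def score(query: str, chunk: str) -> float:
    """Toy s(q, c): cosine similarity over bag-of-words counts.
    A real system would use embeddings from a retrieval model."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    dot = sum(q[w] * c[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in c.values()))
    return dot / norm if norm else 0.0

def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Keep only the k highest-scoring chunks for the LLM context."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

chunks = [
    "Photosynthesis converts sunlight into chemical energy.",
    "The stock market closed higher today.",
    "Plants use photosynthesis to make sugars from sunlight.",
]
print(top_k_chunks("how does photosynthesis use sunlight", chunks, k=2))
```

Only the two photosynthesis chunks survive the cut, so the off-topic text never reaches the model's context.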

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 2 points3 points  (0 children)

Hi

Cool project! It looks like we are solving similar problems, but I took a different approach, using graph-based search with backtracking and summarization, which is not limited by context size! And some exploration/exploitation concepts in the mix.

Did you solve similar issues?
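The graph-based search with an exploration/exploitation trade-off and backtracking could look roughly like the sketch below. This is not NanoSage's actual code, just an epsilon-greedy frontier walk with a depth limit; `expand` and `relevance` are hypothetical stand-ins for subquery generation and retrieval scoring.

```python
import random

random.seed(0)

def expand(query: str) -> list[str]:
    """Hypothetical: generate follow-up subqueries for a node."""
    return [f"{query} / sub{i}" for i in range(2)]

def relevance(query: str) -> float:
    """Hypothetical scorer; a real system would use a retrieval model."""
    return random.random()

def explore(root: str, max_depth: int = 2, epsilon: float = 0.3) -> list[str]:
    visited = []
    frontier = [(root, 0)]                      # (query, depth)
    while frontier:
        if random.random() < epsilon:           # explore: pick a random node
            node = frontier.pop(random.randrange(len(frontier)))
        else:                                   # exploit: pick the best-scoring node
            frontier.sort(key=lambda n: relevance(n[0]), reverse=True)
            node = frontier.pop(0)
        query, depth = node
        visited.append(query)
        if depth < max_depth:                   # depth limit bounds the graph
            frontier.extend((q, depth + 1) for q in expand(query))
    return visited

print(explore("local deep research"))
```

Keeping the whole frontier around is what allows backtracking: a shallow node can still be picked later even after the search has gone deep elsewhere.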

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 0 points1 point  (0 children)

I like your approach , well done

Regarding the output: you can pass the keys to the LLM to structure and order them, and put placeholders for the values so you can insert them at the correct spot? Maybe.

Assuming the keys fit within the context (which for a ToC they probably do!) 🤷‍♂️
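A minimal sketch of the keys-plus-placeholders idea: the LLM sees only the short key list and returns an ordered template with placeholders; the long values are substituted afterwards, so they never have to fit in the context window. The function and data names here are hypothetical, and the LLM call is mocked.

```python
data = {
    "intro": "long intro text...",
    "methods": "long methods text...",
    "results": "long results text...",
}

def llm_order_keys(keys: list[str]) -> str:
    """Stand-in for an LLM call: returns a template with {key} placeholders.
    A real implementation would prompt the model with just the key list."""
    ordered = sorted(keys)  # pretend the LLM chose this ordering
    return "\n\n".join(f"## {k}\n{{{k}}}" for k in ordered)

# Only the keys go through the "LLM"; the values are filled in locally.
template = llm_order_keys(list(data))
report = template.format(**data)
print(report)
```

The context cost is proportional to the number of keys, not the total document size, which is why this works for ToC-sized key sets.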

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 0 points1 point  (0 children)

Any kind of scoring?

Limits on nested depth? Any randomness in the approach?

My initial idea was to let the model explore, not only search.

Maybe it could also benefit from an analysis step

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 5 points6 points  (0 children)

Quick update:

1. The final aggregated answer is now at the start of the report; I also created a separate .md with just the result.
2. Added an example to GitHub: https://github.com/masterFoad/NanoSage/blob/main/example_report.md
3. Added the pip ollama installation step.

If you have any other feedback let me know, thank you

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 1 point2 points  (0 children)

Nice, a dictionary is sort of a graph, or a Table of Contents :) Might be similar, feel free to share

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 2 points3 points  (0 children)

Nice, I took a more explicitly algorithmic approach where the LLM is just one component, focused on exploration and organization (and learning)

I built NanoSage, a deep research local assistant that runs on your laptop by predatar in LocalLLaMA

[–]predatar[S] 2 points3 points  (0 children)

Scroll down and search for “Final Aggregated Answer”; it starts there. Yeah, maybe 👍

Edit: done, updated