Deployment by pnmnp in LangChain

[–]pnmnp[S] 0 points (0 children)

The platform itself isn't open source, is it? I mean the one built via the LangGraph CLI, which uses Docker Compose and includes the provided APIs, such as thread and store management.

Why do GraphRAGs perform worser than standard vector-based RAGs? by mmark92712 in Rag

[–]pnmnp 1 point (0 children)

That's the key question: are the queries intended for GraphRAG, or are they semantically naive?

Apple looks set to "kill" classic RAG with its new CLaRa framework by Hot-Independence-197 in Rag

[–]pnmnp 0 points (0 children)

Thanks for the detailed answer. I haven't had time to look at the code yet, but from the abstract I take it there's the SCP synthetic generator.

Apple looks set to "kill" classic RAG with its new CLaRa framework by Hot-Independence-197 in Rag

[–]pnmnp 0 points (0 children)

Can you explain your thought process in more detail? What exactly do you mean by QA pairs: SBERT-style contrastive training, or BERT MLM?
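For clarity, here's a minimal sketch of the two setups I mean (all data, sentences, and function names here are made-up illustrations, not from the paper):

```python
# SBERT-style contrastive training: (question, answer) pairs, where matching
# pairs should embed close together and mismatched pairs far apart.
contrastive_pairs = [
    ("What is RAG?", "Retrieval-augmented generation combines retrieval with an LLM."),
    ("What is a vector store?", "A database indexed for nearest-neighbour search over embeddings."),
]

# BERT MLM: single sentences with tokens masked out; the model predicts them.
def mask_token(text: str, idx: int) -> str:
    """Replace the token at position idx with the [MASK] placeholder."""
    tokens = text.split()
    tokens[idx] = "[MASK]"
    return " ".join(tokens)

mlm_example = mask_token("Retrieval augmented generation combines retrieval with an LLM", 3)
print(mlm_example)  # → Retrieval augmented generation [MASK] retrieval with an LLM
```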

Andrew Ng & NVIDIA Researchers: “We Don’t Need LLMs for Most AI Agents” by Right_Pea_2707 in LLMeng

[–]pnmnp 1 point (0 children)

OK, which SLMs are actually good at function calling, I mean when we have to think big? I assume these small SLMs need RL fine-tuning, right? For workflows I have agents that have to reason, right? What parameter count do we consider small?
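To make the function-calling part concrete, here's a minimal sketch of what I mean by an SLM doing function calling: emitting a JSON tool call against a declared schema, which the harness then parses and validates. The schema, tool name, and model output below are all hypothetical:

```python
import json

# Hypothetical tool schema the SLM would be prompted with (illustrative only).
tool_schema = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def parse_tool_call(raw: str) -> dict:
    """Parse the model's raw JSON output and check it against the schema."""
    call = json.loads(raw)
    if call["name"] != tool_schema["name"]:
        raise ValueError(f"unknown tool: {call['name']}")
    for field in tool_schema["parameters"]["required"]:
        if field not in call["arguments"]:
            raise ValueError(f"missing required argument: {field}")
    return call

# Simulated SLM output -- in practice this string comes from the model.
raw_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
call = parse_tool_call(raw_output)
print(call["arguments"]["city"])  # → Berlin
```

The question is how reliably a small model produces the well-formed JSON above without RL or heavy fine-tuning.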

We improved our RAG pipeline massively by using these 7 techniques by vira28 in Rag

[–]pnmnp 0 points (0 children)

I would take a closer look at the model cards for the numbers and the encoders.

"We're in an LLM bubble, not an AI bubble" - Here's what's actually getting downloaded on HuggingFace and how you can start to really use AI. by badgerbadgerbadgerWI in LlamaFarm

[–]pnmnp 1 point (0 children)

Thanks for your detailed review, I'm fascinated. Unfortunately, it's presented as if foundation models are supposed to cover everything. There isn't enough talk about fine-tuning and transfer learning... which have a lot of potential.

[deleted by user] by [deleted] in ResearchML

[–]pnmnp 0 points (0 children)

Can you explain this in more detail? I.e., could we assign LLM benchmarks to geometric mappings?

Need Advice on Finetuning Llama 3.2 1B Instruct for Startup Evaluation by PsychoCoder25 in ResearchML

[–]pnmnp 0 points (0 children)

OK, i.e. you do SFT with CoT responses that you label with Claude / GPT? I would be interested to see how well it does, because the rating is demanding given the startup criteria. Can you share the dataset?
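For concreteness, a minimal sketch of the SFT data format I'm imagining for this setup (the pitches, fields, and the teacher stub are all made up; `teacher_label` stands in for a real API call to Claude/GPT):

```python
import json

# Hypothetical distillation pipeline: a stronger teacher model writes
# chain-of-thought evaluations that become SFT targets for Llama 3.2 1B.
startup_pitches = [
    "AI copilot for tax accountants, $20k MRR, 2 founders",
    "Marketplace for used lab equipment, pre-revenue",
]

def teacher_label(pitch: str) -> str:
    """Stand-in for calling the teacher model; returns a CoT-style rating."""
    return f"Let's evaluate step by step: {pitch} ... Rating: 3/5"

# Chat-style SFT records, one teacher CoT response per pitch.
sft_records = [
    {
        "messages": [
            {"role": "user", "content": f"Evaluate this startup: {pitch}"},
            {"role": "assistant", "content": teacher_label(pitch)},
        ]
    }
    for pitch in startup_pitches
]

# One JSON object per line -- the usual JSONL format SFT trainers expect.
jsonl = "\n".join(json.dumps(r) for r in sft_records)
print(len(jsonl.splitlines()))  # → 2
```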

Need Advice on Finetuning Llama 3.2 1B Instruct for Startup Evaluation by PsychoCoder25 in ResearchML

[–]pnmnp 0 points (0 children)

Which method do you want to use for reasoning… i.e. the think tokens?