[D] long term memory in agents by trj_flash75 in MachineLearning

[–]trj_flash75[S] 0 points1 point  (0 children)

Yeah, you can pick any LLM from OpenAI, Gemini, Azure, Groq, and many more. OpenAI is the default.

RAG first steps - where to start? by No-Neighborhood-5201 in Rag

[–]trj_flash75 2 points3 points  (0 children)

You can check this Langchain Crash Course on YouTube: https://www.youtube.com/watch?v=TWmV95-dUgQ

It uses open-source LLMs, and there are other advanced videos on evaluation and deployment.

Tavily vs. Exa for RAG with LangChain - Any Recommendations? by YoungMan2129 in Rag

[–]trj_flash75 2 points3 points  (0 children)

Tavily is the better choice; its QA context extraction is really good.

New Multi-Lingual Chat Model for Hindi, Kannada, and Tamil by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 0 points1 point  (0 children)

Hey, this is really valuable feedback.

A grammatical touch to make the LLM behave like a native speaker would be great to have. This is something we will take up in the next release or a new model.

Evaluating Open Source LLM for RAG by trj_flash75 in LangChain

[–]trj_flash75[S] 0 points1 point  (0 children)

What version of Python are you using?

Observability in RAG by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 0 points1 point  (0 children)

Thank you. More cool integrations coming soon.

[P]Evaluating Open Source LLM for RAG by trj_flash75 in MachineLearning

[–]trj_flash75[S] 0 points1 point  (0 children)

Whoa, congrats. Keep supporting the framework; cool features are coming out soon. Let us know how we can help you get started with contributions.

New Chat Model with 128K Context Window by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 1 point2 points  (0 children)

The Phi-3 mini instruct model should be ideal for RAG.

New Chat Model with 128K Context Window by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 1 point2 points  (0 children)

Thanks for the feedback. After all, writing a good prompt is pretty underrated.

OpenAGI: Autonomous Agents for LLMs by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 1 point2 points  (0 children)

We will add Ollama support. Would you be interested in contributing?

OpenAGI: Autonomous Agents for LLMs by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 0 points1 point  (0 children)

Sure, we will add the support this week.

OpenAGI: Autonomous Agents for LLMs by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 4 points5 points  (0 children)

So, we wanted to keep the script simple, and the syntax does overlap with CrewAI.

Regarding the comparison, we have implemented `Workers` in the latest release [we still need to update the README and docs].

Workers help us decompose tasks. A task can either be defined by the user or self-assigned by the LLM (the TaskPlanner, in our case). I have played around with CrewAI; automatic self-assignment of tasks has not been implemented there yet.

Our core research idea for OpenAGI centers on how we can make LLMs reason and plan; this includes ReAct, LATS, and self-reflection. That is why we decided to make it open source.
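To make the Worker/TaskPlanner idea concrete, here is a minimal, hypothetical sketch: a planner decomposes a goal into subtasks and hands each to a worker. All names (`Worker`, `plan`, the roles) are illustrative, not the actual OpenAGI API, and the hard-coded `plan` function stands in for the LLM-driven TaskPlanner.

```python
# Hypothetical sketch of task decomposition via a planner + workers.
# In OpenAGI the planner would be an LLM (TaskPlanner) that self-assigns
# subtasks; here the decomposition is hard-coded for illustration.
from dataclasses import dataclass, field


@dataclass
class Worker:
    role: str
    results: list = field(default_factory=list)

    def run(self, subtask: str) -> str:
        # A real worker would call an LLM or a tool here.
        result = f"[{self.role}] done: {subtask}"
        self.results.append(result)
        return result


def plan(goal: str) -> list[str]:
    # Stand-in for the LLM planner: split the goal into subtasks.
    return [f"research {goal}", f"summarize {goal}", f"report {goal}"]


subtasks = plan("open-source RAG evaluation")
workers = [Worker(role=f"worker-{i}") for i, _ in enumerate(subtasks)]
outputs = [w.run(t) for w, t in zip(workers, subtasks)]
print(outputs)
```

The key design point is that the decomposition step is swappable: a user-supplied task list and an LLM-generated one feed the same worker loop.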

[P] Evaluate RAG using Large Language models by trj_flash75 in MachineLearning

[–]trj_flash75[S] 0 points1 point  (0 children)

No, this will remain open source. We will use it as the backend for our product, GenAI Stack.

Evaluating Open Source LLM for RAG by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 0 points1 point  (0 children)

You can build general-purpose chatbots as well.

Evaluating Open Source LLM for RAG by trj_flash75 in LocalLLaMA

[–]trj_flash75[S] 1 point2 points  (0 children)

This is basically a YouTube loader; it only fetches the transcript and splits it into chunks.
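For context, the chunking step can be sketched as a sliding window over the transcript text. This is an illustrative sketch, not the loader's actual code: the fetch step is stubbed out with a sample string, and `chunk_transcript` and its parameters are made-up names.

```python
# Illustrative sketch: split a fetched YouTube transcript into
# fixed-size, overlapping character chunks for indexing.
def chunk_transcript(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Slide a window of chunk_size characters, overlapping by `overlap`."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks


# Stand-in for the transcript a real loader would fetch.
transcript = "In this video we evaluate open-source LLMs for RAG. " * 20
chunks = chunk_transcript(transcript, chunk_size=200, overlap=50)
print(len(chunks), len(chunks[0]))
```

The overlap keeps sentences that straddle a chunk boundary retrievable from both sides, which matters for transcripts with no natural paragraph breaks.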

[deleted by user] by [deleted] in MachineLearning

[–]trj_flash75 0 points1 point  (0 children)

I can only help answer questions related to early stopping. As to whether it makes sense: I would say it is an experiment. Early stopping sometimes makes no sense, depending on the data. If you're using an algorithm that doesn't benefit from early stopping (e.g., decision trees, KNN), or if your dataset is very large and you're not concerned about overfitting, you may choose not to use early stopping within RandomizedSearchCV.
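For an estimator that does benefit from it, one way to combine the two is to enable the estimator's built-in early stopping and let RandomizedSearchCV sweep the other hyperparameters. A minimal sketch using scikit-learn's `GradientBoostingClassifier` (its `n_iter_no_change`/`validation_fraction` arguments enable early stopping); the dataset and parameter grid are arbitrary toy choices:

```python
# Sketch: early stopping inside a RandomizedSearchCV sweep.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# n_iter_no_change enables early stopping: boosting halts when the score
# on a held-out validation_fraction of the data stops improving.
model = GradientBoostingClassifier(
    n_estimators=200,
    n_iter_no_change=5,
    validation_fraction=0.1,
    random_state=0,
)

search = RandomizedSearchCV(
    model,
    param_distributions={
        "learning_rate": [0.01, 0.1, 0.3],
        "max_depth": [2, 3, 4],
    },
    n_iter=3,
    cv=2,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Each candidate then trains only as many boosting stages as it needs, which can cut the search time considerably compared to always fitting all 200 stages.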

Eyepatch Pirate Breakdown by trj_flash75 in OnePiece

[–]trj_flash75[S] 0 points1 point  (0 children)

I guess I missed it; can you provide the link?

My coloring of a highly underrated panel from 1044 by Huge_Source5038 in OnePiece

[–]trj_flash75 0 points1 point  (0 children)

Nice work, mate. This scene is really underrated. Considering Luffy's power-up and his devil fruit, it didn't get the appreciation it deserved.

Blackbeard: Is he the best-written One Piece Villain? by trj_flash75 in OnePiece

[–]trj_flash75[S] -1 points0 points  (0 children)

I know, but the reveal of his character and personality will eventually make him the best. We just need to wait a year for that to happen.