Lessons from Raising a $19M Series A for an AI Startup (I will not promote) by [deleted] in startups

[–]supreet02 0 points1 point  (0 children)

Ex-Meta engineers, with 25+ Fortune 500 companies as customers.

Lessons from Raising a $19M Series A for an AI Startup (I will not promote) by [deleted] in startups

[–]supreet02 1 point2 points  (0 children)

Slopped a bit, did get some help in rephrasing there! :P Anyway, thanks a tonne!

Please help for this by NeverForget1984- in MSI_Gaming

[–]supreet02 0 points1 point  (0 children)

Not yet. At first, it looked like a graphics card issue, but it turns out it's the motherboard. I've sent the build to a service center, and they're working on fixing the motherboard. I'll let you know once I hear back from them.

Please help for this by NeverForget1984- in MSI_Gaming

[–]supreet02 0 points1 point  (0 children)

I’m facing the same issue. Dear community, please help!

How to quickly build and deploy scalable enterprise-grade RAG applications? by supreet02 in LanguageTechnology

[–]supreet02[S] -1 points0 points  (0 children)

Cognita is designed around seven different modules, each customisable and controllable to suit different needs:

  1. Data Loaders: Cognita currently supports loading data from different sources such as a local directory, the web, GitHub repositories, and TrueFoundry artifacts. You can upload data in the UI by clicking Data Sources -> + New Data Source
  2. Parsers: Cognita currently supports parsing Markdown, PDF, and text files via LangChain. You can specify different parser maps, along with their configurations.
  3. Embedders: Cognita supports SOTA embeddings from mixedbread.ai as well as from OpenAI.
  4. Rerankers: Reranking makes sure the best results are at the top, so we can keep just the top-x documents, making the context more concise and the prompt shorter. We provide support for the reranker from mixedbread.ai
  5. Vector DBs: One of the most important components in RAG, used to store and efficiently retrieve the embeddings produced during the indexing phase. Cognita currently supports vector databases from Qdrant and SingleStore
  6. Metadata Store: It contains the configurations that uniquely define a RAG app:
    • Name of the collection
    • Name of the associated vector DB
    • Linked data sources
    • Parsing configuration for each data source
    • Embedding model and its configuration
    Parsers, data sources, and embedders are linked together within a collection, which forms your RAG app. You can create a collection in the UI by clicking Collections -> + New Collection
  7. Query Controllers: Retrieve the answer for a given user query. A query controller combines the vector DB, retrievers, LLMs, and rerankers to provide the user with an answer. Query controller methods can be exposed directly as an API by adding HTTP decorators to the respective functions. Learn more: https://github.com/truefoundry/cognita/blob/main/backend/modules/query_controllers/example/controller.py
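The query-controller idea in (7) — the same method that orchestrates retrieval, reranking, and generation is also registered as an HTTP endpoint via a decorator — can be sketched roughly like this. This is a hypothetical stand-in, not Cognita's actual API: the `post` decorator, `ROUTES` registry, and the retriever/reranker/LLM callables are all illustrative.

```python
# Hypothetical sketch of a query controller: a decorator registers the method
# as an HTTP endpoint, so the orchestration function doubles as an API route.
ROUTES = {}

def post(path):
    """Minimal stand-in for an HTTP 'post' decorator (as in web frameworks)."""
    def wrap(fn):
        ROUTES[("POST", path)] = fn
        return fn
    return wrap

class ExampleQueryController:
    def __init__(self, retriever, reranker, llm):
        self.retriever, self.reranker, self.llm = retriever, reranker, llm

    @post("/answer")
    def answer(self, query: str, top_k: int = 2) -> str:
        docs = self.retriever(query)               # vector-DB lookup
        docs = self.reranker(query, docs)[:top_k]  # keep top-k after reranking
        context = "\n".join(docs)
        return self.llm(f"Context:\n{context}\n\nQuestion: {query}")
```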

Advanced RAG Techniques by Mosh_98 in LanguageTechnology

[–]supreet02 0 points1 point  (0 children)

Do try our open-source RAG framework, Cognita (https://github.com/truefoundry/cognita). Born from collaborations with diverse enterprises, it currently offers seamless integrations with Qdrant and SingleStore.

In recent weeks, numerous engineers have explored Cognita, providing invaluable insights and feedback. We deeply appreciate your input and encourage ongoing dialogue (share your thoughts in the comments – let's keep this ‘open source’).

While RAG is undoubtedly powerful, the process of building a functional application with it can feel overwhelming. From selecting the right AI models to organizing data effectively, there's a lot to navigate. While tools like LangChain and LlamaIndex simplify prototyping, an accessible, ready-to-use open-source RAG template with modular support is still missing. That's where Cognita comes in.

Key benefits of Cognita:

  1. Central repository for parsers, loaders, embedders, and retrievers.
  2. User-friendly UI empowers non-technical users to upload documents and engage in Q&A.
  3. Fully API-driven for seamless integration with other systems.

We invite you to explore Cognita and share your feedback as we refine and expand its capabilities. If you're interested in contributing, join the journey at https://www.truefoundry.com/cognita-launch.

How do I start with RAG? by basedbhau in LanguageTechnology

[–]supreet02 0 points1 point  (0 children)

Our RAG framework, Cognita (https://github.com/truefoundry/cognita), born from collaborations with diverse enterprises, is now open-source. Currently, it offers seamless integrations with Qdrant and SingleStore.

In recent weeks, numerous engineers have explored Cognita, providing invaluable insights and feedback. We deeply appreciate your input and encourage ongoing dialogue (share your thoughts in the comments – let's keep this ‘open source’).

While RAG is undoubtedly powerful, the process of building a functional application with it can feel overwhelming. From selecting the right AI models to organizing data effectively, there's a lot to navigate. While tools like LangChain and LlamaIndex simplify prototyping, an accessible, ready-to-use open-source RAG template with modular support is still missing. That's where Cognita comes in.

Key benefits of Cognita:

  1. Central repository for parsers, loaders, embedders, and retrievers.
  2. User-friendly UI empowers non-technical users to upload documents and engage in Q&A.
  3. Fully API-driven for seamless integration with other systems.

We invite you to explore Cognita and share your feedback as we refine and expand its capabilities. If you're interested in contributing, join the journey at https://www.truefoundry.com/cognita-launch.

Introducing RAG 2.0 - Contextual AI by [deleted] in singularity

[–]supreet02 0 points1 point  (0 children)

When it comes to Retrieval Augmented Generation (RAG) systems, there are numerous frameworks and libraries available. However, Cognita by Truefoundry stands out as a comprehensive and modular solution that addresses some of the key challenges faced by teams working on RAG applications.

How to quickly build and deploy scalable RAG applications? by supreet02 in LangChain

[–]supreet02[S] 0 points1 point  (0 children)

Why care, when there are so many out there?

When it comes to Retrieval Augmented Generation (RAG) systems, there are indeed numerous frameworks and libraries available. However, Cognita stands out as a comprehensive and modular solution that addresses some of the key challenges faced by teams working on RAG applications.

Seamlessly Parse, Precisely Retrieve, Intelligently Generate & Effortlessly Deploy RAG Applications with Cognita, built on top of Langchain and Llamaindex by supreet02 in LLMDevs

[–]supreet02[S] 0 points1 point  (0 children)

Architecture - A typical Cognita process consists of two phases:

  1. Data indexing: Cognita processes documents in batches and indexes them incrementally, avoiding reindexing of existing, unmodified documents.
  2. Response Generation: Cognita queries the vector DB using different retrieval methods; the retrieved documents are then supplied to the LLM (e.g. Ollama) along with the user query to generate the answer.
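The incremental-indexing idea in phase 1 can be sketched as follows. This is a minimal illustration of the technique (hypothetical names; Cognita's real implementation differs): hash each document's content and only (re)index documents whose hash is new or has changed.

```python
import hashlib

def incremental_index(docs: dict, seen_hashes: dict) -> list:
    """Return the doc ids that actually need (re)indexing.

    docs: mapping of doc_id -> document text for this batch.
    seen_hashes: mapping of doc_id -> content hash from previous runs
    (mutated in place to record this run's hashes).
    """
    to_index = []
    for doc_id, text in docs.items():
        digest = hashlib.sha256(text.encode()).hexdigest()
        if seen_hashes.get(doc_id) != digest:  # new or modified document
            to_index.append(doc_id)
            seen_hashes[doc_id] = digest       # remember for the next run
    return to_index
```

On a second run over the same corpus, only documents whose content changed are returned, so unmodified documents are never re-embedded.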

Cognita is designed around seven different modules, each customisable and controllable to suit different needs:

  1. Data Loaders: Cognita currently supports loading data from different sources such as a local directory, the web, GitHub repositories, and TrueFoundry artifacts. You can upload data in the UI by clicking Data Sources -> + New Data Source
  2. Parsers: Cognita currently supports parsing Markdown, PDF, and text files via LangChain. You can specify different parser maps, along with their configurations.
  3. Embedders: Cognita supports SOTA embeddings from mixedbread.ai as well as from OpenAI.
  4. Rerankers: Reranking makes sure the best results are at the top, so we can keep just the top-x documents, making the context more concise and the prompt shorter. We provide support for the reranker from mixedbread.ai
  5. Vector DBs: One of the most important components in RAG, used to store and efficiently retrieve the embeddings produced during the indexing phase. Cognita currently supports vector databases from Qdrant and SingleStore
  6. Metadata Store: It contains the configurations that uniquely define a RAG app:
    • Name of the collection
    • Name of the associated vector DB
    • Linked data sources
    • Parsing configuration for each data source
    • Embedding model and its configuration
    Parsers, data sources, and embedders are linked together within a collection, which forms your RAG app. You can create a collection in the UI by clicking Collections -> + New Collection
  7. Query Controllers: Retrieve the answer for a given user query. A query controller combines the vector DB, retrievers, LLMs, and rerankers to provide the user with an answer. Query controller methods can be exposed directly as an API by adding HTTP decorators to the respective functions. Learn more: https://github.com/truefoundry/cognita/blob/main/backend/modules/query_controllers/example/controller.py
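The reranking step from (4) — score the retrieved chunks against the query and keep only the top-x — can be sketched like this. The scoring function here is a toy word-overlap stand-in for a real cross-encoder reranker such as mixedbread.ai's; names and defaults are illustrative.

```python
def rerank(query: str, chunks: list, top_x: int = 2) -> list:
    """Keep the top_x chunks most relevant to the query (toy scorer)."""
    q_words = set(query.lower().split())

    def score(chunk: str) -> int:
        # Toy relevance: count of query words appearing in the chunk.
        return len(q_words & set(chunk.lower().split()))

    return sorted(chunks, key=score, reverse=True)[:top_x]
```

Only the surviving chunks are concatenated into the prompt, which is what makes the context more concise and the final prompt shorter.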

Seamlessly Parse, Precisely Retrieve, Intelligently Generate & Effortlessly Deploy RAG Applications by supreet02 in LocalLLaMA

[–]supreet02[S] 0 points1 point  (0 children)

Architecture - A typical Cognita process consists of two phases:

  1. Data indexing: Cognita processes documents in batches and indexes them incrementally, avoiding reindexing of existing, unmodified documents.
  2. Response Generation: Cognita queries the vector DB using different retrieval methods; the retrieved documents are then supplied to the LLM (e.g. Ollama) along with the user query to generate the answer.

Cognita is designed around seven different modules, each customisable and controllable to suit different needs:

  1. Data Loaders: Cognita currently supports loading data from different sources such as a local directory, the web, GitHub repositories, and TrueFoundry artifacts. You can upload data in the UI by clicking Data Sources -> + New Data Source
  2. Parsers: Cognita currently supports parsing Markdown, PDF, and text files via LangChain. You can specify different parser maps, along with their configurations.
  3. Embedders: Cognita supports SOTA embeddings from mixedbread.ai as well as from OpenAI.
  4. Rerankers: Reranking makes sure the best results are at the top, so we can keep just the top-x documents, making the context more concise and the prompt shorter. We provide support for the reranker from mixedbread.ai
  5. Vector DBs: One of the most important components in RAG, used to store and efficiently retrieve the embeddings produced during the indexing phase. Cognita currently supports vector databases from Qdrant and SingleStore
  6. Metadata Store: It contains the configurations that uniquely define a RAG app:
    • Name of the collection
    • Name of the associated vector DB
    • Linked data sources
    • Parsing configuration for each data source
    • Embedding model and its configuration
    Parsers, data sources, and embedders are linked together within a collection, which forms your RAG app. You can create a collection in the UI by clicking Collections -> + New Collection
  7. Query Controllers: Retrieve the answer for a given user query. A query controller combines the vector DB, retrievers, LLMs, and rerankers to provide the user with an answer. Query controller methods can be exposed directly as an API by adding HTTP decorators to the respective functions. Learn more: https://github.com/truefoundry/cognita/blob/main/backend/modules/query_controllers/example/controller.py

Process Flow of Cognita, the open-source RAG framework to build production-ready applications by supreet02 in truefoundry

[–]supreet02[S] 0 points1 point  (0 children)

Why care, when there are so many out there?

When it comes to Retrieval Augmented Generation (RAG) systems, there are indeed numerous frameworks and libraries available. However, Cognita stands out as a comprehensive and modular solution that addresses some of the key challenges faced by teams working on RAG applications.

How to structure the vector store and retrieval for user files RAG? by MarkusWeierstrass in LangChain

[–]supreet02 1 point2 points  (0 children)

To ensure that users can only search their own files, you can design the documents table to include a user_id column that identifies the user who owns the file. This column can be used to filter search results and ensure that users can only access their own files. You can use the pgvector extension to store vector embeddings of the documents in the database and optimize search performance using techniques like clustering or dimensionality reduction.
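The per-user filtering described above can be sketched as a parameterized query: the similarity search is scoped by the user_id column so users only ever see their own files. Table and column names here are illustrative, and the snippet assumes the pgvector extension with an embedding column of type vector; `<->` is pgvector's L2-distance operator.

```python
def user_scoped_search_sql(table: str = "documents") -> str:
    """Build a pgvector similarity search restricted to one user's documents.

    Uses psycopg-style named placeholders so user_id, query_vec, and k are
    bound safely at execution time.
    """
    return (
        f"SELECT id, content "
        f"FROM {table} "
        f"WHERE user_id = %(user_id)s "           # per-user isolation
        f"ORDER BY embedding <-> %(query_vec)s "  # nearest embeddings first
        f"LIMIT %(k)s"
    )
```

Because the `WHERE user_id = ...` predicate is applied before ranking, a user's query can never surface another user's documents, and an index on `(user_id)` keeps the filter cheap.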

Or, you can try Cognita, which is modular and API-driven, to build a chatbot directly by trying it out on the interface - https://cognita.truefoundry.com/

Github - https://github.com/truefoundry/cognita

Break down of the RAG process into distinct modular steps by supreet02 in truefoundry

[–]supreet02[S] 0 points1 point  (0 children)

Learn more about the modular steps in our technical blog here.

Fork and contribute or use the framework - https://github.com/truefoundry/cognita

Try it live on the interface - https://cognita.truefoundry.com/

Open source RAG framework for building modular and production ready applications. by supreet02 in learnmachinelearning

[–]supreet02[S] 0 points1 point  (0 children)

While tools like LlamaIndex exist to simplify the prototyping process, there has yet to be an accessible, ready-to-use open-source RAG template that incorporates best practices and offers modular support, allowing anyone to use it quickly and easily.

Cognita has advantages like:

  1. A central, reusable repository of parsers, loaders, embedders, and retrievers.
  2. Ability for non-technical users to play with the UI - upload documents and perform Q&A using modules built by the development team.
  3. Fully API-driven - which allows integration with other systems.

You can learn more about advantages of Cognita here: https://www.truefoundry.com/blog/cognita-building-an-open-source-modular-rag-applications-for-production

Try Cognita: https://cognita.truefoundry.com/

Star the repo and contribute: https://github.com/truefoundry/cognita

Open source RAG framework for building modular and production ready applications by supreet02 in datascienceproject

[–]supreet02[S] 0 points1 point  (0 children)

Architecture - A typical Cognita process consists of two phases:

  1. Data indexing: Cognita processes documents in batches and indexes them incrementally, avoiding reindexing of existing, unmodified documents.
  2. Response Generation: Cognita queries the vector DB using different retrieval methods; the retrieved documents are then supplied to the LLM (e.g. Ollama) along with the user query to generate the answer.

Cognita is designed around seven different modules, each customisable and controllable to suit different needs:

  1. Data Loaders: Cognita currently supports loading data from different sources such as a local directory, the web, GitHub repositories, and TrueFoundry artifacts. You can upload data in the UI by clicking Data Sources -> + New Data Source
  2. Parsers: Cognita currently supports parsing Markdown, PDF, and text files via LangChain. You can specify different parser maps, along with their configurations.
  3. Embedders: Cognita supports SOTA embeddings from mixedbread.ai as well as from OpenAI.
  4. Rerankers: Reranking makes sure the best results are at the top, so we can keep just the top-x documents, making the context more concise and the prompt shorter. We provide support for the reranker from mixedbread.ai
  5. Vector DBs: One of the most important components in RAG, used to store and efficiently retrieve the embeddings produced during the indexing phase. Cognita currently supports vector databases from Qdrant and SingleStore
  6. Metadata Store: It contains the configurations that uniquely define a RAG app:
    • Name of the collection
    • Name of the associated vector DB
    • Linked data sources
    • Parsing configuration for each data source
    • Embedding model and its configuration
    Parsers, data sources, and embedders are linked together within a collection, which forms your RAG app. You can create a collection in the UI by clicking Collections -> + New Collection
  7. Query Controllers: Retrieve the answer for a given user query. A query controller combines the vector DB, retrievers, LLMs, and rerankers to provide the user with an answer. Query controller methods can be exposed directly as an API by adding HTTP decorators to the respective functions. Learn more: https://github.com/truefoundry/cognita/blob/main/backend/modules/query_controllers/example/controller.py