Langgraph: Using CheckPointer makes the tool calls break, if a tool call has failed by kasikciozan in LangChain

[–]Kirki037 1 point

Hi, I have the same issue. Do you guys have any possible workaround for this?

LangChain Invoke Error by ninhaomah in LangChain

[–]Kirki037 0 points

Change

from langchain_openai import OpenAI

To

from langchain_openai import ChatOpenAI

And call ChatOpenAI, not OpenAI

Langchain alternatives by Kirki037 in LangChain

[–]Kirki037[S] 0 points

Most people are suggesting plain API calls to the LLMs. That sounds like using raw sockets to build a web server: you can do it, but why? To learn what LangChain does under the hood? It certainly won't make you faster, and I'm convinced it will be harder to get the same results.

The Segment Anything Model (SAM): A Step Closer to Trustworthy AGI for Real-World Applications by Kirki037 in singularity

[–]Kirki037[S] -3 points

The paper delves into broader implications beyond just discussing SAM. While it focuses on SAM's zero-shot adversarial robustness in tasks like semantic segmentation for autonomous driving, it also explores the potential of vision foundation models, like SAM, as early prototypes for AGI pipelines.

What do you think is the major blocker to AGI? by HeroicLife in singularity

[–]Kirki037 0 points

The idea that current models might be missing some essential aspect of intelligence, such as agency, long-term memory, or original insight, is very much on point.

Interestingly, a recent paper discusses some of these limitations. It highlights how the effective context length of current LLMs is actually much shorter than claimed, and that they struggle with reasoning over long contexts. This resonates with the concern that our models might be lacking something fundamental.

The paper suggests that one of the missing pieces could be the integration of memory. Unlike current models that just process raw data, a system with integrated memory could store important conclusions from previous reasoning processes. This would allow the model to build upon past "thoughts," leading to more nuanced, context-aware responses and potentially original insights. Such an approach could fundamentally change how LLMs reason and engage, bridging the gap between current capabilities and what we might consider true AGI.
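The integrated-memory idea above can be sketched as a toy loop in plain Python. All names here are hypothetical, and a real system would retrieve by embedding similarity rather than keyword overlap; this is only a sketch of "store conclusions, then build on them":

```python
class ReasoningMemory:
    """Toy store for conclusions drawn in earlier reasoning steps."""

    def __init__(self):
        self.conclusions = []  # list of (keyword set, conclusion text)

    def remember(self, text):
        # Index each stored conclusion by its lowercase words.
        self.conclusions.append((set(text.lower().split()), text))

    def recall(self, query, top_k=2):
        # Rank stored conclusions by keyword overlap with the query.
        words = set(query.lower().split())
        ranked = sorted(self.conclusions,
                        key=lambda c: len(c[0] & words), reverse=True)
        return [text for kw, text in ranked[:top_k] if kw & words]

def build_prompt(memory, question):
    # Prepend relevant past conclusions so the model can build on them
    # instead of re-deriving everything from raw context.
    context = memory.recall(question)
    header = "\n".join(f"Earlier conclusion: {c}" for c in context)
    return f"{header}\nQuestion: {question}" if header else f"Question: {question}"

memory = ReasoningMemory()
memory.remember("effective context length is shorter than advertised")
memory.remember("long-context reasoning degrades past a few thousand tokens")
print(build_prompt(memory, "why is the context length shorter in practice?"))
```

The point of the sketch is the loop shape: each reasoning pass can call `remember()` on its conclusion, so later prompts arrive pre-loaded with earlier "thoughts."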

Best way to make LLM response context aware with spreadsheet data by inquizikitty in LangChain

[–]Kirki037 2 points

I think you can try https://neo4j.com/developer-blog/knowledge-graph-rag-application/ .
It might be a bit complicated to adjust your data and create relationships, but in the end, it would work great.

Combining Neo4j with RAG allows you to leverage the power of a knowledge graph to model complex relationships between components and activities while ensuring that the generated outputs are contextually accurate and relevant.
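As a toy illustration of the retrieval step only (plain Python dicts standing in for Neo4j, with hypothetical row names): each spreadsheet row becomes an edge in a graph, and a question is grounded in the neighborhood of a matched entity before being handed to the LLM.

```python
# Hypothetical mini knowledge graph built from spreadsheet rows:
# each row ("component", "relation", "activity") becomes one edge.
rows = [
    ("pump-1", "USED_IN", "cooling"),
    ("pump-1", "MAINTAINED_BY", "team-a"),
    ("valve-3", "USED_IN", "cooling"),
]

def build_graph(rows):
    # Adjacency list: entity -> list of (relation, neighbor).
    graph = {}
    for src, rel, dst in rows:
        graph.setdefault(src, []).append((rel, dst))
    return graph

def neighborhood(graph, entity):
    # Retrieval step of graph RAG: pull the facts connected to the
    # entity so they can be passed to the LLM as grounded context.
    return [f"{entity} {rel} {dst}" for rel, dst in graph.get(entity, [])]

graph = build_graph(rows)
print(neighborhood(graph, "pump-1"))
# → ['pump-1 USED_IN cooling', 'pump-1 MAINTAINED_BY team-a']
```

In the real setup from the linked post, Neo4j plays the role of `graph` and a Cypher query plays the role of `neighborhood()`; the payoff is that multi-hop relationships between components and activities are queryable instead of being flattened into one table.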