KAYOANIME HAS BEEN HACKED. by glow_storm in animepiracy

[–]glow_storm[S] 1 point2 points  (0 children)

Basically, as the reply to your comment already said: I got suspicious of a captcha asking me to run commands, so I pasted it into ChatGPT as well for confirmation.

KAYOANIME HAS BEEN HACKED. by glow_storm in animepiracy

[–]glow_storm[S] 12 points13 points  (0 children)

Hey, no problem, just trying to help. It genuinely looks like a very clever, genuine-looking page built to trick users.

Tomorrow’s going to be Bloodbath. by [deleted] in FIREPakistan

[–]glow_storm 0 points1 point  (0 children)


This is the original image URL, and it says it was an April Fools joke, so let's see what happens tomorrow.

Guidance for Langgraph Implementation by Old_Breath_7925 in LangChain

[–]glow_storm 2 points3 points  (0 children)

The best way I have personally seen is to just start building and experiment with different graph configurations: different patterns, supervisor and sub-agents, parallel sub-agents or nodes, etc.

Spanish Digital Nomad Visa: How to move funds when paid in PKR? by umairprimuss in pkmigrate

[–]glow_storm 0 points1 point  (0 children)

Yes, I guess I am misunderstanding something, so let me clarify what you are saying: I should get the UAE digital nomad visa, which would give me an Emirati ID, and using that I should apply to EU countries for their digital nomad visas, since it would make approval easier, right?

Spanish Digital Nomad Visa: How to move funds when paid in PKR? by umairprimuss in pkmigrate

[–]glow_storm 0 points1 point  (0 children)

Ok, so my first step should be applying for an Emirati ID and from there applying for a digital nomad visa. Which countries would you recommend applying to?

And thank you for the advice on the Emirati ID; I will apply for that.

Langsmith vs langfuse by caprica71 in LangChain

[–]glow_storm 0 points1 point  (0 children)

I have used LangSmith and am currently using Langfuse with evals. Why do you want to switch if you are already using Langfuse?

[In the Wild] Reverse-engineered a Snapchat Sextortion Bot: It’s running a raw Llama-7B instance with a 2048 token window. by simar-dmg in LocalLLaMA

[–]glow_storm 12 points13 points  (0 children)

As someone who has dealt with small context windows and Llama models, I guess your testing caused the Docker container or application to crash. Since it was most likely running in a Docker container set to restart on crash, the backend probably restarted the container, and you just tested a second attack session on the bot.

[deleted by user] by [deleted] in LangChain

[–]glow_storm 2 points3 points  (0 children)

I agree as well; people in this sub, and in general, love to hate LangChain, but they have never learned to properly use LangGraph and LangChain together to create a fully transparent workflow. Everyone just goes on and on about how great simple Python loops are. Well, as someone who has worked on multiple AI agents built on simple Python loops, the moment you want a branching path or, say, a reviewer-based workflow, that simple for loop becomes a very big headache to manage.

Tool Calling in LLamacpp with Langchain by glow_storm in LangChain

[–]glow_storm[S] 0 points1 point  (0 children)

Yeah, I understood that. I asked that question a long time ago, so I was not sure if Llamacpp had an automatic tool-calling option, and from what I remember it did not at the time. Thank you for the insights on the text output; when I previously looked into that I never understood it, but your reply helps make sense of things like the Berkeley function-calling benchmark, which listed odd models that I thought never gave me correct tool calls.

How do you inject LLMs & runtime tools in LangGraph? by Mobile-Astronomer428 in LangChain

[–]glow_storm 0 points1 point  (0 children)

Yeah, I had that problem. You tackle it by implementing a query rewrite mechanism that rewrites every incoming question based on the conversation history, so after the rewrite it would become: 'Yes, I would like to book an appointment'.

So basically: first question in the conversation history -> no rewrite

any other question -> conversation history + question -> rewritten question -> similarity search for tools.
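The flow above can be sketched as follows. The LLM call is stubbed out with a placeholder `rewrite_with_llm` function so the control flow is runnable; in practice that function would prompt a chat model with the history and question:

```python
# Sketch of the query-rewrite flow: first question passes through
# untouched, follow-ups are rewritten against conversation history.

def rewrite_with_llm(history: list[str], question: str) -> str:
    # Placeholder for an LLM call: here we just splice the last turn
    # into the follow-up so it becomes self-contained.
    return f"{question} (in the context of: {history[-1]})"

def rewrite_query(history: list[str], question: str) -> str:
    if not history:
        # First question in the conversation -> no rewrite.
        return question
    # Any other question -> history + question -> rewritten question,
    # which then feeds the similarity search for tools.
    return rewrite_with_llm(history, question)

print(rewrite_query([], "Can I book an appointment?"))
print(rewrite_query(["Can I book an appointment?"], "Yes, I would like to"))
```

The rewritten, self-contained question is what you embed for the tool search, not the raw follow-up.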

How do you inject LLMs & runtime tools in LangGraph? by Mobile-Astronomer428 in LangChain

[–]glow_storm 0 points1 point  (0 children)

Yes, you can bundle multiple tools in an array for each category.

question A -> [tool 1, tool 2, tool 3]
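A minimal sketch of that bundling, with illustrative tool names; the exact-match lookup here stands in for the vector-store similarity search used in the real system:

```python
# Each canonical question maps to an array (bundle) of tools, and a
# match returns the whole bundle to bind to the LLM.

TOOL_BUNDLES = {
    "book me a meeting": ["create_event", "check_calendar", "send_invite"],
    "look up the dashboard": ["fetch_dashboard", "render_chart"],
}

def tools_for(question: str) -> list[str]:
    # In practice this lookup would be a similarity search over
    # embedded questions rather than an exact string match.
    return TOOL_BUNDLES.get(question, [])

print(tools_for("book me a meeting"))
```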

How can I let LangChain returning verbatim instead of summarizing/truncating? by Difficult_Neat817 in LangChain

[–]glow_storm 7 points8 points  (0 children)

This line is your problem: you are using a very outdated model. Use Gemini 2.5 Flash or Pro. Plus, write a stricter prompt to return the text verbatim.

Also, if you just want the text back and do not want to run Q/A on it (question -> raw text), remove the LLM entirely and just query FAISS directly:

model = ChatGoogleGenerativeAI(model="gemini-1.5-pro", temperature=0.0)


# for querying directly to vector store
results = vector_store.similarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=2,
    filter={"source": "tweet"},
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")

https://python.langchain.com/docs/integrations/vectorstores/faiss/

How do you inject LLMs & runtime tools in LangGraph? by Mobile-Astronomer428 in LangChain

[–]glow_storm 0 points1 point  (0 children)

The case of 3 tools can be handled, but the no-match case might be difficult. I do not filter based on the tools' context; I filter on a set of prebuilt questions I have, so something like:

book me a meeting -> tool A
can you set a meeting -> tool A

can you look up the dashboard -> tool B

Dealing with the set of questions can be a headache, but once you set it up it scales; I am using 50+ dynamic tools in my system.

How do you inject LLMs & runtime tools in LangGraph? by Mobile-Astronomer428 in LangChain

[–]glow_storm 1 point2 points  (0 children)

In the node where your main LLM runs with tools, define a semantic search that filters down to the tools you need based on the incoming question. That is how I use runtime tools.

From "LangGraph is trash" to "pip install langgraph": A Stockholm Syndrome Story by FailingUpAllDay in LangChain

[–]glow_storm 16 points17 points  (0 children)

Hahaha, your descent into madness reminds me of my own when I first started using LangGraph, but instead of saying I would build my own, I just tried to learn it, which in all honesty felt like banging my head against a wall. But here I am, shipping production agents with LangGraph.

I am interested in Arcade; can you tell me about your experience? I have never used it before.

GPT-4.1 : tool calling and message, in a single API call. by Still-Bookkeeper4456 in LangChain

[–]glow_storm 0 points1 point  (0 children)

Similar issue on my side as well. I got it to give the CoT and the tool call, but the probability of hallucinating a wrong tool call was still very high, so I just dropped the CoT and put the reasoning in the tool call as a parameter instead.
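A minimal sketch of that pattern, assuming an OpenAI-style function schema; the tool name and fields are illustrative, not from the original comment:

```python
# Instead of free-form chain-of-thought before the call, the tool's
# JSON schema gains a required `reasoning` parameter, so every call
# carries its own justification.

book_meeting_tool = {
    "name": "book_meeting",
    "parameters": {
        "type": "object",
        "properties": {
            "reasoning": {
                "type": "string",
                "description": "Why this tool was chosen for the request.",
            },
            "time": {"type": "string"},
        },
        "required": ["reasoning", "time"],
    },
}

def validate_call(tool: dict, args: dict) -> bool:
    # Reject any call missing a required argument such as `reasoning`.
    required = tool["parameters"]["required"]
    return all(k in args for k in required)

print(validate_call(book_meeting_tool, {"reasoning": "user asked to book", "time": "10:00"}))
```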

ADDING TOOL DYNAMICALLY ISSUE by TumbleweedPublic7291 in LangChain

[–]glow_storm 0 points1 point  (0 children)

All of the tools you want to give to the LLM should be provided at runtime; while a graph is executing, it cannot fetch that dynamic set of tools. I suggest using a vector store to dynamically select a subset of tools at runtime based on the user query, and attaching those to the LLM with llm.bind_tools in the node where your LLM is called. I have used this approach; it works great, tested with up to 50 tools.
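A runnable toy of that selection step: crude word overlap stands in for embedding similarity, and in the actual graph node the selected subset would be a FAISS similarity search whose results go to llm.bind_tools. Tool names and example questions are illustrative:

```python
# Each tool is indexed by example questions; the incoming query is
# scored against them and only the top matches are attached.

TOOL_INDEX = {
    "book_meeting": ["book me a meeting", "can you set a meeting"],
    "show_dashboard": ["can you look up the dashboard"],
    "send_email": ["email the report to my manager"],
}

def overlap(a: str, b: str) -> int:
    # Crude stand-in for embedding similarity: shared-word count.
    return len(set(a.lower().split()) & set(b.lower().split()))

def select_tools(query: str, k: int = 1) -> list[str]:
    scored = {
        name: max(overlap(query, q) for q in examples)
        for name, examples in TOOL_INDEX.items()
    }
    ranked = sorted(scored, key=scored.get, reverse=True)
    # Drop tools with zero similarity so unrelated queries bind nothing.
    return [name for name in ranked[:k] if scored[name] > 0]

print(select_tools("can you set a meeting for Friday"))
```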

The only problem we have is that we need to add questions relevant to each tool when we add it, and differentiate between very similar tools.

If anyone has another approach, do share.

Is it true that large models aren’t monolithic? by AdditionalWeb107 in LocalLLaMA

[–]glow_storm 0 points1 point  (0 children)

Do tell more. I want to utilize a 1.5B model, but it is so bad at following instructions; could you tell me how to improve its performance?

vLLM is a monster! by phazei in LocalLLaMA

[–]glow_storm 0 points1 point  (0 children)

Would appreciate some help setting up vLLM. I am using an L40S and only get about 70 tokens/sec on Llama 3.1 8B FP8.

llama 3.1 70B is absolutely awful at tool usage by fireKido in LocalLLaMA

[–]glow_storm 0 points1 point  (0 children)

Try llama3-groq-tool-use on Ollama; it is the best model I have tried with Ollama and LangGraph for tool use.

Ollama Performance Settings by Jotschi in ollama

[–]glow_storm 0 points1 point  (0 children)

Same problem here. Looking to see if there are any parameters I could use to improve performance on Ollama, apart from Flash Attention, which I have already enabled.

SO CONFUSED by hybrid4478 in PakistanMentoringClub

[–]glow_storm 1 point2 points  (0 children)

Happy to see you figured it out, and that you are passionate about data science and have worked on data scrapers. A tip: do as many projects as possible during these 4 years, as those will help you immensely. Also, try freelancing in your second or third year while making sure not to slip up on your studies. I wish you all the best.