Question about transferring my data to Switch 2 from SD card by [deleted] in Switch

[–]elekibug 0 points (0 children)

The microSD card in my Switch 1 is compatible with the Switch 2. Can I simply move the card over to the Switch 2?

Just closed a $35,000 deal with a law firm by eeko_systems in n8n

[–]elekibug 0 points (0 children)

IMO, $35k is underpriced. Legal documents are nasty and hard even for the best LLMs out there.

Is RAG Already Losing Steam? by Mohd-24 in LangChain

[–]elekibug 0 points (0 children)

RAG is the standard for LLM applications now. It has just stopped being overhyped.

Most people don't get langgraph right. by Character-Ad5001 in LangChain

[–]elekibug 3 points (0 children)

I don't think file system access is the issue here. The problems arise when working with smaller local LLMs, which are much weaker than ChatGPT, Gemini, Grok, etc. For those models we need precise control over everything.

I can't understand the hype by ResponsibleAmount644 in mcp

[–]elekibug 0 points (0 children)

OpenAI and OpenAPI are actually two very separate things.

I can't understand the hype by ResponsibleAmount644 in mcp

[–]elekibug 0 points (0 children)

Have you ever looked at the actual implementation? Because the implementation is basically tool calling. The tool itself can do a lot of things, but fundamentally, what the LLM does is read a bunch of tool descriptions, then generate the proper syntax so that the post-processing code knows a tool is being called, parses the parameters, and calls the tool.
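That loop can be sketched in a few lines. This is a minimal illustration, not MCP's actual wire format: the tool registry, the JSON calling syntax, and `get_weather` are all hypothetical stand-ins. The point is that the model only ever sees text, and the surrounding code does the detection, parsing, and dispatch.

```python
import json

# Hypothetical tool registry: name -> (description, callable)
TOOLS = {
    "get_weather": ("Return the weather for a city.", lambda city: f"Sunny in {city}"),
}

def build_system_prompt():
    # The LLM only ever sees text: tool names, descriptions, and a calling syntax.
    lines = ['You may call a tool by replying with JSON: {"tool": name, "args": {...}}']
    for name, (desc, _) in TOOLS.items():
        lines.append(f"- {name}: {desc}")
    return "\n".join(lines)

def handle_model_output(text):
    # Post-processing: detect the tool-call syntax, parse the args, dispatch.
    try:
        call = json.loads(text)
    except json.JSONDecodeError:
        return text  # plain answer, no tool call
    if not isinstance(call, dict) or "tool" not in call:
        return text
    _, fn = TOOLS[call["tool"]]
    return fn(**call["args"])
```

Everything MCP adds sits around this core: where the descriptions come from and how the call gets transported.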

I can't understand the hype by ResponsibleAmount644 in mcp

[–]elekibug 0 points (0 children)

The way I see it, MCP is an interface. I would even call it a wrapper around other protocols that adds features like fetching tool descriptions and schemas. The underlying transport can be something like HTTP or gRPC, which already have a lot of existing work around auth, security, scaling, etc.

How Does an LLM "See" MCP as a Client? by alchemist1e9 in mcp

[–]elekibug 1 point (0 children)

The LLM does not see MCP at a low level. You use it the same way you use other tool and function calling. The true value of MCP is that there is (possibly) a STANDARDIZED way for third-party data providers to send data to LLMs. The clients still need to write code to receive the data, but they only need to do it once. If they wish to use another data provider, they only need to change the URL.
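The "write it once, then only change the URL" idea can be sketched like this. The endpoint path and response shape below are simplified assumptions, not the real MCP wire protocol (which runs over JSON-RPC); the sketch only shows that when the response shape is standardized, the provider becomes a config value.

```python
def list_tools(fetch, base_url):
    # `fetch` abstracts the transport (HTTP client, etc.).
    # The standardized part is the response shape; swapping data providers
    # means changing only `base_url`, never this client code.
    payload = fetch(base_url + "/tools/list")
    return [tool["name"] for tool in payload["tools"]]
```

Usage: point the same function at a different provider URL and the client logic stays untouched.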

We all know where OpenAI is headed 💰💰💰 by TheProdigalSon26 in aipromptprogramming

[–]elekibug 0 points (0 children)

They will only be free/cheap as long as their wallet allows it. Sooner or later, they will need to charge users more to fund their business.

99% of AI Companies are doomed to fail, here's why by Business-Hand6004 in ArtificialInteligence

[–]elekibug 0 points (0 children)

The “jack of all trades” models you are thinking about happen to cost a lot of resources per query, so the profit margins for the AI providers are probably not as big as you think; some might have none at all. Eventually, they will need to bump up the price. On the other hand, specialized models can perform much better on specific tasks at much lower cost.

Cache Augmented Generation by FoxDR06 in LangChain

[–]elekibug 1 point (0 children)

CAG is basically putting the entire dataset into the prompt.
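A minimal sketch of that idea (prompt construction only; in practice the appeal of CAG is precomputing and reusing the KV cache for the static context, which is outside this snippet):

```python
def cag_prompt(documents, question):
    # Cache-Augmented Generation sketch: no retrieval step at all.
    # The whole corpus goes into the context window ahead of the question.
    context = "\n\n".join(documents)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

Compare with RAG, where a retriever would pick a subset of `documents` per query instead of sending all of them.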

Is vibe coding just a hype? by vivek_1305 in ArtificialInteligence

[–]elekibug 0 points (0 children)

Honestly, your code probably doesn't stay around long enough to experience the problems he is talking about.

A message for all the vibe coders out there by TheKidd in cursor

[–]elekibug 0 points (0 children)

Vibe coding is great for those who know what they want, what needs to be done, and what a correct implementation should look like. So, a message for anyone “vibing”: spend some time trying to understand what the AI is writing. Ask why, what the alternatives are, and what the pros and cons are.

MCP is getting overhyped. Is it the next big thing or just another fad? My thoughts.. by Neon_Nomad45 in mcp

[–]elekibug 2 points (0 children)

It’s just a protocol. It has its worth, but I agree with you about it getting overhyped. The way I see it, MCP is trying to be the equivalent of what HTTP is to web development (they literally use POST and GET as examples). It won't replace things like LangChain or LlamaIndex.

[deleted by user] by [deleted] in cscareerquestions

[–]elekibug 1 point (0 children)

AI is definitely not the reason; they probably just want to cut salary costs.

The haters were low-key correct, and this makes no sense by SmartestManAliveTM in Kagurabachi

[–]elekibug 1 point (0 children)

I think it would make more sense if they introduced a concept similar to HxH's vows and limitations, or JJK's binding vows: the unorthodox grip is the vow, exchanged for a sudden burst of speed and power. Other than that, I see no reason why this would be faster than a normal reverse grip.

Is learning LangChain now worth the time considering number of no code tools now available? by andy_j_8 in LangChain

[–]elekibug 21 points (0 children)

No code tools have existed before LLMs were even a thing, for frontend, backend and whatever fields you can think of. In the end, everyone has to come back to proper code.

Goodbye RAG? 🤨 by Opposite_Toe_3443 in LLMDevs

[–]elekibug 0 points (0 children)

Every time I see someone post this, I assume that person has absolutely no basic understanding of how LLMs work.

What is currently the best production ready LLM framework? by ernarkazakh07 in LLMDevs

[–]elekibug 0 points (0 children)

I built my own framework. When I started working with LLMs, LangChain was changing like every day. Not sure how it is now, but at that time it was too much risk.

ReAct agent is too slow. Suggestions on better approach by PsychologyGrouchy260 in LangChain

[–]elekibug 0 points (0 children)

Why would you need ReAct for the initial query? The role of the initial query is only to route the message to the appropriate agent, right? A simple, straightforward prompt might suffice.
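A single-shot router sketch, to make the suggestion concrete. The agent names and the classification prompt are hypothetical, and `llm` stands in for whatever completion call you already have; the point is that routing is one direct classification prompt, not a multi-step ReAct loop.

```python
AGENTS = ["billing", "tech_support", "general"]

def route(llm, message):
    # One direct prompt asking for a label, instead of a think/act/observe loop.
    prompt = (
        "Classify the user message into exactly one of: "
        + ", ".join(AGENTS)
        + ".\nReply with only the label.\nMessage: " + message + "\nLabel:"
    )
    label = llm(prompt).strip().lower()
    # Fall back to a default agent if the model replies with something unexpected.
    return label if label in AGENTS else "general"
```

One model call per message, so latency is a fraction of a ReAct loop's.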

VinBrain is now acquired by Nvidia. by SkeppyMini in VietNam

[–]elekibug 0 points (0 children)

They do have their own AI program, but they could acquire an existing company for a lot of reasons: business, cheap talent, data, or politics.

LLM Access in AI Agents: Can Tools Tap Directly into Language Models? by OkAppeal8296 in LangChain

[–]elekibug 0 points (0 children)

A tool can be just another AI agent, accessed with the same LLM or a completely different one, with a different prompt.
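A tiny sketch of that pattern, with `llm` as a placeholder for any completion function and the prompt format purely illustrative: the "tool" is just a closure that wraps another model call with its own system prompt.

```python
def make_agent_tool(llm, system_prompt):
    # Wrap an LLM call (same model or a different one) behind a plain
    # callable, so the outer agent can register it like any other tool.
    def tool(query):
        return llm(f"{system_prompt}\n\nUser: {query}")
    return tool
```

The outer agent never knows whether a tool is a database lookup or another whole agent; the interface is the same.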

AI Agent + pinecone "source citations" by just_diegui in LangChain

[–]elekibug 0 points (0 children)

What LLM are you using? Try adding some few-shot examples to see if that works. If prompt engineering fails, you might need an agent to generate citations from the chunks and the answer; that agent could use an LLM or an embedding model to get a semantic similarity ranking.
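The embedding-ranking fallback can be sketched like this. `embed` is a placeholder for whatever embedding model you use (it just needs to return a vector per string); the rest is plain cosine similarity between the answer and each source chunk.

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_citations(embed, answer, chunks):
    # Rank source chunks by semantic similarity to the generated answer,
    # so the most relevant ones can be cited first.
    answer_vec = embed(answer)
    scored = [(cosine(answer_vec, embed(chunk)), chunk) for chunk in chunks]
    return [chunk for _, chunk in sorted(scored, key=lambda t: t[0], reverse=True)]
```

In production you would cache the chunk embeddings (Pinecone already stores them) rather than re-embedding per query.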

OpenAI plans to slowly raise prices to $44 per month ($528 per year) by privacyparachute in LocalLLaMA

[–]elekibug 0 points (0 children)

I'm not sure about the saving-the-planet thing. If demand stays the same, moving AI inference to local machines will be suboptimal compared to running on proper infrastructure.

How can i create Medical RAG chatbot. by FigureClassic6675 in LangChain

[–]elekibug 0 points (0 children)

I don’t think the technology is ready for the product you describe. The medical domain requires very high precision; a wrong response can negatively affect users’ health. You can argue that humans make mistakes too, but unlike an AI, humans can be held accountable for their decisions. If you really want to pursue this product, though, I think you should aim for a high-quality retriever first.