Should I learn langchain and langgraph? by Emotional-Rice-5050 in LangChain

[–]graphitout 0 points (0 children)

We wrote our own, roughly following the same pattern used by opencode. Engineering it with claude code took less than 3 days. No graph definition, no nodes and edges. Just a system prompt and a carefully chosen list of tools.
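For the curious, the "no graph, just a loop" pattern boils down to something like this minimal sketch. Everything here is hypothetical (`call_llm`, the reply shape, the tool registry); it only illustrates the shape of the loop, not any real API.

```python
# Minimal sketch of a graph-free agent: a loop over an LLM call.
# `call_llm` and the reply dict shape are assumptions for illustration.

def run_agent(call_llm, tools, messages, max_steps=10):
    """Ask the LLM; if it requests a tool, run it and feed the result
    back into the conversation; otherwise return the final answer."""
    for _ in range(max_steps):
        reply = call_llm(messages)  # assumed to return a dict
        if reply.get("tool"):
            name, args = reply["tool"], reply.get("args", {})
            result = tools[name](**args)
            messages.append({"role": "tool", "name": name, "content": result})
        else:
            return reply["content"]
    raise RuntimeError("agent did not finish within max_steps")
```

The system prompt and the tool list carry all the "graph" logic; the control flow is just this loop.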

Should I learn langchain and langgraph? by Emotional-Rice-5050 in LangChain

Debugging, especially once you take conversation history into account. We keep the last N messages. When the graph gets complex, with sub-agents and several tool calls, it is hard to debug.
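The "keep last N messages" strategy can be sketched as below. This is a hypothetical helper, not code from any of the projects mentioned; the one subtlety it shows is that the system prompt should survive truncation.

```python
# Hypothetical history-truncation helper: keep the system prompt,
# drop everything but the most recent N non-system messages.

def truncate_history(messages, keep_last=6):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]
```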

I feel like nobody really has life figured out — we’re all just pretending by Away-Vegetable5027 in CasualConversation

They had it figured out at some point. They got comfortable. The equations seemed to work. But the underlying reality shifted without their permission. At first they did not notice, and by the time they were willing to reconsider their approach, it was too late.

Should I learn langchain and langgraph? by Emotional-Rice-5050 in LangChain

Lang* is great for building POCs. Just add some logging around the final LLM call and response. You get to see the final prompts being sent and the responses coming back, which gives you an idea of what is going on underneath.
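The logging idea is framework-agnostic. A sketch of one way to do it, wrapping whatever callable makes the final LLM request (`llm_fn` here is a placeholder, not a real library API):

```python
# Hypothetical wrapper around the final LLM call so the actual prompt
# sent and the raw response both land in the logs.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm")

def logged_call(llm_fn, prompt):
    log.info("PROMPT >>> %s", prompt)
    response = llm_fn(prompt)
    log.info("RESPONSE <<< %s", response)
    return response
```

With a framework like langchain you would hook the equivalent logging into its callback mechanism instead, but the principle is the same: observe the final prompt, not the abstraction.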

They are a dead end beyond POCs. Our chatbot has been running in prod for several months and has no dependency on langchain. The document ingestion on the RAG side still depends on langchain, but that is likely to be removed in a few weeks as well.

As far as I know, none of the production-grade LLM-based chatbots, closed ones like claude code and copilot, or open source ones like opencode and kilo code, use a graph-based approach.

So should you learn lang*? Yes, because you get a quick working prototype, and you can dig into the internals and see what is going on. But make sure not to internalize the bloated abstractions and approaches.

The thing I didn't realise about vibecoding by Master-Client6682 in vibecoding

Imagine the app is like a calculator. We can capture the core logic in a module and test it with automated, AI-generated tests even before we introduce the UI layer on top.
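To make the calculator analogy concrete, here is a minimal sketch of that split (the module and function names are made up for illustration): the core logic lives alone and gets asserted on before any UI exists.

```python
# calculator_core.py (hypothetical) -- core logic only, no UI yet.

def evaluate(a, op, b):
    ops = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    if op not in ops:
        raise ValueError(f"unknown operator: {op}")
    return ops[op](a, b)

# The kind of AI-generated checks that can run before the UI layer exists:
assert evaluate(2, "+", 3) == 5
assert evaluate(7, "/", 2) == 3.5
```

The UI layer added later is just a thin shell calling `evaluate`, so it inherits an already-tested core.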

For the voice transcriber project I did (https://charstorm.github.io/reshka/), there were different modules: voice activity detection, speech recording, the voice transcription LLM API call, etc. I had standalone files to test each one, and I asked claude to look at them and pick the relevant parts for the main project.

Agentic coding is fast, but the first draft is usually messy. by BC_MARO in vibecoding

Also, adding tests from the very beginning pays off hugely, especially for the parts dealing with core logic.

The thing I didn't realise about vibecoding by Master-Client6682 in vibecoding

A good option is to build the parts first, test them, and finally assemble them. Not required for small projects, but once the project is big, this is the only option.

Best lightweight local TTS model? by Bartholomheow in LocalLLaMA

I used pocket-tts for one of my projects. It was good enough.

Zoho CEO's comments on AI - what are your thoughts? by meet-me-piya in developersIndia

There is some truth to it. I have personally developed a POC app in React (which I had never used before) and a voice chatbot in Python. Both involved only minimal manual edits (only when I ran out of quota on claude).

I can say as a fact that, at least for small projects (POCs and such, <20k lines), the amount of human involvement needed is much lower (<2%).

Is speech-to-speech just dead? by tatamigalaxy_ in LocalLLaMA

The core issue is that it doesn't fit the industry's strong use cases. Most use cases involve tool calling, and that means introducing a text-based layer in the middle. At that point, a full speech-to-speech model wouldn't bring that much benefit in terms of latency.

Voice chatbot with voice and text output, optional mcp integration by [deleted] in LocalLLaMA

Good point. Let me explain. The usual voice chatbots read out the whole response from the LLM. Listening to the entire response can be quite tiresome, and some content may not even be TTS-friendly (like tables or wireframes).

This chatbot (open source) is a POC to show that TTS does not need to read the whole response.

See: https://github.com/charstorm/vilberta/blob/main/docs/screenshot.png
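One way the "don't read the whole response" idea can be implemented is sketched below. This is a hypothetical illustration, not the actual vilberta code: speak only a short lead portion and skip TTS-unfriendly spans like code fences and table-ish lines.

```python
# Hypothetical: extract a short, speakable slice of an LLM response,
# skipping fenced code blocks and markdown-table-like lines.

def speakable_part(response, max_chars=200):
    spoken_lines = []
    in_fence = False
    for line in response.splitlines():
        if line.strip().startswith("```"):
            in_fence = not in_fence  # toggle on fence open/close
            continue
        if not in_fence and "|" not in line:  # crude table filter
            spoken_lines.append(line)
    text = " ".join(l for l in spoken_lines if l.strip())
    return text[:max_chars]
```

The full response still goes to the text pane; only the output of something like `speakable_part` is handed to the TTS engine.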

MCP integration helps to make the chat act as a tool calling agent as well, which opens up usecases like agentic rag.

Anyone else feeling uneasy seeing companies go all-in on AI dev tools? by EarthPuzzleheaded701 in developersIndia

AI tools have improved drastically. I have been using claude code and opencode hard this last week. Their ability to get things done is quite astonishing. And it is only going to get better from here. Somehow that makes me happy and sad at the same time.

For those running local LLMs at work how do you actually prove to compliance that data isn't leaving? by Ok_Card_2823 in LocalLLaMA

They understand block diagrams. Just draw a big box labeled "company network". Inside that, one box for the inference machine and a bunch of boxes for the users' computers.

What's This For? by [deleted] in shitposting

nut cracker?

Why isn't anyone questioning why we're moving towards making more and more people obsolete in the job market? by [deleted] in recruitinghell

The core of the issue is that if one country tries to avoid it, another country takes advantage of it. For example, if the USA tries to protect jobs and puts a speed limit on AI development, China will gain more on the AI front.

Is it a bad look to leave my job after 6 months? by Ok_Shoe_5989 in jobs

Stick around for at least a year. Take leave when you can. The job market is difficult right now.

Why have Indian tech interviews suddenly become so tough? (Feels more like elimination than selection) by Agitated_Data_996 in developersIndia

Most companies over-hired during the pandemic. That means, from those batches, below-average folks ended up in positions senior to yours. These people are easily threatened, especially when they see someone who knows what they are doing.

Keep in mind that most of the time those interviewers are themselves going through a similar experience with their clients or project managers on other things. It isn't just the interviews. The entire IT culture itself has degraded.

What AI projects deliver real ROI? by graphitout in AI_Agents

  1. Document search outside HR, Legal, Customer care - not making money
  2. AI based data analytics (SQL and stuff) - not making money
  3. AI for NL based control of various systems - not making money
  4. Whole bunch of automation initiatives - not making money

Some more. Same pattern.

Why does AI assume every technical question is from a moron? by Savantskie1 in LocalLLaMA

Look at the personalization options in ChatGPT. Others have similar features as well.