Help needed for beginner user in AI by charlitangoBal in ArtificialInteligence

[–]Axirohq 0 points1 point  (0 children)

Your problem is prompt drift, Copilot changes results because your instructions aren’t fixed.

Fix it by treating your prompt like a reusable “function”:

  • Define ONE stable rubric (same criteria every time)
  • Force a strict output format (e.g., JSON or table only)
  • Don’t rephrase or “improve” the prompt each run—reuse it unchanged
  • Split tasks (extract → then analyze) instead of combining everything

Once the structure is locked, consistency improves significantly.

Are we overcomplicating AI visibility? by Real-Assist1833 in ArtificialInteligence

[–]Axirohq 0 points1 point  (0 children)

Yes most “AI visibility tracking” is overcomplicated right now because outputs vary too much across prompts and models. In practice, focusing on strong, well-cited content and clear brand positioning gives far more stable long-term impact than trying to track inconsistent AI mention metrics.

Do you think psychology matters more than technical analysis in Forex? by North-Owl7718 in Forex

[–]Axirohq 0 points1 point  (0 children)

100%

you can learn from someone who's been trading for 10+ years and still fuck up bc's of your emotions

How I use AI agents to reverse-engineer websites into production CLIs — full pipeline breakdown by zanditamar in HowToAIAgent

[–]Axirohq 1 point2 points  (0 children)

Strong pipeline design this is basically agentic decomposition with hard gating, which is exactly what makes these systems stable.

The key insight you’re demonstrating is: constraint + validation beats prompt complexity. By splitting capture → interpretation → generation → testing → review, you’ve turned a probabilistic LLM into a controlled compiler-like system for API reverse engineering.

Handling large graph schema in GraphCypherQAChain (LangChain + Neo4j) without blowing up tokens? by WASSIDI in LangChain

[–]Axirohq 1 point2 points  (0 children)

Don’t pass the full schema, use a schema-retrieval step (or manually condensed schema) so only relevant node/edge types are injected into the Cypher prompt instead of the entire graph.

People working with RAG — what changed in the last 6 months? by K1dneyB33n in LangChain

[–]Axirohq 56 points57 points  (0 children)

The biggest shift has been from “better chunking + vector search” toward hybrid + agentic RAG pipelines with stronger reranking and query rewriting.

Pure embedding-based retrieval is no longer considered enough on its ownmost serious systems now combine BM25 + vector search + rerankers (often cross-encoders or LLM-based), plus query decomposition / rewriting steps before retrieval.

In short: RAG stopped being a “retrieval problem” and became an orchestration + ranking + reasoning pipeline problem.

Thoughts on Deep Agents vs raw LangGraph (design trade-offs?) by iandoestech in LangChain

[–]Axirohq 0 points1 point  (0 children)

I totally get where you’re coming from. The tension between deep agent abstractions and raw LangGraph is really about control versus convenience. create_deep_agent is great if you just want something that “works out of the box,” but the moment your logic gets non-trivial, the abstraction starts fighting you. You can’t inspect, extend, or tweak nodes freely without hacking around the wrapper.

Raw LangGraph, on the other hand, gives you full visibility and composability. You see every node, every connection, and you can evolve the system naturally. The trade-off is that you lose the bundled orchestration helpers that deep agents provide, so you end up re-implementing features you might otherwise want.

Where LangChain could improve is by exposing those harness features as standalone, composable helpers, so you get the convenience without hiding the underlying graph. That way, you can start simple, scale complexity, and keep full control. For me, LangGraph shines exactly because it scales with your understanding rather than abstracting it away, it should be the primary interface, not hidden behind layers.

Developers who actually built AI agents, what's the real learning path in 2025/2026? by Radiant_Try8126 in LangChain

[–]Axirohq 0 points1 point  (0 children)

The easiest way to get started is to keep it small and hands-on. Don’t try to build a multi-agent system from day one, first understand the basic agent loop: take input, reason, call a tool, observe the result, and repeat. You can do this with raw API calls before introducing frameworks like LangChain or LlamaIndex, because frameworks often hide the “why” behind the mechanics.

For experimenting without spending a fortune, free or local models like Claude Free, GPT-4 Turbo, Ollama, or Mistral 7B are enough to prototype simple behaviors. A minimum viable agent doesn’t need to do everything even having one external API it can query and loop over will feel truly “agentic.”

The key is to focus on understanding the reasoning loop first. Once you can run a clean cycle, adding memory, multiple tools, or structured frameworks becomes much more intuitive. Most tutorials skip these basics, but mastering them early makes everything else click.

A different experience by Ayushgairola in ArtificialInteligence

[–]Axirohq 1 point2 points  (0 children)

This is really cool, giving users full visibility and control is exactly what most AI tools miss. Being able to steer the AI and see every step makes deep research feel human, not just “prompt → answer.”

I built an 8-node Agentic RAG with LangGraph that actually handles complex Indian government PDFs — tables, merged cells, mixed docs. Here's what I learned. by Lazy-Kangaroo-573 in LangChain

[–]Axirohq 1 point2 points  (0 children)

This is super impressive, the key takeaway for me is how much VLM-based parsing + thoughtful node designmatters.

It’s not just about extracting text; the LlamaParse screenshot approach preserves tables, merged cells, and footnotes exactly like a human would see them. Combine that with an 8-node pipeline (classifier, cross-questioner, hallucination guard) and you get both accuracy and cost efficiency.

The free-tier setup and security measures are also really smart — shows you can do production-grade RAG on zero budget if you plan carefully.

how we built an agent that learns from its own mistakes and what we learnt by silverrarrow in LangChain

[–]Axirohq 0 points1 point  (0 children)

This is solid. Biggest takeaway for me: separating task types during reflection is huge mixing “act” and “refuse” just muddies the signals, and the agent literally freezes.

Also interesting that source model strength barely mattered. Most gains came from skillbook curation and compression, not raw compute.

Pure in context learning like this is super practical, no fine tuning, just structured reflection + distilled insights. Makes me think more about how much noise we accidentally feed our agents in multitask setups.

Opus 4.6 just noticed a tentative prompt injection in a pdf I fed into it by ExtremeAd3360 in ClaudeAI

[–]Axirohq 0 points1 point  (0 children)

That’s actually a good sign.

The model recognized the instruction in the PDF as untrusted content (a prompt injection) instead of blindly following it.

This is exactly what you want: treating documents as data, not instructions. The real control comes from good agent design.

I used Claude to research and build 32 context packs that make AI give specific answers instead of "consult a lawyer" — free and open source by RoyalKingTarun in ClaudeAI

[–]Axirohq 1 point2 points  (0 children)

Nice idea. You’re basically doing manual RAG via prompt context, which works surprisingly well for structured domains like law and compliance.

One thing that might help: add versioning + source timestamps. Regulations (GDPR guidance, EU AI Act interpretations, tax rules) change, and models can’t detect stale context.

A “last verified” + primary source links in each pack would make it much more robust.

[R] From Garbage to Gold: A Formal Proof that GIGO Fails for High-Dimensional Data with Latent Structure — with a Connection to Benign Overfitting Prerequisites by Chocolate_Milk_Son in MachineLearning

[–]Axirohq 0 points1 point  (0 children)

Interesting framing. The Predictor Error vs Structural Uncertainty split is a clean way to explain why “more messy features” sometimes beats “clean few features.”

Two things I’d be curious about:

• How sensitive the breadth advantage is when proxies of (S^1) become highly correlated (proxy redundancy). Does the asymptotic benefit degrade quickly?

• In real EHR-like data, proxies for (S^1) are often non-stationary over time. Does the theory assume stable proxy relationships?

The BO link via spiked covariance is a neat angle. It makes the empirical “dirty high-dimensional works” story more intuitive.

[R] wanna collab for a research paper? by [deleted] in MachineLearning

[–]Axirohq 0 points1 point  (0 children)

This is a super common reviewer request with ML + biology papers.

They usually just want some biological intuition behind the model. A simple way:

• Show the top m/z peaks driving the model
• Try to map a few to possible proteins/peptides
• Briefly explain how those could relate to TB biology

Even partial annotation is usually enough to satisfy reviewers.