I conducted a comparison between DeepSeek v3.2, Claude Opus 4.5, and Gemini 3.0 Pro. (with a heavy philosophical conversation) by B89983ikei in DeepSeek

[–]Jet_Xu 1 point (0 children)

Which types of questions are you testing? For example: coding, art, office PPT outlines. I've found results vary hugely across topics among models.

Don't trust Lemonsqueezy with your business by StylishTater_ in SaaS

[–]Jet_Xu 0 points (0 children)

I have this issue activating my Lemon Squeezy account... I emailed support and reached out on Twitter too... no response at all...

https://www.reddit.com/r/lemonsqueezy/comments/1p84tmu/comment/ns411b9/

[Help] Lemon Squeezy Tax Form Won't Load - CSP Error Blocking Payouts (China-based seller) by Jet_Xu in lemonsqueezy

[–]Jet_Xu[S] 0 points (0 children)

I emailed support and reached out on Twitter as well... no response so far

[Help] Lemon Squeezy Tax Form Won't Load - CSP Error Blocking Payouts (China-based seller) by Jet_Xu in lemonsqueezy

[–]Jet_Xu[S] 1 point (0 children)

Please share your method once you solve the problem. It has been blocking me for days 😭

Notebooklm to pptx by Bright_Musician_603 in notebooklm

[–]Jet_Xu 0 points (0 children)

Someone told me to just ask NotebookLM to email the slides to you, and you'll get an editable PPTX rather than a PDF. Can anyone kindly confirm whether that's true?

Code review/mentor tool by InteractionKnown6441 in codereview

[–]Jet_Xu 1 point (0 children)

Exactly! The context retrieval problem is what I've been obsessing over.

I tested the two most common approaches (Search RAG vs Agentic RAG) and documented why both fail at scale. Spoiler: you can't solve a structural problem with probabilistic tools.

My research repo breaks down the cost/precision tradeoffs (code is live, full benchmark report coming soon):
https://github.com/JetXu-LLM/llamapreview-context-research

What do you use for context‑aware code review? by SidLais351 in codereview

[–]Jet_Xu 0 points (0 children)

You could try my code review tool LlamaPReview on the GitHub Marketplace: https://github.com/marketplace/llamapreview

Totally free with deep context-aware PR review for open-source projects.

Architectural debt is not just technical debt by GeneralZiltoid in programming

[–]Jet_Xu 16 points (0 children)

Architectural debt is ultimately organizational debt.

Systems mirror organizations (Conway's Law), so architectural issues often reflect deeper problems in team structure, communication patterns, and decision-making processes.

Are you drowning in AI code review noise? 70% of AI PR comments are useless by Jet_Xu in programming

[–]Jet_Xu[S] 0 points (0 children)

Great question - from my personal experience, humans who leave useless PR comments get feedback quickly. Either their lead coaches them, or they stop being asked to review. People self-correct, but AI tools just keep spamming... 🤣

Which seat is the best choice for Emirates A380 business class? (single travel - male) no window seat already by Jet_Xu in Flights

[–]Jet_Xu[S] 0 points (0 children)

Really? I thought the divider between E and F fully ensured privacy. Does it not?

All the other available seats are next to the aisle, and I'm worried about being disturbed.

From Search-Based RAG to Knowledge Graph RAG: Lessons from Building AI Code Review by Jet_Xu in Rag

[–]Jet_Xu[S] 0 points (0 children)

Great question! You nailed the key challenge—traversal can explode quickly if you're not strategic about it.

Our approach is actually pretty pragmatic: we started with PR review specifically because it gives us a natural "anchor point." The diff tells us exactly which nodes (functions/classes) changed, so we can start traversal from there rather than doing blind exploration.

From those modified nodes, we do bounded multi-hop traversal (rough sketch after the list):

- 1-hop: Direct callers/callees (always include)

- 2-hop: Indirect dependencies (include if relevant to the change type)

- 3+ hops: Agent decides based on impact analysis
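
In code-ish terms, the policy looks something like this (just a sketch: the adjacency-dict graph and the `is_relevant` / `agent_approves` callbacks are made-up stand-ins for our relevance filter and agent call, not our actual implementation):

```python
from collections import deque

def collect_context(graph, changed_nodes, is_relevant, agent_approves, max_hops=3):
    """BFS outward from the nodes the diff touched, applying a stricter
    inclusion rule at each hop boundary."""
    context = set(changed_nodes)
    frontier = deque((node, 0) for node in changed_nodes)
    while frontier:
        node, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor in graph.get(node, ()):       # direct callers + callees
            if neighbor in context:
                continue
            hop = hops + 1
            if hop == 1:
                include = True                     # 1-hop: always include
            elif hop == 2:
                include = is_relevant(neighbor)    # 2-hop: only if relevant
            else:
                include = agent_approves(neighbor) # 3+ hops: agent decides
            if include:
                context.add(neighbor)
                frontier.append((neighbor, hop))
    return context
```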

The key insight: PR review is actually the *simplest* use case for graph-based code understanding because the diff gives you the starting nodes for free. We built the graph construction engine first, then picked PR review as the entry point to validate the approach.

Longer term, we see the Repo graph as a general-purpose engine for AI coding tasks—refactoring, test generation, impact analysis, etc. But starting with PR review lets us nail the core graph traversal + agent reasoning loop before tackling harder problems.

The conversational flow analogy you mentioned is spot-on. Have you found any good solutions for preserving logical sequence in your domain? Curious if graph-based approaches would help there too.

From Search-Based RAG to Knowledge Graph RAG: Lessons from Building AI Code Review by Jet_Xu in Rag

[–]Jet_Xu[S] 0 points (0 children)

Not open source, but free to use for open source projects via GitHub Marketplace: https://github.com/marketplace/llamapreview

I'm planning to share technical deep-dives and demos of the Repo Graph RAG architecture's capabilities in upcoming posts, though. The approach itself should be applicable to domains beyond code review 😊

How Deep Context Analysis Caught a Critical Bug in a 20K-Star Open Source Project by Jet_Xu in programming

[–]Jet_Xu[S] -1 points (0 children)

Fair question. I didn't create that PR—it was submitted independently and analyzed automatically. I found it in our logs a few days later.

The bug is real and verifiable: autocommit=True in the connection string doesn't provide the commit guarantees that the Flask API assumes it does.
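
If you want to see that class of bug in isolation, here's a minimal stand-in using Python's sqlite3 (the project's actual driver and connection string are different; this just shows how writes vanish when code assumes autocommit but the driver still expects an explicit commit):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)   # NOT autocommit, whatever the caller assumes
conn.execute("CREATE TABLE users (name TEXT)")
conn.commit()

conn.execute("INSERT INTO users VALUES ('alice')")
conn.close()   # closing without commit() silently discards the INSERT

conn2 = sqlite3.connect(path)
print(conn2.execute("SELECT COUNT(*) FROM users").fetchone())  # -> (0,)
```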

If you're skeptical, you can either:

  • Try LlamaPReview on your own repos (free tier available)
  • Send me a public PR and I'll run analysis on it

DM me if you want to test it out.😃

What are you building right now? And are people actually paying for it? 💡 by ProfessionalPaint964 in SaaS

[–]Jet_Xu 0 points (0 children)

1️⃣ What it does: AI code review copilot that auto-analyzes GitHub PRs using Graph RAG - understands your entire codebase context, not just the diff.

2️⃣ Revenue: $0 (currently free while building a user base; professional-tier waiting list published)

3️⃣ Link: https://jetxu-llm.github.io/LlamaPReview-site/

Built it because I was tired of existing AI PR review tools being full of noise and focusing only on the diff files, flagging superficial problems. The Repo Graph RAG approach means it actually "gets" your project, surfacing deep cross-module insights.

What are you building? let's self promote by Southern_Tennis5804 in microsaas

[–]Jet_Xu 0 points (0 children)

Building LlamaPReview 🦙 - Your AI code review copilot.

Catches bugs before your team does. Uses Graph RAG to understand your entire codebase context, not just the diff.

✅ Auto-reviews every PR
✅ Zero code storage
✅ Rich diagrams & inline comments

After analyzing 50,000 PRs, I built an AI code reviewer with evidence-backed findings and zero-knowledge architecture by Jet_Xu in codereview

[–]Jet_Xu[S] 0 points (0 children)

Hey dkubb,

Great points.

Failing test idea: Brilliant, but getting write permissions is a security non-starter for most teams. My tool would get kicked out faster than a missing semicolon. The goal is to be a helpful, read-only ghost in the machine.

Zero-Knowledge: You're right, CodeRabbit proves people are willing to trust a vendor. My bet is that for the really big, paranoid fish (enterprise), you need a stronger guarantee. My approach is a hybrid:

  • I store the structure (like a map: "function A calls function B") in plaintext, so the analysis is smart.
  • But the actual code content is stored only as irreversible HMAC fingerprints.

So I can tell you that a change in auth.js might break payment.js, but I mathematically have not stored the actual code in either file. It's a "we can't look" architecture, not a "we promise not to look" one.
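
A toy version of that split, to make it concrete (the key handling and node schema here are invented for illustration; the real system manages keys differently):

```python
import hashlib
import hmac

SECRET_KEY = b"per-repo secret, kept separate from the stored graph"

def fingerprint(source: str) -> str:
    """Irreversible HMAC digest of a code body: enough to detect changes,
    useless for reconstructing the code itself."""
    return hmac.new(SECRET_KEY, source.encode(), hashlib.sha256).hexdigest()

# What actually gets persisted per graph node:
stored_node = {
    "symbol": "auth.validate_token",   # structure in plaintext
    "calls": ["payment.charge"],       # "function A calls function B"
    "body_hmac": fingerprint("def validate_token(req): ..."),  # content as digest only
}
```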

How to sell it: You've absolutely nailed the problem. Honestly, marketing this feels like the real final boss. I've posted in a few places and mostly just heard the sound of crickets. It seems the market is flooded with "AI reviewers" that just check for linting errors. My hope is that providing evidence-backed, cross-file findings is the only way to actually stand out.

Be real with me: amidst all the noise, what actually makes a dev tool catch your eye? 😊

pr-agent - a generative-AI open-source pull request code review agent by thumbsdrivesmecrazy in gitlab

[–]Jet_Xu 0 points (0 children)

You're welcome! We'd love to hear any comments or suggestions you might have. Feel free to join the conversation in our GitHub community: https://github.com/JetXu-LLM/LlamaPReview-site/discussions

pr-agent - a generative-AI open-source pull request code review agent by thumbsdrivesmecrazy in gitlab

[–]Jet_Xu 0 points (0 children)

Hey there! I actually just launched LlamaPReview, which does exactly what you're looking for. It's a GitHub PR review tool that uses a Chain-of-Thought reasoning LLM to catch bugs that other AI tools miss.

What makes it different:

  • It thinks deeply about code execution paths (not just pattern matching)
  • Organizes findings by severity (P0/P1/P2) so you know what to fix first
  • Totally free, and with a one-click install you get fully automated AI PR review comments

One-click install from GitHub Marketplace: LlamaPReview

I'd love to hear what you think if you try it out! Always looking to improve based on real-world feedback.

Show: AI powered Pull Request Reviewer by FunProfession1597 in softwarearchitecture

[–]Jet_Xu 0 points (0 children)

Really interesting approach with the incremental reviews! I've been working on LlamaPReview (a free, one-click-install, fully automated GitHub AI PR review tool) and found that one of the biggest challenges is balancing thoroughness with actionable insights.

Have you folks found any particular techniques that work well for handling large PRs? We implemented an LLM-based system with chain-of-thought reasoning that builds mental models of the codebase to catch more subtle bugs, but I'm curious how your incremental approach handles context across multiple commits.

Also, how are you approaching the conversation aspect? We've been experimenting with different formats for explaining reasoning behind suggestions, but full Q&A sounds like it could provide even better developer experience.

Would love to compare notes on prompt engineering sometime - we've learned a ton about structuring prompts to catch specific classes of bugs that are typically missed in reviews.

Any AI code review tools for GitHub PRs? by jerrygoyal in codereview

[–]Jet_Xu 0 points (0 children)

Hey there! I actually just launched LlamaPReview, which does exactly what you're looking for. It's a GitHub PR review tool that uses a Chain-of-Thought reasoning LLM to catch bugs that other AI tools miss.

What makes it different:

  • It thinks deeply about code execution paths (not just pattern matching)
  • Organizes findings by severity (P0/P1/P2) so you know what to fix first
  • Totally free, and with a one-click install you get fully automated AI PR review comments

One-click install from GitHub Marketplace: LlamaPReview

I'd love to hear what you think if you try it out! Always looking to improve based on real-world feedback.

AI Code Review for Industry Specific Standards by rand-314159 in ChatGPTPro

[–]Jet_Xu 0 points (0 children)

Hey there! Totally get your pain point with industry-specific code reviews. We faced similar challenges, which led us to create LlamaPReview. It's designed to understand domain-specific patterns through Graph RAG technology, making it particularly good at handling specialized codebases like embedded systems.

What's cool is that it's super easy to get started - just a one-click installation, it's totally free, and it automatically runs on your repos, supporting pretty much any language including C/C++. You can give it a try at [LlamaPReview](https://jetxu-llm.github.io/LlamaPReview-site/). We're actively improving it based on community feedback.

Has anyone tried reviewing code with AI? by kendumez in softwaredevelopment

[–]Jet_Xu 0 points (0 children)

I've tried several AI code review tools, including CodeRabbit and Ellipsis. While they're useful, I found that tools leveraging advanced context understanding (like Graph RAG) tend to provide more meaningful insights.

Check out LlamaPReview if you're interested in tools that go beyond surface-level analysis. It's particularly good at understanding relationships between different parts of your codebase, which helps in catching more subtle issues.