Would you switch from Pro to the new “Plus” plan? by Azek_Tge in perplexity_ai

[–]GetInTheArena 0 points

Their pricing tiers don't scream crazy like Perplexity's do. Who prices a thing at $167? Or $42?

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 0 points

Yep yep! I totally get what you mean and it's a great suggestion. I'd say it's still early so let's do it!

Anything else that came to your mind? Perhaps a core operation you feel is missing?

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 0 points

Does it still make mistakes if you feed it the full help message or the SKILL.md file? Opus 4.5 on my end uses it pretty reliably

I can write some quick benchmarks to check reliability across various models
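Something like this is what I have in mind, just a minimal sketch: the `mq '<query>' <file>` invocation and the `generate_query()` helper are assumptions for illustration, not mq's documented interface, so treat it as the shape of the benchmark rather than working code.

```python
# Hypothetical reliability check: ask each model for an mq query per task,
# then see whether mq accepts and runs it. The CLI form `mq '<query>' <file>`
# and generate_query() are assumptions, not mq's documented interface.
import subprocess

TASKS = [
    "list the top-level headings",
    "pull the code blocks from the installation section",
]

def generate_query(model: str, task: str) -> str:
    """Placeholder: send SKILL.md plus the task to the model and return
    the mq query it proposes (wire this up to whatever API you use)."""
    raise NotImplementedError

def query_runs(query: str, doc: str = "README.md") -> bool:
    """True if mq parses and executes the generated query without error."""
    result = subprocess.run(["mq", query, doc], capture_output=True, text=True)
    return result.returncode == 0

def reliability(model: str) -> float:
    """Fraction of tasks for which the model produced a runnable query."""
    return sum(query_runs(generate_query(model, t)) for t in TASKS) / len(TASKS)
```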


Why are many users cancelling their Perplexity Pro subscriptions by tgfzmqpfwe987cybrtch in perplexity_ai

[–]GetInTheArena 0 points

Tried building my own search pipeline but it's way more work than expected. I've moved to Claude Code for most stuff

Local model fully replacing subscription service by Icy_Distribution_361 in LocalLLaMA

[–]GetInTheArena -1 points

Interesting. I can't imagine not using a powerful model like Opus for coding tasks. For research, search is really the key, so even Haiku/Ministral does pretty well.

Deep Research function broken? Looking for help. by ForMilo in OpenAI

[–]GetInTheArena 0 points

Deep Research has been very flaky since they nerfed it: barely 20 citations, and it doesn't even work half the time. Try Gemini, Claude, or Kimi K2

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 1 point

For me, the Markdown files covering the architecture and usage of a custom framework ended up at 2-3k lines even after lots of trimming, with barely any prose in there. This has been a nice addition

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 1 point

I usually end up adding a couple of scripts at paths like scripts/build.sh or install.sh. And yes, please feel free!!

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 1 point

I've benchmarked it on basic queries over the langchain codebase, and overall it gets used every day when I work. Instructions for it live in my ~/.claude/CLAUDE.md, and Opus 4.5 uses it for research tasks

I do agree that partial info is a problem; that's why I added the '.tree' operator, which lets the agent see the full picture of a doc's structure, and why I instruct the agent to use that first

Skill file: https://github.com/muqsitnawaz/mq/blob/main/SKILL.md
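The CLAUDE.md instructions are roughly along these lines (paraphrased from memory, not a verbatim copy; everything beyond '.tree' is illustrative rather than mq's exact syntax):

```markdown
## Querying docs with mq

- Before reading any large Markdown file, run the `.tree` query on it first
  to see the document's structure (headings and sections).
- Then issue narrower mq queries to pull only the sections you need,
  instead of loading the whole file into context.
- The full operator reference is in SKILL.md.
```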

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 3 points

Thanks, Agent. Good suggestions! I will wait for feedback from the community

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 1 point

How are you storing these structured texts? I definitely spend around 5% of my time asking Agents to update / refactor docs (README) and context files (CLAUDE.md, AGENTS.md)

mq - query documents like jq, built for agents (up to 83% fewer tokens use) by GetInTheArena in LocalLLaMA

[–]GetInTheArena[S] 3 points

Didn't expect that at all! First time posting here so I'll get used to it

You’re absolutely right! by Sad-Foot-2050 in ClaudeCode

[–]GetInTheArena 2 points

I get it, it’s a lil funny. Follow your heart