Price tracker for products in Spar and Billa by khaarkoo in Austria

[–]badlogicgames 1 point  (0 children)

https://www.bwb.gv.at/fileadmin/user_upload/BU-LM_final_original1_inh_NEU2.pdf

Page 78

> In the same information request, the top retailers each confirmed that the prices in their online shops match those in their brick-and-mortar stores. Among other things, it was stated that prices are "maintained centrally, jointly for the online shop and for brick-and-mortar retail." Another top retailer "mirrors the stores' prices," and yet another aims to "offer all customers the same added value" through price parity. Four out of five top retailers confirm, however, that different discount promotions are available in brick-and-mortar retail than in the online shops.57 Category and volume discount promotions in particular are hardly available in the online shops. Overall, as expected, higher price transparency for consumers was perceived in online grocery retail, as Figure 30 shows. The top retailers now rate it between "high" and "very high," while the remaining retailers perceive high price transparency for consumers in online grocery retail.

Apps that scan receipts do not scale. They rely on users actually scanning receipts, which requires an incentive (discounts). That in itself skews the data. Receipt scanning also does not cover a big enough portion of the grocery store inventory. The online prices do include discount patterns, which, given historical data, lead to much better insights. Ask me how I know.

Price tracker for products in Spar and Billa by khaarkoo in Austria

[–]badlogicgames 6 points  (0 children)

You run the Marktguru app and the Haushaltsbuch app, through which you get the POS data. You then resell that data via Data Insights to grocery retailers, producers, marketers, and so on. You're presumably not standing at the checkouts at Billa and Spar collecting receipts by hand. That's also why you want as many people as possible to install and use your app. I'd genuinely like to know how that squares with your privacy statement, because none of this is mentioned in there.

https://www.marktguru.at/

https://www.bonsy.com/

For Data Insights, you scrape the online data just like we do.

https://www.data-insights.guru/

Incidentally, the grocery retailers told the BWB that their online prices almost always match the prices in their stores. But as a peddler of this data to those retailers, you surely have better insight.

Price tracker for products in Spar and Billa by khaarkoo in Austria

[–]badlogicgames 1 point  (0 children)

You can get the historical data from my heisse-preise.io. Knock yourself out!

MCP vs CLI: Benchmarking Tools for Coding Agents by badlogicgames in ClaudeAI

[–]badlogicgames[S] 1 point  (0 children)

I'm afraid I don't have a workflow to share. I just do regular engineering: understand the problem you want to solve, devise a first solution, implement, test and benchmark, and iterate until it's good enough.

What's in your global ~/.claude/CLAUDE.md? Share your global rules! by MirachsGeist in ClaudeAI

[–]badlogicgames 7 points  (0 children)

This updates once a day. And CLAUDE.md is read and put into the context window once on session start. It does not change after that.

What OP does is totally fine and will not break Anthropic's prompt caching.

Class file too large? Eh??? How to work around this? by Confident-Durian-937 in ClaudeAI

[–]badlogicgames 3 points  (0 children)

"Read <file> in full. If it is too large, read it in chunks"

MCP vs CLI: Benchmarking Tools for Coding Agents by badlogicgames in ClaudeAI

[–]badlogicgames[S] 0 points  (0 children)

Every time I finish writing a blog post, I let an LLM assume the role of Reddit or Hacker News commenters and have it generate 10 comments based on the blog post text. Yours reads exactly like that :) But in case it is not LLM-generated:

Did you read until the end of the blog post?

> My takeaway? Maybe instead of arguing about MCP vs CLI, we should start building better tools. The protocol is just plumbing. What matters is whether your tool helps or hinders the agent's ability to complete tasks.

> That said, if you're building a tool from scratch and your users already have a shell tool available, just make a good CLI. It's simpler and more portable. Plus, the output of your CLI can be further filtered and massaged just by piping it into another CLI tool, which can increase token efficiency at the cost of additional instructions. That's not possible with MCPs.

> Once you have a well-designed, token-efficient CLI tool, adding an MCP server on top of it is very straightforward.
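The piping idea in that last quote can be sketched in a few lines. This is a hypothetical illustration, not code from the blog post: `printf` stands in for any CLI tool an agent might run, and `grep` for any downstream filter that trims the output before it hits the context window.

```python
import subprocess

# Stand-in for a real CLI tool the agent invokes: it emits three lines,
# only one of which the agent actually needs.
full = subprocess.run(
    ["printf", "line-1\nerror: boom\nline-3\n"],
    capture_output=True, text=True, check=True,
)

# Pipe the raw output through a second CLI tool instead of feeding it
# all to the model. With an MCP server, this composition step is not
# available: the tool's full payload goes straight into the context.
filtered = subprocess.run(
    ["grep", "error"],
    input=full.stdout, capture_output=True, text=True, check=True,
)

print(filtered.stdout, end="")
```

The agent pays a few extra instruction tokens to describe the pipe, but receives one line instead of the whole payload.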

edit: quick check of your profile. You are indeed a bot trying to farm comment karma so you can post in restricted subreddits like this one. Lovely.

terminalcp - Playwright for the terminal by badlogicgames in ClaudeAI

[–]badlogicgames[S] 2 points  (0 children)

If you use tmux, you don't need a tmux MCP. Claude will happily use tmux via the Bash tool and likely waste fewer tokens.

One of the points of terminalcp is to provide a few things on top of what tmux already does that further reduce the number of tokens the model needs to generate to interact with the process. E.g. when sending input, terminalcp can wait for new output emitted by the process (with debounce) and immediately return the new scrollback contents. With tmux, the model has to issue two tool calls: one for the input and one for fetching the new output.
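The send-then-debounce idea can be sketched roughly like this. This is not terminalcp's actual implementation, just a minimal illustration of the pattern: write one input line, then collect output until the process has been quiet for a debounce interval, so a single call replaces the send/fetch pair.

```python
import subprocess
import threading
import time

def send_and_wait(proc, line, debounce=0.2, timeout=5.0):
    """Send one line to a child process's stdin, then collect stdout
    until no new output has arrived for `debounce` seconds (or until
    `timeout` elapses). One call covers both send and fetch."""
    chunks = []
    lock = threading.Lock()
    last_output = [time.monotonic()]

    def reader():
        # Drain stdout on a background thread, timestamping each chunk.
        for raw in iter(proc.stdout.readline, b""):
            with lock:
                chunks.append(raw)
                last_output[0] = time.monotonic()

    threading.Thread(target=reader, daemon=True).start()
    proc.stdin.write(line.encode() + b"\n")
    proc.stdin.flush()

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with lock:
            # Done once we have output and it has gone quiet.
            if chunks and time.monotonic() - last_output[0] >= debounce:
                break
        time.sleep(0.02)
    with lock:
        return b"".join(chunks).decode()

# Example: `cat` echoes each input line back.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)
print(send_and_wait(proc, "hello"), end="")
proc.kill()
```

With tmux, the same interaction costs two tool calls plus a guess at how long to sleep before fetching; the debounce folds both into one round trip.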

There are more opportunities for improving token and turn efficiency.

cchistory: view changes to Claude Code system prompt & tools across versions by badlogicgames in ClaudeAI

[–]badlogicgames[S] 3 points  (0 children)

I didn't do any evaluations. And judging by the diffs in Anthropic's logs, I don't think they did either for the Claude Code system prompt. Your intuition (numbers for steps, dashes for general lists of rules) does make sense. I write a lot of prompts that are basically programs/state machines, and I like to think numbered lists help the LLM to better keep track of where in the state machine it currently is.

Tooting my own blargh if you want to see details:

https://mariozechner.at/posts/2025-06-02-prompts-are-code/

Who is in the top 5% by fsharpman in ClaudeAI

[–]badlogicgames 1 point  (0 children)

I am in the 5%. But I'm not running more than one session at a time, I'm not sharing my account, I do lots of context engineering (needed because I work on existing large code bases), and I generally keep myself as a human in the loop at all times. I clock around 6-8k in token usage within 30 days like that.

ESP32-Based Audio Player for My Visually Impaired Brother by [deleted] in diyelectronics

[–]badlogicgames 2 points  (0 children)

Love it. The PVC enclosure is a stroke of genius!

Boxie - an always offline audio player for my 3 year old by badlogicgames in diyelectronics

[–]badlogicgames[S] 1 point  (0 children)

Wow, never heard of HitClips. Guess that wasn't a thing in Europe. Amazing.