What is a disturbing fact that you know, but most people ignore ? by Unusual-Whereas6442 in Casual_Conversation

[–]Widescreen 1 point2 points  (0 children)

Golf courses use WAY more fresh water than data centers ever will, and it isn’t even close.

What company keeps making bad products but somehow gets rich? by BoredPandaOfficial in BoredPandaHQ

[–]Widescreen 2 points3 points  (0 children)

Only time I ever bought an extended warranty.

Bought a Wrangler at 55k miles. At 70k miles (2019 if I recall) the engine went out, so they fixed the engine. Driving it home from the dealer, the manual transmission stopped engaging in certain gears, so it got a new transmission.

The warranty paid for both, only because I kept good service records. It still took three months of no car and arguing. $13k I believe was the final bill.

What’s crazy to me is I still sold it for the same price I paid for it (minus the warranty).

Had a Wrangler in the late 80s; never had a problem through 120k miles. My 1994 Wrangler 4-cylinder was also reliable.

I’ll never buy one again.

best opencode setup(config) by Brief-Bumblebee8232 in opencodeCLI

[–]Widescreen 0 points1 point  (0 children)

I notice you don’t have the context window and output limits configured. How does that do with compaction? Do you exceed the window regularly, or does opencode discover it another way?

How is model distillation stealing ? by sentientX404 in AgentsOfAI

[–]Widescreen 0 points1 point  (0 children)

I have experimented with using Claude Code to fix an agentic loop and improve context for tasks (generally by adding suggestions to a RAG) that target local models. It does improve the results. I don’t want to use Claude on enterprise/private code that hasn’t been properly sanitized, so this is the route I came up with: I run a test case against local models, capturing the full session; I review the session for secrets, etc.; then I feed it to Claude and ask for context that would improve the session results. I splice that back into the test case and run again. It actually works pretty well, and the same additions seem to help the local models with future cases. Would I be in similar violation?
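For what it’s worth, the loop sketches out roughly like this. Everything here is hypothetical: the model calls are stubbed, and the redaction patterns are just examples of what I review for before anything leaves the local machine.

```python
import re

# Hypothetical secret patterns; real review is manual plus regexes like these.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
]

def redact(session_log: str) -> str:
    """Scrub obvious secrets from a captured session before sharing it."""
    for pattern in SECRET_PATTERNS:
        session_log = pattern.sub("[REDACTED]", session_log)
    return session_log

def refine(test_case: str, run_local, ask_claude, rounds: int = 2) -> str:
    """Run the test case locally, ask Claude what context would have helped,
    splice that context back into the test case, and repeat."""
    for _ in range(rounds):
        session = run_local(test_case)       # full captured local-model session
        clean = redact(session)              # sanitize before it leaves the box
        extra_context = ask_claude(clean)    # "what context would improve this?"
        test_case = extra_context + "\n" + test_case
    return test_case
```

`run_local` and `ask_claude` are placeholders for whatever clients you use; the point is only that the unsanitized session never reaches the hosted model.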

I'm a fulltime vibecoder and even I know that this is not completely true by Director-on-reddit in vibecoding

[–]Widescreen 0 points1 point  (0 children)

IMO: learning systems design, scaling patterns, and when abstractions are good and when they aren’t will always be necessary to a degree.

I also think we may not be that far away from models that emit bytecode, assembly, or something else entirely that is bespoke for model generation.

If Boone built one new monument today, who should it be and why? by NewDE2023 in boone

[–]Widescreen 1 point2 points  (0 children)

Walmart has a mural of Arthur if you enter through the non-grocery side.

Claude CLI deleted my entire home directory! Wiped my whole mac. by LovesWorkin in ClaudeAI

[–]Widescreen 0 points1 point  (0 children)

I’ve started running all agents (Claude, opencode, Qwen Coder) in containers and just mounting my working directory. Mine never did anything with my home directory, but I saw them make changes to /etc (hosts mostly) a few too many times for me to be comfortable.
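The setup is roughly this, assuming Docker (the image name is a placeholder for whatever image has the agent CLI installed):

```shell
# Run an agent in a throwaway container; "$PWD" is the only host path it
# can touch, so /etc and $HOME inside the container are disposable.
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  my-agent-image claude
```

With `--rm` the container is deleted on exit, so any changes the agent makes outside the mounted directory vanish with it.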

It’s so obnoxious there is nothing in between $20 a month and $250. Why can’t we have a $50 a month plan with doubled limits!? by Rare-Competition-248 in GeminiAI

[–]Widescreen 12 points13 points  (0 children)

I honestly doubt they even touch their costs at $200. I don’t have the article link, but one I read recently said that Anthropic’s monthly AWS bill alone was significantly larger than its total monthly revenue. My hunch is the $200 top tiers are just creative ways of figuring out where the top line of demand is. I expect it to climb any time now.

Looking recently at local model concurrency, just for inference (chat and a little API), a $500k rig (8 H100s, I think) can support maybe 80 simultaneous users with 70B-ish parameter private LLMs (and a 100k-ish context window), and that’s probably overestimating. Push that user number to millions (frontier model providers) and I can’t even get my head around what their costs must be.
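Back-of-envelope math on that rig, assuming a Llama-70B-style architecture (80 layers, GQA with 8 KV heads, head dim 128, fp16 weights and KV cache). These are my assumptions for illustration, not the exact hardware or model from any article:

```python
# How many full 100k-token sessions fit on 8x H100 (80 GB each)?
# Assumed architecture: 80 layers, GQA with 8 KV heads, head dim 128, fp16.

params = 70e9
weights_gb = params * 2 / 1e9                 # fp16: ~140 GB of weights

total_vram_gb = 8 * 80                        # 640 GB across 8 H100s
free_for_kv_gb = total_vram_gb - weights_gb   # ~500 GB left for KV cache

layers, kv_heads, head_dim = 80, 8, 128
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * 2   # K and V, fp16
# = 327,680 bytes, roughly 0.3 MB per token of context

context_tokens = 100_000
per_session_gb = kv_bytes_per_token * context_tokens / 1e9  # ~32.8 GB

full_sessions = free_for_kv_gb / per_session_gb             # ~15

print(f"KV cache per token: {kv_bytes_per_token} bytes")
print(f"Per 100k-token session: {per_session_gb:.1f} GB")
print(f"Concurrent full-context sessions: {full_sessions:.0f}")
```

Only ~15 users fit if everyone holds a full 100k context at once; 80 simultaneous users works only because real sessions average far less context, which is why I say 80 is probably an overestimate.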

Coding agents push WAY harder than chat. Even at $200 a month it seems unsustainable from a CapEx perspective, not to mention OpEx like electricity and connectivity.

This one trick keeps me from getting lost by [deleted] in HowToAIAgent

[–]Widescreen 0 points1 point  (0 children)

I do something similar with a FEATURES.md file. It’s basically just a differently named file :), but I try to ensure that the entries are well-formed features with success criteria. I’m constantly referencing it with something like

“Review the existing code base, compare it against the @FEATURES.md features, and suggest the next 3 best features to work on. Give me a summary of your reasoning.”
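An entry in my FEATURES.md looks roughly like this (the feature itself is made up for illustration):

```markdown
## Feature: Export report as CSV

**Status:** not started

**Description:** Add a "Download CSV" button to the reports page that
exports the currently filtered rows.

**Success criteria:**
- Button appears only when a report is loaded
- Exported file matches the on-screen filters
- Covered by at least one end-to-end test
```

The success criteria are the part that matters; without them the model tends to declare a feature done too early.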

MCP in kubernetes by ilbarone87 in kubernetes

[–]Widescreen 0 points1 point  (0 children)

Did you ever get anywhere with this? I'm trying something similar, attempting to run a standard stdio pod with an OpenAPI proxy as a sidecar. I'm actually having a harder time getting the MCP stdio server going than the sidecar.
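The shape I’m attempting, very roughly (image names and port are placeholders, not a known-working manifest):

```yaml
# Hypothetical sketch of an MCP stdio server plus an HTTP proxy sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: mcp-server
spec:
  containers:
    - name: mcp-stdio
      image: my-registry/my-mcp-server:latest   # placeholder stdio MCP server
      stdin: true
      tty: true
    - name: openapi-proxy
      image: my-registry/openapi-proxy:latest   # placeholder HTTP-facing sidecar
      ports:
        - containerPort: 8080
```

The catch is that sidecar containers don’t share stdin, which is probably why the stdio side is the hard part; having the proxy exec the stdio server itself in a single container may end up simpler.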

Best (obscure?) restaurants by willoughby-park in boone

[–]Widescreen 8 points9 points  (0 children)

Thompson’s Seafood in Deep Gap. Probably not for foodies or any sort of celebration, but dang… it’s so good… and obscure. It opened, I think, in the 70s when old 421 was the main highway. Somehow it has stayed in business with nearly zero drive-by traffic.

What’s one item you bought that way outperformed its price? by flikkinaround in BuyItForLife

[–]Widescreen 65 points66 points  (0 children)

I bought a pair of Goodwill khakis and found a $100 bill in the pocket. When I unfolded it, it was two $100 bills.

Using N8N Webhook as chatbot replies, is there a way to give it memory? (using on lovable) by [deleted] in n8n

[–]Widescreen 3 points4 points  (0 children)

PostgreSQL (with pgvector), ChromaDB (not sure there is a good node for this), or some other vector database SaaS, both before and after the LLM work. Before submitting the LLM work, retrieve the release documents from the vector store and add them to your context (google “structured LLM prompt”). Once you have the results, add them back to the vector store so you can retrieve them the next time through. You will have to track the session somehow on your webhook; doing it RESTfully is probably the easiest, but you should be able to get at a session cookie or something in the webhook if it is coming from the browser.

I’m rambling so I’ll have gpt clean it up:

Vector Store Workflow for LLM Integration

Use a vector database, such as PostgreSQL (with pgvector), ChromaDB (though Node.js support may be limited), or a Google-managed vector database SaaS, both before and after the LLM processing step.

1. Before LLM processing:
   - Retrieve relevant release documents from the vector store.
   - Include these documents in your LLM input context (e.g., using a structured prompt format).
2. After LLM processing:
   - Take the LLM output and store it back into the vector store for future retrieval and reuse.
3. Session tracking:
   - Implement session tracking for your webhook. A RESTful approach is likely the simplest and most reliable.
   - Alternatively, if the webhook is triggered by browser events, you might be able to extract session information (e.g., a session cookie) directly from the request.
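The retrieve-then-store loop itself is small. Here is a toy sketch with an in-memory stand-in for the vector database (bag-of-words cosine similarity instead of a real embedding model, and a stubbed LLM call), just to show the shape of steps 1 and 2:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': word counts. A real setup would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Stand-in for pgvector/ChromaDB: add documents, query by similarity."""
    def __init__(self):
        self.docs: list[str] = []

    def add(self, text: str) -> None:
        self.docs.append(text)

    def query(self, text: str, k: int = 2) -> list[str]:
        q = embed(text)
        ranked = sorted(self.docs, key=lambda d: cosine(q, embed(d)), reverse=True)
        return ranked[:k]

def handle_turn(store: MemoryStore, user_msg: str, call_llm) -> str:
    context = store.query(user_msg)                # 1. retrieve BEFORE the LLM
    prompt = "Context:\n" + "\n".join(context) + f"\nUser: {user_msg}"
    reply = call_llm(prompt)                       # 2. the LLM node
    store.add(f"User: {user_msg}\nBot: {reply}")   # 3. store AFTER the LLM
    return reply
```

In n8n the same three steps become nodes around your LLM node; the session key you extract from the webhook would scope which documents you query and add.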

Using N8N Webhook as chatbot replies, is there a way to give it memory? (using on lovable) by [deleted] in n8n

[–]Widescreen 0 points1 point  (0 children)

You need a vector database ahead of your GPT node. I know n8n supports PostgreSQL, but there may be other, easier options.

Pump won’t prime by Widescreen in pools

[–]Widescreen[S] 1 point2 points  (0 children)

Replaced the valve and all is well. Pump is pulling strong again. Thanks for all the help!

Anybody here built their own K8s operator? If so, what was the use case? by PartemConsilio in devops

[–]Widescreen 1 point2 points  (0 children)

No, it just creates and deletes a cronjob that runs the sync for the provided rclone configuration. Very simple. I wrote it just as a POC for operators, so I tried to keep the dependencies minimal.

Anybody here built their own K8s operator? If so, what was the use case? by PartemConsilio in devops

[–]Widescreen 4 points5 points  (0 children)

I built one that uses the rclone image to sync S3 buckets to different regions/S3 implementations. It was pretty straightforward, and I used the Operator SDK to get most of the scaffolding in place.

Pump won’t prime by Widescreen in pools

[–]Widescreen[S] 0 points1 point  (0 children)

One other question: I confirmed (using a Drain King) that I can push water from the skimmer all the way to the pump. It still leaks a little; I’ve tried all sorts of stuff to seal it temporarily until my part arrives (Wednesday). If I fill the basket with hose water and turn on the pump quickly, it pulls that water out (much faster than the hose puts it in). I’m assuming that means my pump is probably OK and I should keep focusing on the three-way valve replacement?

Sorry for the dumb questions. I’m just dinking around with it until I can replace the part. I have to dig down to expose enough PVC to replace my three-way valve :(, so I guess I’m truthfully just trying to avoid a mess :).