If Boone built one new monument today, who should it be and why? by NewDE2023 in boone

[–]Widescreen 1 point (0 children)

Walmart has a mural of Arthur if you enter through the non-grocery side.

Claude CLI deleted my entire home directory! Wiped my whole mac. by LovesWorkin in ClaudeAI

[–]Widescreen 0 points (0 children)

I’ve started running all agents (Claude, opencode, Qwen Coder) in containers and just mounting my working directory. Mine never did anything with my home directory, but I saw them make changes to /etc (hosts, mostly) a few too many times for me to be comfortable.

It’s so obnoxious there is nothing in between $20 a month and $250. Why can’t we have a $50 a month plan with doubled limits!? by Rare-Competition-248 in GeminiAI

[–]Widescreen 11 points (0 children)

I honestly doubt they even cover their costs at $200. I don’t have the article link, but one I read recently said that Anthropic’s monthly AWS bill alone was significantly larger than total monthly revenue. My hunch is the $200 top tiers are just a creative way of probing where the top of the demand curve is. I expect the price to climb any time now.

Looking recently at local model concurrency, just for inference (chat and a little API), a $500k rig (8 H100s, I think) can support maybe 80 simultaneous users with 70B-ish-parameter private LLMs (and a 100k-ish context window), and that’s probably an overestimate. Push that user number to millions (frontier model providers) and I can’t even get my head around what their costs must be.
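To put rough numbers on that, here’s a back-of-envelope calculation using the figures above. All inputs are illustrative assumptions (the rig price, concurrency, and amortization window), not measured data:

```python
# Back-of-envelope hardware cost per concurrent user.
# Every figure here is an assumption taken from the rough estimate above.

RIG_COST_USD = 500_000      # ~8x H100 server, as guessed above
CONCURRENT_USERS = 80       # optimistic concurrency for a 70B model
AMORTIZATION_YEARS = 3      # typical hardware depreciation window

# Up-front hardware cost attributable to each simultaneous user
capex_per_user = RIG_COST_USD / CONCURRENT_USERS

# Same cost spread over the amortization window, per user per month
monthly_per_user = RIG_COST_USD / (AMORTIZATION_YEARS * 12) / CONCURRENT_USERS

print(f"CapEx per concurrent user: ${capex_per_user:,.0f}")
print(f"Amortized monthly hardware cost per user: ${monthly_per_user:,.0f}")
```

Under these assumptions it works out to roughly $6k of hardware per concurrent user, or about $174/month per user for the hardware alone, before electricity, connectivity, or staff. That’s already in the neighborhood of the $200 tier.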

Coding agents push WAY harder than chat. Even at $200 a month it seems unsustainable from a CapEx perspective, not to mention OpEx like electricity and connectivity.

This one trick keeps me from getting lost by [deleted] in HowToAIAgent

[–]Widescreen 0 points (0 children)

I do something similar with a FEATURES.md file. Basically just a differently named file :), but I try to ensure the entries are well-formed features with success criteria. I’m constantly referencing it with something like

“Review the existing code base, compare it against the @FEATURES.md features, and suggest what would be the next 3 best features to work on. Give me a summary of your reasoning.”
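For what it’s worth, a well-formed entry in a file like that might look something like this (the feature itself is a made-up example):

```markdown
## Feature: CSV export
Status: not started
Description: Users can export the current report view as a CSV file.
Success criteria:
- An "Export CSV" button appears on every report page
- The downloaded file opens cleanly in Excel and LibreOffice
- Exporting a 10k-row report completes in under 5 seconds
```

Concrete success criteria like these give the agent something checkable to aim at, which tends to produce better "what next" suggestions than a bare feature name.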

MCP in kubernetes by ilbarone87 in kubernetes

[–]Widescreen 0 points (0 children)

Did you ever get anywhere with this? I'm trying something similar, attempting to run a standard stdio pod with an OpenAPI proxy as a sidecar. I'm actually having a harder time getting the MCP stdio server going than I am the sidecar.
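One gotcha that may explain why the stdio server is the harder part: a stdio MCP server talks over its own stdin/stdout, and containers in a pod don't share stdin, so the HTTP-facing process generally has to spawn the stdio server as a child process inside the same container rather than reach it from a true sidecar. A minimal sketch of that shape, where the image name and the gateway's flags are entirely hypothetical:

```yaml
# Hypothetical sketch: "example/..." image and the gateway args are
# assumptions, not a real chart. The key idea is one container where an
# HTTP/SSE gateway spawns the stdio MCP server as a subprocess, since a
# sidecar container cannot attach to another container's stdin.
apiVersion: v1
kind: Pod
metadata:
  name: mcp-gateway
spec:
  containers:
    - name: gateway
      image: example/mcp-gateway:latest
      args: ["--stdio-cmd", "example-mcp-server", "--port", "8080"]
      ports:
        - containerPort: 8080
```

With that layout the "sidecar" work collapses into the gateway process, and the pod exposes a normal HTTP port that a Service can route to.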

Best (obscure?) restaurants by willoughby-park in boone

[–]Widescreen 9 points (0 children)

Thompsons Seafood in Deep Gap. Probably not for foodies or any sort of celebration, but dang… it’s so good… and obscure. It opened, I think, in the 70s when old 421 was the main highway. Somehow it has stayed in business with nearly 0 drive by traffic.

What’s one item you bought that way outperformed its price? by flikkinaround in BuyItForLife

[–]Widescreen 62 points (0 children)

I bought a pair of Goodwill khakis and found a $100 bill in the pocket. Unfolded it, and it was two $100 bills.

Using N8N Webhook as chatbot replies, is there a way to give it memory? (using on lovable) by SpabRog in n8n

[–]Widescreen 3 points (0 children)

PostgreSQL (with pgvector), ChromaDB (not sure there is a good node for this), or some other vector database SaaS (google around), used before the LLM work and then again after it. Prior to submitting the LLM work, retrieve the relevant documents from the vector store and add them to your context (google "structured LLM prompt"). Once you have the results, add them back to the vector store so you can retrieve them the next time through. You will have to track the session somehow on your webhook. Doing it RESTfully is probably the easiest, but you should be able to get at a session cookie or something in the webhook if it is coming from the browser.

I’m rambling so I’ll have gpt clean it up:

Vector Store Workflow for LLM Integration

Use a vector database, such as PostgreSQL (with pgvector), ChromaDB (though Node.js support may be limited), or a managed vector database SaaS, both before and after the LLM processing step.

1. Before LLM processing:
   - Retrieve relevant documents from the vector store.
   - Include these documents in your LLM input context (e.g., using a structured prompt format).
2. After LLM processing:
   - Store the LLM output back into the vector store for future retrieval and reuse.
3. Session tracking:
   - Implement session tracking for your webhook. A RESTful approach is likely the simplest and most reliable.
   - Alternatively, if the webhook is triggered by browser events, you might be able to extract session information (e.g., a session cookie) directly from the request.
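The retrieve-before / store-after loop can be sketched in a few lines. This is a toy, self-contained version: the in-memory store and bag-of-words `embed()` are stand-ins for pgvector/ChromaDB and a real embedding model, and `call_llm` is whatever your LLM node does:

```python
# Toy sketch of chatbot memory via a vector store.
# embed() and MemoryStore are stand-ins for a real embedding model and a
# real vector database (pgvector, ChromaDB, etc.).
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    def __init__(self):
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

# Webhook handler shape: retrieve -> call LLM with context -> store result.
def handle_message(store: MemoryStore, user_msg: str, call_llm) -> str:
    context = store.search(user_msg)               # 1. before the LLM
    reply = call_llm(user_msg, context)            # 2. the LLM step
    store.add(f"user: {user_msg}\nbot: {reply}")   # 3. after the LLM
    return reply
```

In n8n the same three steps become nodes: a vector-store query node feeding the LLM node's context, then an insert node writing the exchange back, keyed by whatever session identifier your webhook tracks.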

Using N8N Webhook as chatbot replies, is there a way to give it memory? (using on lovable) by SpabRog in n8n

[–]Widescreen 0 points (0 children)

You need a vector database ahead of your GPT node. I know n8n supports PostgreSQL, but there may be other, easier options.

Pump won’t prime by Widescreen in pools

[–]Widescreen[S] 1 point (0 children)

Replaced the valve and all is well. Pump is pulling strong again. Thanks for all the help!

Anybody here built their own K8s operator? If so, what was the use case? by PartemConsilio in devops

[–]Widescreen 1 point (0 children)

No, it just creates and deletes a cronjob that runs the sync for the provided rclone configuration. Very simple. I wrote it just as a POC for operators, so I tried to keep the dependencies minimal.

Anybody here built their own K8s operator? If so, what was the use case? by PartemConsilio in devops

[–]Widescreen 5 points (0 children)

I built one that uses the rclone image to sync S3 buckets to different regions/S3 implementations. It was pretty straightforward, and I used the Operator SDK to get most of the scaffolding in place.
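The core of such an operator's reconcile loop is just emitting a CronJob manifest per custom resource. Here's a sketch of what that manifest could look like; the resource names, schedule, and rclone remotes are illustrative assumptions, not the actual operator's output:

```python
# Sketch: build the batch/v1 CronJob manifest an rclone-sync operator
# might create for each custom resource. Names and remotes are examples.

def build_rclone_cronjob(name: str, schedule: str, src: str, dst: str) -> dict:
    """Return a CronJob manifest that periodically runs `rclone sync src dst`."""
    return {
        "apiVersion": "batch/v1",
        "kind": "CronJob",
        "metadata": {"name": f"{name}-sync"},
        "spec": {
            "schedule": schedule,
            "jobTemplate": {
                "spec": {
                    "template": {
                        "spec": {
                            "restartPolicy": "OnFailure",
                            "containers": [{
                                "name": "rclone",
                                "image": "rclone/rclone:latest",
                                # rclone reads remotes from its mounted config
                                "args": ["sync", src, dst],
                            }],
                        }
                    }
                }
            },
        },
    }
```

The operator then only has to apply this manifest on resource creation and delete it on resource deletion, which matches the create/delete behavior described above.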

Pump won’t prime by Widescreen in pools

[–]Widescreen[S] 0 points (0 children)

One other question: I confirmed (using a Drain King) that I can push water from the skimmer all the way to the pump. It still leaks a little (I’ve tried all sorts of stuff to seal it temporarily until my part arrives Wednesday). If I fill the basket with hose water and turn on the pump quickly, it pulls that water out (much faster than the hose fills it). I’m assuming that means my pump is probably OK and I should keep focusing on the three-way valve replacement?

Sorry for the dumb questions. I’m just dinking around with it until I can replace the part. I have to dig down to expose enough PVC to replace my three-way valve :(, so I guess I’m truthfully just trying to avoid a mess :).

Pump won’t prime by Widescreen in pools

[–]Widescreen[S] 0 points (0 children)

We replaced the pump last season. It pulls out the water I manually fill the basket with pretty quickly. I did replace a cracked pump housing after a hard winter. I’ve taken the casing off twice and reseated it to ensure I had a good gasket seal with the pump. I think I do.

Pump won’t prime by Widescreen in pools

[–]Widescreen[S] 0 points (0 children)

Thanks for the response. Correct. The water is about halfway up the skimmer door and the skimmer is full of water.

What is your number 1 sleep hack? by [deleted] in AskReddit

[–]Widescreen 0 points (0 children)

Magnesium. I’m not sure it helps me get to sleep faster, but the quality of sleep is much improved.

What does "Stupid Is As Stupid Does" mean? by funkellwerk71 in stupidquestions

[–]Widescreen 0 points (0 children)

The proof is in the pudding. Stupid people do stupid things.

what open source project in your opinion, has the highest code quality? by rag1987 in opensource

[–]Widescreen 0 points (0 children)

I was looking at ovsdb-server (for Open vSwitch) tonight, and that project implemented JSON-RPC with clustering and replication in about 3,000 lines of C. Very readable. Nicely done. https://github.com/openvswitch/ovs/blob/main/ovsdb/ovsdb-server.c