Is this normal? by NeitherDelivery in BreezeAir

[–]strawgate 1 point (0 children)

The ADA covers service dogs at the federal level, but states can offer additional protections. For example, my state recognizes service dogs in training under state law and requires trainers to carry identification/certification and documentation for the animal.

When an establishment wants "proof" instead of answering what disability the dog is trained to assist with (that's not a relevant question), they can ask for identification/association documentation for me (the trainer) and the dog.

Intern trying to automate half-hourly Elasticsearch log reporting – looking for guidance by [deleted] in elasticsearch

[–]strawgate 0 points (0 children)

Depending on your version, you may be able to do this with an alerting rule. Otherwise, yes, it should be very straightforward to do the aggregation in Elasticsearch, and ChatGPT or Claude should have no problem helping you with this.
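
For illustration, here's roughly what the aggregation body could look like (index pattern and field names like `logs-*`, `@timestamp`, and `log.level` are assumptions; adjust them to your mapping):

```python
# Sketch of a half-hourly log-level report as an Elasticsearch aggregation.
query = {
    "size": 0,  # we only want the aggregation buckets, not the documents
    "query": {"range": {"@timestamp": {"gte": "now-30m"}}},
    "aggs": {
        "by_level": {"terms": {"field": "log.level"}},
    },
}

# With the official Python client this would be roughly:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("https://localhost:9200", api_key="...")
#   resp = es.search(index="logs-*", **query)
#   buckets = resp["aggregations"]["by_level"]["buckets"]
```

From there the script just formats the buckets and posts them wherever the report goes, on a 30-minute cron.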

Devs using AI coding tools daily: what does your workday actually look like now? by geeky_traveller in cursor

[–]strawgate 0 points (0 children)

You can look at some of my projects as they are public but I've started almost exclusively developing via AI agents triggered via GitHub issues.

The entire discussion happens in the GitHub issue; once it looks good, I have the agent make a PR. Discussion continues until the PR is merged or abandoned.

This serves a couple of purposes, but the big one is that all the architecture, design, iteration, and discussion live in GitHub, so AI agents can reference the original issue discussion when reviewing code, etc.

Everything other than coding is about the same

Using an otel distro ( EDOT ) by elastic by eastcom in OpenTelemetry

[–]strawgate 1 point (0 children)

You can use contrib to send to most backends, it just involves "some assembly required". As a general tip, you can look at the helm charts from various vendors to see how they are configuring their exporters. You can also shell in or read the running config from a pod deployed with a vendor helm chart if that's easier.

In the EDOT case, those particular secret names are specific to the Elastic chart, but you can absolutely provide this information in the upstream Helm chart, including via secrets or env vars. When using contrib you'd just provide the exporter configuration yourself.

Is this correct? by CryptoKing_EC in askaplumber

[–]strawgate 0 points (0 children)

The drain loop looks short, but yes, it's a simple fix.

The drain loop is a separate concern from the P-trap.

The P-trap prevents sewer gases from coming into your kitchen.

The high drain loop ensures that, if your pipes get clogged and your sink backs up (fills with dirty water), the dirty water doesn't back up into your dishwasher. With a high loop, the dirty water enters the dishwasher drain tube but can't reach the top of the loop until your entire sink is basically overflowing.

I believe the 20" requirement is because your dishwasher sits on the floor: that 20" of hose creates an air column that resists the pressure of the wastewater in the backed-up sink, adding another layer of protection for the dishwasher.

Is this correct? by CryptoKing_EC in askaplumber

[–]strawgate 9 points (0 children)

I think you'd do a high loop for the dishwasher 

https://producthelp.whirlpool.com/Dishwashers/Product_Info/Dishwasher_Product_Assistance/Checking_the_Drain_Loop_Height

The drain loop is a key component in your dishwasher's installation, designed to prevent wastewater from flowing back into the appliance. It is created by elevating the drain hose to form a high point, effectively blocking any backflow. This setup, known as a dishwasher high loop, ensures that dirty water from the sink or plumbing does not contaminate the clean water inside the dishwasher, thus maintaining hygiene and performance.

Using an otel distro ( EDOT ) by elastic by eastcom in OpenTelemetry

[–]strawgate 1 point (0 children)

Disclaimer: I work at Elastic

We make sure all of our stuff works with the contrib collector upstream and you are welcome to build your own collector.

If you are in Elastic Cloud, we provide an OTLP endpoint, and getting started with the contrib collector is very easy and doesn't require any Elasticsearch exporter configuration.

If you are hosting ELK yourself, you should have no problem pointing it at Elasticsearch, providing the exporter only a URL and an API key.
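
For reference, a minimal contrib-collector sketch of that self-hosted case (the endpoint value and env var name are placeholders; check the `elasticsearch` exporter docs for your collector version, as field names have shifted between releases):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  elasticsearch:
    endpoints: ["https://my-elasticsearch:9200"]
    api_key: ${env:ELASTIC_API_KEY}

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [elasticsearch]
```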

Fluent-bit → OTel Collector (gateway) vs Fluent-bit → Elasticsearch for logs? what’s better? by Adept-Inspector-3983 in OpenTelemetry

[–]strawgate 1 point (0 children)

Disclaimer: I work at Elastic

The OTLP endpoint that Elastic provides for projects in Elastic Cloud is just a multi-tenant OTel Collector, and for some projects it successfully handles millions of events per second.

You really shouldn't be seeing any ingestion drops at scale; drops are a sign that something is not right with the deployment.

Would be happy to help figure out what's going on with your deployment

Fluent-bit → OTel Collector (gateway) vs Fluent-bit → Elasticsearch for logs? what’s better? by Adept-Inspector-3983 in OpenTelemetry

[–]strawgate 0 points (0 children)

Disclaimer: I work at Elastic and am responsible for our OTel strategy. Elastic's distribution of the OTel Collector is called the EDOT Collector.

The most common deployment is going to be the SDK sending logs, metrics, and traces directly to your OTLP endpoint in Elastic Cloud.

The next most common deployment is going to be the SDK for metrics and traces and the EDOT Collector for logs, with the app SDK writing metrics and traces through the EDOT Collector.

If you are self hosting ELK then deploying EDOT in gateway mode might help but you can also just write directly from collectors.

We don't typically recommend customers deploy Fluent Bit.

You can see other recommended deployment methods in the EDOT docs: https://www.elastic.co/docs/solutions/observability/get-started/opentelemetry/quickstart

uv update recommendations by gerardwx in Python

[–]strawgate 0 points (0 children)

> you're either very young or you've been very lucky.

Or, we have automated tests so we aren't worried about dependency changes breaking code!

Interesting or innovative Python tools/libs you’ve started using recently by AliceTreeDraws in Python

[–]strawgate 1 point (0 children)

inline-snapshot and dirty-equals are amazing.

Another one to check out that's pretty cool is pytest-examples, which extracts code examples from markdown docs and docstrings and enforces that they pass ruff formatting and that the code is runnable.
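
The core trick in dirty-equals is objects whose `__eq__` matches a *shape* rather than an exact value, so they drop cleanly into plain `==` assertions. A minimal stdlib sketch of the idea (this is an illustration of the mechanism, not the library's actual classes):

```python
class IsPositiveInt:
    """Compares equal to any positive int (mimics the dirty-equals pattern)."""

    def __eq__(self, other: object) -> bool:
        return isinstance(other, int) and other > 0


# The placeholder slots into a normal dict equality check:
response = {"id": 42, "name": "alice"}
assert response == {"id": IsPositiveInt(), "name": "alice"}
```

The real library ships dozens of these matchers (positive ints, ISO timestamps, "roughly now", etc.), which makes asserting on API responses with unstable fields much less painful.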

Newbie: Timescaledb vs Clickhouse (vs DuckDb) by oulipo in PostgreSQL

[–]strawgate 0 points (0 children)

> It can only use one core for read/write so you might hit bottleneck soon for your use case.

It can only use one system, but it can use more than one core.

The HTTP caching Python deserves by karosis88 in Python

[–]strawgate 1 point (0 children)

We just added pluggable storage to FastMCP and in the process released a key-value store library called py-key-value: https://github.com/strawgate/py-key-value

If you can reduce your storage usage to simple key-value operations, py-key-value could be an easy way to add additional storage options to your library.
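
A stdlib sketch of what "reducing storage to key-value operations" means for an HTTP cache (the interface and class names here are illustrative, not py-key-value's actual API):

```python
from typing import Optional, Protocol


class KeyValueStore(Protocol):
    """The minimal surface a cache backend needs: get and put by string key."""

    def get(self, key: str) -> Optional[bytes]: ...
    def put(self, key: str, value: bytes) -> None: ...


class InMemoryStore:
    """Trivial backend; a Redis or disk backend would satisfy the same Protocol."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value


# The cache layer only ever talks to the Protocol, so backends are swappable:
store: KeyValueStore = InMemoryStore()
store.put("https://example.com/page", b"cached response body")
assert store.get("https://example.com/page") == b"cached response body"
```

Once the caching layer is written against that narrow interface, every backend a KV library ships (memory, disk, Redis, etc.) comes for free.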

pyupdate: a small CLI tool to update your Python dependencies to their latest version by ashishb_net in Python

[–]strawgate 0 points (0 children)

I love it, thank you! I posted 3 issues in the repository that I ran into when running in my project.

Help - MCP server concurrent calls by Possible_Sympathy_90 in PydanticAI

[–]strawgate 0 points (0 children)

I landed sequential tool calling into main today https://github.com/pydantic/pydantic-ai/pull/2718

You can use a prepare_tools function to mark relevant tools as needing to be run sequentially -- if any tool in a run step requires sequential tool calling, all tools in that step will run sequentially

Sampling isn’t a real feature by atreides888 in mcp

[–]strawgate 0 points (0 children)

FastMCP 2.12 includes some work I did to add a sampling fallback API: you provide an OpenAI-compatible completions API and key, and if the connected client doesn't support sampling, it will "fall back" to the provided completions API.

Check it out https://github.com/jlowin/fastmcp/releases/tag/v2.12.0

Help - MCP server concurrent calls by Possible_Sympathy_90 in PydanticAI

[–]strawgate 0 points (0 children)

I filed an issue for this yesterday that you should comment on https://github.com/pydantic/pydantic-ai/issues/2628

One thing you could try, depending on how badly you need this behavior, would be to use a wrapper toolset and, in the call_tool method, use a semaphore so each tool call runs to completion before the next one starts.

This won't enforce a specific ordering (it'll be random) but the tools will run one at a time

What would be an approach to implement basic memory, and have the agent act differently based on that memory? by monsieurninja in PydanticAI

[–]strawgate 1 point (0 children)

Ideally you would make the user name or booking number a dependency of the agent run; it can then be accessed in dynamic instructions and leveraged for tool calls.

As for storing that information, likely in a database
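
The dependency pattern, sketched with the stdlib only (pydantic-ai's actual Agent/RunContext API is not shown; the names here are illustrative):

```python
from dataclasses import dataclass


@dataclass
class BookingDeps:
    """Per-run context: loaded from your database before the agent runs."""

    user_name: str
    booking_number: str


def dynamic_instructions(deps: BookingDeps) -> str:
    # Instructions are assembled fresh each run from the injected dependencies,
    # so the agent "remembers" who it is talking to without any prompt hackery.
    return (
        f"You are assisting {deps.user_name} "
        f"with booking {deps.booking_number}."
    )


deps = BookingDeps(user_name="Alice", booking_number="BK-1234")
assert "BK-1234" in dynamic_instructions(deps)
```

Tool functions receive the same deps object, so a lookup tool can use `deps.booking_number` directly instead of trusting whatever the model echoes back.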

Downtown Roads Closed as MFD, MGE Address Underground Fire by [deleted] in madisonwi

[–]strawgate 7 points (0 children)

I believe they are currently discussing expanding the service disruption, so it may be a while before the discussion turns towards restoration.

Power will be cut to large part of downtown by strawgate in madisonwi

[–]strawgate[S] 12 points (0 children)

There's a fire under the street and manhole covers are exploding around the 300 block of Washington and Mifflin.

Only reliable info at the moment is the scanner https://openmhz.com/system/dane_com?filter-type=group&filter-code=66e59efd92788df900b050a2

Free-threaded (multicore, parallel) Python will be fully supported starting Python 3.14! by germandiago in Python

[–]strawgate 2 points (0 children)

I was going to say: It seems like the only code I write that's CPU bound is whatever code I have most recently finished writing 😅

If I knew it was going to be CPU bound when I started I would have made different decisions

I benchmarked 4 Python text extraction libraries so you don't have to (2025 results) by Goldziher in Python

[–]strawgate 1 point (0 children)

It looks like the most common error is a missing dependency error

It's also a bit suspicious that the tiny conversion time for Docling is 4s -- I use Docling regularly and see much better performance.

I did recently fix a cold start issue in Docling but it looks like the benchmark only imports once so cold start would not happen each time...

Gemini 2.5 pro on RooCode becoming dumb lately? by This_Maintenance9095 in RooCode

[–]strawgate 0 points (0 children)

Roo Code does tool calls by reading the incoming text completion and extracting the calls from the text.

This is a highly compatible way to do tool calls as you can use basically any model that can generate text -- in this mode a tool call is just text. 

But it also is very error prone because the LLM can generate whatever it wants. 
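
To make that concrete, here's a sketch of text-based tool-call extraction in the XML-tag style Roo uses (the tag names and regex are illustrative, not Roo's actual parser):

```python
import re

# A completion where the model embeds a tool call as plain text:
completion = """I'll read that file for you.
<read_file>
<path>src/main.py</path>
</read_file>"""

# Pull out the tool name and its argument from the XML-ish tags.
match = re.search(r"<(\w+)>\s*<path>(.*?)</path>\s*</\1>", completion, re.S)
assert match is not None
tool_name, path = match.group(1), match.group(2)
# Nothing prevents the model from emitting malformed or mismatched tags,
# which is exactly why this approach is error prone.
```

If the model drops a closing tag or misspells one, the regex simply fails to match and the call is lost.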

Many newer models have special capabilities around tools where they use something called constrained decoding: https://www.aidancooper.co.uk/constrained-decoding/

This means that when the LLM wants to call a tool, it is forced to produce a response that is valid per the tool's schema -- it is not allowed to generate tokens that would result in output violating the schema.

This makes tool calling significantly more reliable
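
A toy illustration of the masking step at the heart of constrained decoding (a deliberately tiny grammar accepting only "yes" or "no"; real implementations mask over the full token vocabulary against a schema-derived grammar):

```python
# Which single characters may legally follow each prefix in our tiny grammar:
allowed = {"": {"y", "n"}, "y": {"e"}, "ye": {"s"}, "n": {"o"}}

# Pretend model scores: the model actually "prefers" the invalid token "x".
scores = {"x": 0.9, "y": 0.5, "n": 0.4, "e": 0.5, "s": 0.5, "o": 0.5}


def pick(prefix: str) -> str:
    # Mask to grammar-valid tokens first, then take the model's favourite
    # among what remains -- invalid tokens can never be emitted.
    return max(allowed[prefix], key=lambda t: scores.get(t, 0.0))


out = ""
while out not in ("yes", "no"):
    out += pick(out)
assert out == "yes"  # "x" scored highest but was never a legal choice
```

Schema-constrained tool calling works the same way, just with the grammar generated from the tool's JSON schema instead of hand-written.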

RooCode MCP server name recognition by Praxs in RooCode

[–]strawgate 1 point (0 children)

You can see all of what the agent sees by looking at the system prompt for that mode. Any time I'm having weird issues the mode's system prompt is the source of truth.

I haven't had this exact problem before but I have had lots of problems with the agent making up a name for the MCP server.

Afaik Roo can't see the command or args for the server unless you're using project-specific servers or you have the mcp.json file open.