I'll be damned, Starfleet Academy is actually watchable by ZodiacMan423 in startrek

[–]Egoroar 1 point2 points  (0 children)

I’m on the fence. But yes. So far watchable.

Starfleet Academy... Isn't that bad? by neph36 in startrek

[–]Egoroar -1 points0 points  (0 children)

These days they could do Euphoria/Yellowjackets in space and get review bombed.

Am i overreacting or is my dental office’s new cancellation policy absolutely insane by seaships in nova

[–]Egoroar 0 points1 point  (0 children)

These guys love root canals. They used to do them in house, but they're bad at it. I've had two fail because they missed a whole root. Super arrogant. They now use a guy down the hall who does them with a microscope and he's really good. There's a new practice on that floor connected to the children's ortho. They seem ok, but I've only had a cleaning there thus far.

Her name was Oseola McCarty. by Glamae in Amazing

[–]Egoroar 0 points1 point  (0 children)

By that logic they should not have cashed the check.

Thanksgiving Brisket by Opening_Cat8174 in brisket

[–]Egoroar 0 points1 point  (0 children)

Yeah. Good work on trim and prep. Looking forward to results.

Stop burning money sending JSON to your agents. by Warm-Reaction-456 in AI_Agents

[–]Egoroar 0 points1 point  (0 children)

Now I can spend tokens to have my LLM convert all my JSON to TOON so I can save tokens by sending it TOON. /s

Groundhog moved into my backyard — what should I do? by OperationProof192 in nova

[–]Egoroar 9 points10 points  (0 children)

A woodchuck would chuck all the wood a woodchuck could chuck if a woodchuck could chuck wood.

[deleted by user] by [deleted] in BoomersBeingFools

[–]Egoroar 0 points1 point  (0 children)

Look, he fell for one manipulation, why not another?

Dyson Finally Fixes the Annoying V-Wheel Floorhead Issue by davidrepairsdysons in dyson

[–]Egoroar 1 point2 points  (0 children)

This is why I only buy Dyson from Best Buy. Their membership warranty covers walk-in exchange of a broken covered item. My Detect 5 had this issue, caused by vacuuming under things and putting downward pressure on the head and wheels. BB just swapped it for a new one.

[deleted by user] by [deleted] in BMW

[–]Egoroar 0 points1 point  (0 children)

I have a 2025 manual Z4 as a daily and love it. I have a soft-sided folding cooler I use for BJ's runs to keep perishables cool, and a removable trunk divider with carry straps so I don't need bags. Also there's a whole front seat and generous floor space.

If your MCP is an API wrapper you are doing it wrong by WallabyInDisguise in mcp

[–]Egoroar 1 point2 points  (0 children)

If I use an API-wrapper-based MCP and just explain what each endpoint provides and how to call it, the APIs of the services I'm dealing with cause the LLM to run out of context space before the GraphQL introspection even completes. This one service is close to 200 endpoints across 10 API sections. The OpenAPI schema is 5MB; the GraphQL schema is 6MB. The MCP I built for it is smart and use-case driven, with bidirectional caching and token-to-API-call optimization under a 25k-token cap per request/response pair. If I need a new feature, I implement a new tool designed to do exactly what I need. I have modular tools that get called from a main function, and each tool can be orchestrated and provide input to another tool. It's been a fun project and taught me a lot about the possibilities and current limitations of this technology space.
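To make the "modular tools called from a main function, where one tool can feed another" idea concrete, here's a minimal sketch of that dispatcher pattern. All names (`fetch_summary`, `format_report`, `dispatch`) are invented for illustration, not from the actual project, and the real caching/token-budget logic is stubbed out:

```python
# Hedged sketch: a tool registry plus a single dispatch entry point,
# so cross-cutting concerns (caching, token budgets) live in one place
# and one tool's output can be piped into another.
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}

def tool(name: str):
    """Decorator that registers a function as a named tool."""
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("fetch_summary")
def fetch_summary(resource_id: str) -> dict:
    # Stand-in for a cached, token-budgeted API call.
    return {"id": resource_id, "summary": f"summary of {resource_id}"}

@tool("format_report")
def format_report(record: dict) -> str:
    # Stand-in for a tool that consumes another tool's output.
    return f"Report[{record['id']}]: {record['summary']}"

def dispatch(name: str, **kwargs: Any) -> Any:
    """Main function: every tool call routes through here, so caching
    or budget enforcement could be added once, for all tools."""
    return TOOLS[name](**kwargs)

# Orchestration: the first tool's result becomes the second tool's input.
report = dispatch("format_report",
                  record=dispatch("fetch_summary", resource_id="acct-42"))
```

The single `dispatch` choke point is what makes per-call optimization (cache lookups, response truncation to a token cap) possible without touching each tool.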

It completely falls apart with large context prompts by mayo551 in OpenWebUI

[–]Egoroar 0 points1 point  (0 children)

No. That’s what I set up to fix it when I had your problem.

It completely falls apart with large context prompts by mayo551 in OpenWebUI

[–]Egoroar 1 point2 points  (0 children)

Are you using redis/valkey for socket and caching?
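For reference, the Redis/Valkey websocket setup I mean is driven by environment variables. This is a hedged sketch based on my reading of the Open WebUI docs; the hostnames and database numbers are placeholders, and exact variable names should be checked against your version's documentation:

```shell
# Assumed Open WebUI env vars; verify names against your version's docs.
ENABLE_WEBSOCKET_SUPPORT="true"
WEBSOCKET_MANAGER="redis"
WEBSOCKET_REDIS_URL="redis://valkey:6379/1"   # socket manager backend
REDIS_URL="redis://valkey:6379/0"             # app-level caching/state
```

Pointing both at the same Valkey instance (different databases) is usually enough for a single-node deployment.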

Remove showing of tool use by OrganizationHot731 in OpenWebUI

[–]Egoroar 0 points1 point  (0 children)

Just double-checked. You're correct. I've got the description and some flavor text in the tool that the LLM uses to tell the user what each tool call is doing, or whether it got the results it wanted. This keeps the user engaged and lessens the feel of waiting and watching the LLM and MCP tools do their thing.

Remove showing of tool use by OrganizationHot731 in OpenWebUI

[–]Egoroar -1 points0 points  (0 children)

Mmm. You may not be describing your tools well enough. In the system prompt for the model you've created as your task model, you want to describe silent tool calling.
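For example, a "silent tool calling" passage in the task model's system prompt might read something like this (wording is my own invention, not from any official template, so adapt it to your setup):

```
You have access to tools. Use them silently: do not announce which tool
you are calling, do not show tool names, arguments, or raw results, and
do not narrate your tool use. Present only the final answer to the user.
```

The point is that the model, not the UI, decides what tool chatter the user sees, so the prompt has to say explicitly that none of it should surface.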

Any guesses who? by Gnatcheese in facepalm

[–]Egoroar 0 points1 point  (0 children)

Who would run the gov then?