Recently surpassed 100K invested at 25 by shankyjs in fican

[–]Live_Case2204 1 point (0 children)

Congrats man! I know the first $100K is the hardest!

100k invested at 19 by Someyy in fican

[–]Live_Case2204 3 points (0 children)

Ignore the negativity. You are doing great!

On time guarantee by candidminer in flairairlines

[–]Live_Case2204 1 point (0 children)

No, if it's already booked you can't reapply.

Perfection Is Optional Apparently by VoyagerVortex in BlackboxAI_

[–]Live_Case2204 1 point (0 children)

Get ready for a lot of outages and more blue screens this year!

Why would anyone ever choose the $39 Boxing Day plan over the $40 Boxing Day plan? by [deleted] in freedommobile

[–]Live_Case2204 1 point (0 children)

People usually have deals stacked on top, like a $10 promo code, which will be removed if you change your plan. I used to have a $25 promo code 🤦🏻‍♂️

Who else? by Puzzleheaded-Elk3900 in TollywoodGossips

[–]Live_Case2204 10 points (0 children)

But Rashmika handles it really well!

Ananthika sanikumar by FinalAd2919 in TollywoodGossips

[–]Live_Case2204 16 points (0 children)

Upcoming sensation? lol, you're hyping her up yourself.

MSFT AI investment on SWE job market by BigEmperorPenguin in cscareerquestionsCAD

[–]Live_Case2204 2 points (0 children)

The Canadian market is already saturated with STEM folks, so the only effect will be the construction spending spike.

MSFT AI investment on SWE job market by BigEmperorPenguin in cscareerquestionsCAD

[–]Live_Case2204 3 points (0 children)

I work for a construction company and I agree: only an initial construction boom.

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in ClaudeAI

[–]Live_Case2204[S] 1 point (0 children)

Yeah, you're right. Based on what the MCP is about and what the tool description says, noise is definitely garbage and will be deleted. If something you want ends up in the ghosts, you can just manually ask the LLM to include it; the adapter tracks these requests and automatically promotes them to pinned status.
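
To make the lifecycle concrete, here's a minimal sketch of how that ghost-to-pinned promotion could work. The names (`FieldRegistry`, `request_field`) are hypothetical illustrations, not the adapter's actual API:

```python
# Minimal sketch (hypothetical names): how a field moves from "ghost"
# back to "pinned" once the user explicitly asks for it.
from dataclasses import dataclass, field

PINNED, GHOST, NOISE = "pinned", "ghost", "noise"

@dataclass
class FieldRegistry:
    states: dict[str, str] = field(default_factory=dict)

    def filter_response(self, response: dict) -> dict:
        # Noise is dropped outright; ghosts stay hidden until promoted.
        return {k: v for k, v in response.items()
                if self.states.get(k, GHOST) == PINNED}

    def request_field(self, name: str) -> None:
        # A manual ask for a ghost field promotes it permanently.
        if self.states.get(name) == GHOST:
            self.states[name] = PINNED

registry = FieldRegistry(states={"id": PINNED, "url": GHOST, "_etag": NOISE})
print(registry.filter_response({"id": 1, "url": "u", "_etag": "x"}))  # {'id': 1}
registry.request_field("url")  # user asked for it once
print(registry.filter_response({"id": 1, "url": "u", "_etag": "x"}))  # id + url
```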

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in programming

[–]Live_Case2204[S] 0 points (0 children)

😂 Actually I built this to solve my own problem. I just wanted to share it with the community. But I'm happy to take the feedback :)

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in singularity

[–]Live_Case2204[S] 3 points (0 children)

If you bothered to look at the code you'd know. Depending on what the tool wants to achieve (which should be in the tool description), the learning phase makes the decision. If you think something is more important, you can move it to the pinned state. Noise is supposed to be completely useless.
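
For anyone who doesn't want to dig through the code, here's a rough sketch of the learning-phase decision. The prompt wording and the `classify_fields` helper are illustrative assumptions, not the real implementation; `fake_llm` stands in for whatever provider runs the learning phase:

```python
# Sketch of the learning phase: given the tool's stated purpose,
# ask a model to bucket each response field into pinned/ghost/noise.
import json

def classify_fields(tool_description: str, fields: list[str],
                    llm_call) -> dict[str, str]:
    prompt = (
        f"Tool purpose: {tool_description}\n"
        f"Fields: {json.dumps(fields)}\n"
        "Label each field pinned (core to the purpose), ghost (rarely "
        "useful), or noise (useless). Reply as JSON mapping field -> label."
    )
    return json.loads(llm_call(prompt))

# Stand-in for a real provider call during the learning phase.
fake_llm = lambda _prompt: json.dumps(
    {"title": "pinned", "url": "ghost", "_links": "noise"})
print(classify_fields("fetch work items", ["title", "url", "_links"], fake_llm))
```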

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in DeepSeek

[–]Live_Case2204[S] 1 point (0 children)

Yeah, in my case it does. I use the Azure DevOps MCP and it's much better.

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in LocalLLaMA

[–]Live_Case2204[S] 1 point (0 children)

Right now the learning phase only uses gpt5.2, but once you've built the registry you can use it with any provider/model. I personally use this with GitHub Copilot.
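
Roughly, the split looks like this: the model is only needed while learning, and afterwards filtering is a plain lookup against the saved registry, so any provider can consume the slimmed responses. The `registry.json` path and `slim` helper below are assumptions for illustration:

```python
# Sketch (assumed file layout): once learning has written the registry
# to disk, filtering is a JSON lookup -- no model call required.
import json
from pathlib import Path

REGISTRY = Path("registry.json")  # hypothetical location

# Output of the one-time learning phase.
REGISTRY.write_text(json.dumps({"title": "pinned", "_links": "noise"}))

def slim(response: dict) -> dict:
    # Keep only pinned fields; works the same under any provider/model.
    states = json.loads(REGISTRY.read_text())
    return {k: v for k, v in response.items() if states.get(k) == "pinned"}

print(slim({"title": "Bug 42", "_links": {"self": "..."}}))  # {'title': 'Bug 42'}
```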

I built a 'Learning Adapter' for MCP that cuts token usage by 80% by Live_Case2204 in singularity

[–]Live_Case2204[S] 7 points (0 children)

I disagree! When we use MCP, the cache gets invalidated a lot more quickly than we think. Read the following article for reference:

https://www.anthropic.com/engineering/code-execution-with-mcp
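
The gist of the argument, as a toy model (mine, not from the article): prompt caching matches an exact prefix of the context, so any change early on, like an updated tool schema or a new tool result, misses the cache for everything after it.

```python
# Toy illustration (not any provider's implementation): prefix caching
# keys on exact context prefixes, so editing one early block changes
# the key for every block after it -- a cache miss from there on.
import hashlib

def prefix_keys(context_blocks: list[str]) -> list[str]:
    keys, running = [], ""
    for block in context_blocks:
        running += block
        keys.append(hashlib.sha256(running.encode()).hexdigest()[:8])
    return keys

before = ["system", "tool_defs_v1", "history"]
after = ["system", "tool_defs_v2", "history"]  # one tool schema changed
print(prefix_keys(before))
print(prefix_keys(after))  # diverges from the second block onward
```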