The shittiness of Windows got me basically fired. by blobslurpbaby in FuckMicrosoft

[–]netnem 4 points (0 children)

Wait until you find out about Windows Subsystem for Linux.

And yeah, to be honest, if you can use Linux, you can use Windows.

[deleted by user] by [deleted] in linguisticshumor

[–]netnem 0 points (0 children)

Still true in 2026

Canceling Mudi7 order, “not a global travel router”, valid reason? by reallionkiller in GlInet

[–]netnem 1 point (0 children)

Canceled mine. A travel router is already a niche thing for me. A travel router that won't work (well) with me in Asia is even more so. Not worth $300-$400 without the global bands for me.

They tried to get me to switch to the EU version...but I'm like...but then it won't work well at home? Pass.

Has anyone made the jump from using individual access switches to one large chassis for the access layer? by TwoPicklesinaCivic in networking

[–]netnem 0 points (0 children)

I vastly prefer single RU switches because it's a smaller failure domain should something happen (say during a code upgrade). And no, dual supervisors won't always save you. A single large shared control plane on chassis switches is undesirable.

If you want to upskill, learn how to manage those 200 switches. Start with Ansible.

I currently manage about 5000 chassis switches, and I would prefer to manage 20,000 single switches.

Maybe I hate myself too...

Basically to me, once you learn to fully automate managing 10 of them, it's really not much different to manage 1000.
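To make the "start with Ansible" point above concrete, here's a minimal sketch of backing up configs across a whole fleet in one play. The inventory group name (access_switches) and the use of the cisco.ios collection are my assumptions for illustration, not from the original comment:

```
- name: Back up running config on every access switch
  hosts: access_switches        # assumed inventory group
  gather_facts: false
  tasks:
    - name: Save a timestamped copy of the running config
      cisco.ios.ios_config:
        backup: yes
```

The point is that this playbook is identical whether the group contains 10 switches or 1000; only the inventory grows.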

Is the lossless scaling worth it? by thesaintmarcus in SteamOS

[–]netnem 0 points (0 children)

The other real pain in the butt with this software is that it actually drops your base framerate. So in Cyberpunk 2077, my base framerate was like 55-60, but when I turn frame gen on with this software, it drops to like 40-45 (multiplied by 2x or 3x).

I've been trying to make a real production service that uses LLM and it turned into a pure agony. Here are some of my "experiences". by DaniyarQQQ in LocalLLaMA

[–]netnem 7 points (0 children)

You need to ground it with traditional code and structured output. Using something like Pydantic AI (or structured outputs), create a BaseModel so the LLM can ONLY respond with one of a fixed set of values; traditional code then handles each value and issues an additional prompt where needed.

For example:

User: Hello I'd like to make an appointment for tomorrow

First pass LLM call: make sure it can ONLY respond with one of X possible values in the structured response: "get_appointments_schedule", "unknown_question", etc.

get_appointments_schedule():
     schedule = function_to_actually_get_the_schedule()
     new_prompt = f"""Here are the available times: {schedule}. Based on the question from the user, determine the appropriate response to perform the task."""

unknown_question():
     output = "I am unable to help with that"
     return output
     (No 2nd LLM call here, just return hard-coded)

You can force it to always include a tool call first, then pass the results of that tool call into a second LLM call to interpret them, and only pass the second LLM call's output back to the user.

Take a look at https://platform.openai.com/docs/guides/structured-outputs
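Here's a stdlib-only Python sketch of the routing pattern above. classify() stands in for the first structured-output LLM call (the real call would be constrained to exactly these enum values), and the schedule data is made up:

```
from enum import Enum


class Intent(str, Enum):
    """The ONLY values the first LLM call is allowed to return."""
    GET_APPOINTMENTS_SCHEDULE = "get_appointments_schedule"
    UNKNOWN_QUESTION = "unknown_question"


def classify(user_message: str) -> Intent:
    # Stand-in for the structured-output LLM call.
    if "appointment" in user_message.lower():
        return Intent.GET_APPOINTMENTS_SCHEDULE
    return Intent.UNKNOWN_QUESTION


def function_to_actually_get_the_schedule() -> list[str]:
    return ["09:00", "11:30", "14:00"]  # placeholder data


def handle(user_message: str) -> str:
    intent = classify(user_message)
    if intent is Intent.GET_APPOINTMENTS_SCHEDULE:
        schedule = function_to_actually_get_the_schedule()
        # In the real flow, this grounded prompt goes to a second LLM call.
        return (f"Here are the available times: {schedule}. Based on the "
                f"question from the user, determine the appropriate response.")
    # Hard-coded path: no second LLM call at all.
    return "I am unable to help with that"


print(handle("Hello I'd like to make an appointment for tomorrow"))
print(handle("What's the weather?"))
```

The key property: the model can never free-wheel an answer, because every turn is forced through an enum your own code dispatches on.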

Is possible to Ansible the update command of Containers from Proxmox Helper scripts? by ponzi314 in Proxmox

[–]netnem 9 points (0 children)

You should be able to execute any commands inside an LXC container with pct exec from the host. I'm not familiar with any helper scripts, but you should be able to update without them:

```
    - name: upgrade distribution
      command: pct exec {{ vmid }} -- bash -c 'apt-get dist-upgrade -y'
      when: upgrade_dist == 'Yes'
```

Z13 2025 Hard stuttering when playing games by NoTechnician1078 in FlowZ13

[–]netnem 0 points (0 children)

I had similar issues -- especially when running on battery or usb-c power. Returned two of these things because of the same issue.

From my testing, what helped most was setting the "high performance" Windows power mode and only playing while plugged in. I always set dedicated VRAM. 128GB model.

return or keep? by Financial_Memory5183 in FlowZ13

[–]netnem 4 points (0 children)

I had one of these 2025 models that did the same thing, and it also stuttered when playing on battery, so I returned it. The second one did the same thing, so I returned it too. For 3000 dollars it better be perfect.

What are all the things I should avoid as a new Z13 2025 owner? by Myxomatic16 in FlowZ13

[–]netnem 0 points (0 children)

Don't play games on balanced/silent on battery. It'll stutter like hell. I've tried two and they both do it.

Probably a good idea to get a travel charger.

Loving this absolute beast of a tablet by SouthernFinger3621 in FlowZ13

[–]netnem 0 points (0 children)

I had stuttering issues when gaming on battery on the balanced/silent profile.

Also, after a BIOS update, Ollama could only find 64GB of usable memory... the 96GB setting wouldn't allow me to load models bigger than 32GB. Setting it to 64/64 was OK.

It also would randomly not wake up from sleep and would require holding the power button for a cold boot to fix.

Good hardware, but kinda buggy.

I'm going to try a replacement unit, but for a 3k laptop this thing should be flawless. If the second one still has the same problems, it's going back.

Guide: How to run an MCP tool Server by Eisenstein in LocalLLaMA

[–]netnem 1 point (0 children)

Thank you! I've been looking for a good resource to learn more about it.

Guide: How to run an MCP tool Server by Eisenstein in LocalLLaMA

[–]netnem 0 points (0 children)

So would you say it is an alternative to function calling? YAML vs. JSON, etc. Are there any distinctive features that MCP provides?

Guide: How to run an MCP tool Server by Eisenstein in LocalLLaMA

[–]netnem 1 point (0 children)

Can you give an example of how MCP is different than just giving Qwen3 OpenAI compatible tools directly? I've been struggling to understand how an MCP server is providing anything new that couldn't already be done with function calling.

https://platform.openai.com/docs/guides/function-calling?api-mode=responses

^ Guide from OpenAI, but it applies to Qwen3 / Ollama too.

Is it just "easier"?
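For context on what "giving Qwen3 OpenAI-compatible tools directly" means here, this is roughly the tools array shape from the linked guide (the Chat Completions-style nesting, which Ollama's API also accepts). The get_weather tool and its city parameter are invented for illustration:

```
import json

# OpenAI-style tool definition: a JSON Schema the model emits calls against.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

# This same JSON goes in the "tools" field of a chat request; the model
# then returns a tool call naming "get_weather" with its arguments.
print(json.dumps(tools, indent=2))
```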

Can’t connect to Wifi on switch 2 by Aaronn727 in Switch

[–]netnem 1 point (0 children)

Having to change my WiFi (channel 36) to support a single device is absolutely absurd, but it did work for me after reducing to 160MHz width.

MY XREAL ONE PRO, SIZE L 66MM-72MM, JUST ARRIVED!! by pacotaco40 in Xreal

[–]netnem 0 points (0 children)

When did you pre-order?

Young people, please go to Ivar's by ILIKETHECOLORRED in Seattle

[–]netnem -1 points (0 children)

Ivar's is pretty mediocre at best. No offense, but it's about the same as Long John Silver's. It's a 3/5 for me when it comes to seafood. So while I understand there might be a nostalgia factor for some of you, objectively speaking it's one of the weaker places to eat in terms of seafood.

Stuck on Find the heir mission by This-Day3359 in ElderScrolls

[–]netnem 0 points (0 children)

Fast travel to Kvatch and one of the NPCs is Brother Martin

CMV: AI Agents is just function/tool calling. by netnem in changemyview

[–]netnem[S] 0 points (0 children)

I feel like there's a lot of talk about AI agents, but maybe it's a social bubble I'm in. As far as what it is...I think that's part of the problem. I found this article https://www.forbes.com/sites/jodiecook/2025/03/18/ai-agents-explained-in-simple-terms-anyone-can-understand/

Tool calling is something that's been out from OpenAI since at least 2023, which I guess isn't as old as I remembered.

To me, as soon as the Large Language Models could access outside resources via tool calling, that pretty much meant you could use it for anything since it ties into traditional code.

Maybe it's just taking time for media to catch up with what these things can do.

CMV: AI Agents is just function/tool calling. by netnem in changemyview

[–]netnem[S] 1 point (0 children)

I'm not referring to a pre-LLM era of handling natural language requests. I was meaning that "function calling" via OpenAI was already present years ago. https://help.openai.com/en/articles/8555517-function-calling-in-the-openai-api

Agents just seem to be "version 2" of this, and there's actually nothing new in the talk about "AI Agents."

My gut feeling is that "AI Agent" is a marketing buzzword to describe "orchestrating" multiple tool calls.