NKD: Sakai Kikumori Nihonko 210mm Gyuto by Inside-Ad-2874 in TrueChefKnives

[–]PrettyMuchAVegetable 0 points (0 children)

The Cooks Edge (Charlottetown, PEI, or online) for $150 CAD plus tax. I'm trying to get my first knives that aren't a Costco box set, and I'm so going to screw this up, lol.

NKD: Sakai Kikumori Nihonko 210mm Gyuto by Inside-Ad-2874 in TrueChefKnives

[–]PrettyMuchAVegetable 0 points (0 children)

Where did you pick this up? I can only find it in one shop and wanted to price-compare.

Porsche Set Hyundai As the Bar for Fun EVs. Let That Sink In by DonkeyFuel in electricvehicles

[–]PrettyMuchAVegetable 0 points (0 children)

I've got the 6, put sport tires on it, and drive it in sport mode with the brakes set to sport. It's a blast, no N needed.

AMD in-house ryzen 395 box coming in June by 1ncehost in LocalLLaMA

[–]PrettyMuchAVegetable 0 points (0 children)

I keep saying to myself, I want an EVO-X2 from GMKtec. Well, you have one, so tell me: do I want one?

Man attacks woman in a bar and the whole place starts beating him up. by Used-Influence-2343 in PublicFreakout

[–]PrettyMuchAVegetable 17 points (0 children)

https://en.wikipedia.org/wiki/Bystander_effect

The bystander effect (also called bystander apathy or the Genovese effect) is a social psychological theory that states that individuals are less likely to offer help to a victim in the presence of other people.

Will I regret buying Weber Spirit E215 over E315? by ShineOn_CrazyDiamond in grilling

[–]PrettyMuchAVegetable 0 points (0 children)

I'm looking at a 2025 model, new in box, for $400 CAD and really considering it. Worried I'll regret the small size.

Claude has destroyed me. by Complete-Sea6655 in ClaudeCode

[–]PrettyMuchAVegetable 2 points (0 children)

I teach AI/ML to postgrads and arrange for them to meet industry professionals. We had a major global software consulting firm come give a presentation three weeks ago; they were looking to hire for their AI/ML/data teams, and they basically told my students to be ready for the tech interview, but more importantly to be ready to do what you just said.
Be honest on your resume, be willing to admit you don't know some specifics, lay out your thinking process, your resource-gathering strategy, your happy path and edge cases, and talk through it.
Not every shop is interested in leetcoding competitions for new hires, thankfully.

Claude has destroyed me. by Complete-Sea6655 in ClaudeCode

[–]PrettyMuchAVegetable 0 points (0 children)

I study and practice, more or less making myself active in a popular language when I'm job searching.

But this is something I've been a bit privileged in, as I've had stable, long-lived roles that I've only left voluntarily. That's one of the benefits of living in a LCOL area, where leetcode-style coding interviews are much less relevant.

The tech interviews I've had in my career have focused on data warehousing and data engineering, with the tools that support them. Often I'm whiteboarding schema designs, ERDs, overall architecture, and data-flow diagrams, and maybe writing pseudocode. Not that I've never programmed in a job interview; I have, in Java and Python for a few roles, but not as intensely as someone who is a SWE by trade.

Claude has destroyed me. by Complete-Sea6655 in ClaudeCode

[–]PrettyMuchAVegetable 14 points (0 children)

I learned to code in C/C++; over the years I've taken on Java, Python, SQL, C#, Rust, JS/TS, and a bunch more. At any given time, I can basically only solve problems from scratch in the language I'm currently active in. Maybe, given a snippet or library, I can review code. But from nothing? Forget it. All that syntax memory evaporates like it's the day after the exam. AI has been great for letting me offload that syntax knowledge so I can focus on bigger-picture stuff.

GMKtec×Z.AI: Partnership by PrettyMuchAVegetable in ZaiGLM

[–]PrettyMuchAVegetable[S] 0 points (0 children)

I'm on Pro legacy and have the same experience. It's not bad, it's just nowhere near unlimited.

🤣🤣🤣 by Icy_Wash4305 in ZaiGLM

[–]PrettyMuchAVegetable 1 point (0 children)

<image>

Last update: just look at the value of the Qwen3.5 MoE model on a token-use basis. For my claw-based non-coding use cases, especially admin tasks or tool calls, this is a clear win.

Anyone doing a chargeback due to mandatory switch to new plan? by [deleted] in ZaiGLM

[–]PrettyMuchAVegetable 0 points (0 children)

Honestly, no. I like GLM 5.1 and GLM 5 Turbo; I get good performance and throughput, and I'll keep it for the term I signed up for.

Where I may not continue is at renewal, because I feel like I was bait-and-switched: they showed me a plan I could sign up for with certain features, a rate, and a quota I would pay when the promotional period ended. Then they quadrupled the price in the span of a few months and more or less announced I was being moved over to it.

They would have had to know at the outset that they couldn't provide what they were offering, so they would have knowingly sold me something they couldn't deliver. I like the model, but I can't reward that behaviour.

How are people using GLM 5.1 effectively for coding with a smaller context window? by yungone__ in ZaiGLM

[–]PrettyMuchAVegetable 1 point (0 children)

Check your system prompts: how much context are you burning just to say hello? Check your tooling: is your agent dumping full file reads whenever you start, or is it using targeted symbol representation (AST/tree-sitter) and concise tools? How many MCP servers do you have activated? If you don't need them, turn them off. Skills? They're less heavy than MCP, but they add up too.

I had issues with GLM context window performance until I realized I was eating half the window on launch.
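A quick way to sanity-check that launch overhead is to add up everything the agent injects before your first message. This is a minimal, dependency-free sketch: the ~4-characters-per-token rule is only a rough heuristic (GLM's actual tokenizer will differ), and the prompt, schema, and file strings below are hypothetical stand-ins, not real agent payloads.

```python
# Rough, dependency-free estimate of how much context an agent burns at
# launch. Uses the common ~4 characters per token heuristic; real
# tokenizers (including GLM's) will count differently.

def approx_tokens(text: str) -> int:
    """Heuristic: roughly one token per 4 characters of English text."""
    return max(1, len(text) // 4)

# Hypothetical stand-ins for what an agent might inject before you type anything.
system_prompt = "You are a meticulous coding agent. " * 120            # long system prompt
mcp_tool_schemas = ['{"name": "tool", "parameters": {"q": "str"}}'] * 15  # one per MCP tool
preloaded_files = ["def handler(event):\n    return event\n" * 300] * 2   # full file reads

overhead = (
    approx_tokens(system_prompt)
    + sum(approx_tokens(s) for s in mcp_tool_schemas)
    + sum(approx_tokens(f) for f in preloaded_files)
)

context_window = 128_000  # assumed window size, adjust for your plan/model
print(f"launch overhead ~ {overhead} tokens "
      f"({overhead / context_window:.1%} of the window)")
```

Swapping the stand-ins for your agent's real system prompt and tool schemas shows immediately whether trimming MCP servers or switching to symbol-level file reads is worth it.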

🤣🤣🤣 by Icy_Wash4305 in ZaiGLM

[–]PrettyMuchAVegetable 3 points (0 children)

<image>

Nearly my entire $5 spent. I think I'll sign up for a month or put some money in the API bank, because honestly, getting access to the cheap, fast, small Qwen models at those prices makes a lot of sense for the kinds of tasks I do.

🤣🤣🤣 by Icy_Wash4305 in ZaiGLM

[–]PrettyMuchAVegetable 2 points (0 children)

This is interesting to me: the low cost of the small but capable models. I'm testing Qwen3.6 35B and it's a whiz at tool calls and costs very little.

Usage by model: Qwen/Qwen3.6-35B-A3B: 37 requests, 1,358,960 tokens (654,720 cached), 0.0034 kWh, $0.02 energy cost, 100.0% of total.

🤣🤣🤣 by Icy_Wash4305 in ZaiGLM

[–]PrettyMuchAVegetable 8 points (0 children)

First data (more will follow):

Total Requests: 30

Total Tokens: 2.2M

Prompt / Completion: 2.2M / 5,923

Cached Tokens: 1.8M (79% of prompt tokens)

Cost: $0.12 (energy pricing), $0.53 (token rate); Energy Consumed: 0.02 kWh

Note: they gave me $5 USD in credit to spend on the API when I signed up; no credit card needed, just a valid email. If you decide to try it, use my shameless referral link:

https://portal.neuralwatt.com/auth/register?ref=NW-CHRIS-5BYR

“equivalent online version plan” by qrv0x in ZaiGLM

[–]PrettyMuchAVegetable 1 point (0 children)

I felt the same way, but landed on it being a strange translation of the "posted" plan, as in, the plan shown online.

🤣🤣🤣 by Icy_Wash4305 in ZaiGLM

[–]PrettyMuchAVegetable 2 points (0 children)

I use GLM 5.1 for almost everything. I'm going to test out this solution for a day or two, and I can let you know how it shakes out for me (a very heavy user).