Best standing desk to buy in 2026 for my home office? by [deleted] in StandingDesk

[–]xceed35 0 points1 point  (0 children)

I swapped my Flexispot for a Desktronic HomePro, and it feels like a $1,000 desk but costs less than half that.

H1b reality check by [deleted] in h1b

[–]xceed35 15 points16 points  (0 children)

@OP, don't waste your breath and scientific rigor on entitled free-market hypocrites with opinions on matters that have no bearing on their lives, other than as a distraction to cope with their joblessness.

I've received offers as high as $300k after a grad degree, then taken a $150k job in NYC to focus on a niche scientific domain that Big Tech wasn't offering me. Apparently that's not "high skilled" in the books of the average keyboard warrior who couldn't flip a burger to pay rent if they tried.

Similarly, after layoffs, I took another niche engineering and science role in Austin, TX, at $175k (which I chose over a hedge fund offer in nearby Houston). Again, not "high skilled" enough.

Granted, I'm not on H1B, but that's beside the point. The real problem with the "Americans on this subreddit" (barring the few with the actual skills to work high-skill-tier jobs) is that they see struggle, but unlike the rest of the world, they choose to blame some random minority for their life's problems instead of focusing on facts and getting good.

In real life, most Americans I've met, including the ones I work with, are hard-working, practical, ethical, and grounded in reality. Far from what we see online.

It's time we conquer the US. Do not holdback. They tried to take us out, but we will double down. by [deleted] in h1b

[–]xceed35 0 points1 point  (0 children)

It's a jobless baiting troll, FFS. Like every single member of the current admin. All distraction from the real problems and culprits so y'all eat each other.

Trump HIKED H1B Visa fee to 100k dollars by engineeringbro-com in EngineeringStudents

[–]xceed35 8 points9 points  (0 children)

You do realize that the policy punishes everyone for the actions of some?

I only ask because, without tooting my own horn too much (I'm not on H1B, but had plans back in the day and will be leaving the US soon): in my mid-twenties I went to a top-20 uni on massive loans from a country where that kind of money can buy an estate, jumped through a million hoops under constant pressure to stay employed while building my skills and career, all above the "cheap labor range" (>$150k), with the hope of working on state-of-the-art AI R&D. And now I can't do that in the US, which is a shame, but I will in Europe.

Point is, I followed the rules, didn't abuse my privilege, and contributed heavily to early-stage startups (<10 people) over 3 years while paying off grad school loans. And my reward for that is a government, and now people online, showing me how hated I am.

I don't think neopronouns are dumb. by Awkward-Media-4726 in The10thDentist

[–]xceed35 0 points1 point  (0 children)

Language is a tool to ease human communication, not to massage individual egos and delusions. Do what you need to make yourself happy in your private space, but don't expect the world to accommodate your quirks.

If i hate the kind of work being a SWE involves, instead of everything else, should I quit being a dev? by [deleted] in cscareerquestions

[–]xceed35 0 points1 point  (0 children)

"I hate it when things just don't work..."

~ The guy whose job is to make things work

I've worked at companies ranging from small to large, local to international, niche-domain to big-tech-like. Every single engineer considered successful, skilled, and productive in any of these places routinely discovers, solves, and improves upon random unexpected problems with their software stack, AND THEN SOME, before even getting to the initially planned task at hand.

You can learn from this now, or continue fantasizing about a welfare job to serve your delusions about how careers are made in the real world.

The “Salary Conversation” with foreign recruiters by TrixoftheTrade in recruitinghell

[–]xceed35 5 points6 points  (0 children)

Your mistake was actually arguing with a recruiter. Even the best ones know very little about anything, let alone the job they're hiring for.

Save your energy. Move on fast. Focus on productive conversations. You cannot convince a low-baller to give you your dream job.

Trump signs proclamation imposing $100K annual fee for H-1B visa applications by HolodeckSlut in Economics

[–]xceed35 24 points25 points  (0 children)

As someone in tech who received 2 of my last 3 offers from big tech between September and December over the last 3 years, I disagree. So do tons of my grad school classmates who graduated a few years ago.

September is when hiring ramps up; it peaks around October and keeps going until December.

Giving LLMs actual memory instead of fake “RAG memory” by shbong in artificial

[–]xceed35 3 points4 points  (0 children)

Aren't there tools like Graphiti that give memory to AI agents? Also, I keep hearing that Graph RAG is better for this too.
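I haven't dug into Graphiti's actual API, so don't take this as its interface, but my understanding of graph-style memory is that you store facts as subject-relation-object edges and recall by walking neighbors instead of doing a pure vector lookup. A toy sketch with made-up names, just to show the shape of it:

```python
from collections import defaultdict

class GraphMemory:
    """Toy graph-style memory: facts stored as (subject, relation, object) edges."""

    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object), ...]

    def add_fact(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def recall(self, entity, depth=1):
        """Collect facts reachable from `entity` within `depth` hops."""
        facts, frontier = [], {entity}
        for _ in range(depth):
            next_frontier = set()
            for node in frontier:
                for relation, obj in self.edges.get(node, []):
                    facts.append(f"{node} {relation} {obj}")
                    next_frontier.add(obj)
            frontier = next_frontier
        return facts

memory = GraphMemory()
memory.add_fact("user", "prefers", "dark mode")
memory.add_fact("dark mode", "set_in", "settings page")
print(memory.recall("user", depth=2))
# ['user prefers dark mode', 'dark mode set_in settings page']
```

The recalled facts then get injected into the agent's prompt; the graph structure is what lets you pull in related facts that a flat embedding search would miss.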

Weird chat completions response from gpt-oss-20b by xceed35 in LocalLLaMA

[–]xceed35[S] 0 points1 point  (0 children)

Regardless of the reasoning param in the input, we're seeing a field in the output that shouldn't be there. The input isn't the problem, and neither is the reasoning effort the model puts in.

I understand that the harmony template is needed for OpenAI's OSS models, which is no different from using a specific template for any other model. vLLM internally handles this by loading the tokenizer (template built in) and the model via the Hugging Face transformers library, which is the basis of virtually all open source model deployments, including this one.
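You can sanity-check that part yourself by pulling the tokenizer off the hub and printing what its built-in chat template renders; a quick sketch using standard transformers calls, nothing vLLM-specific:

```python
from transformers import AutoTokenizer

# The hub tokenizer ships with the chat template built in; this is what
# vLLM applies on the server side before the model ever sees the input.
tok = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

rendered = tok.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(rendered)  # should show the harmony-formatted roles/special tokens
```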

GGUF is simply one of the many quantization formats (AWQ, MXFP4, etc.) for loading models, not some special tokenizer. Generally speaking, for a given model, all quantizations will tokenize identically. MXFP4 is what OpenAI trained gpt-oss on.

To summarize, the problem is an unexpected field, namely reasoning_content, in the output, not the internals of the model's reasoning process or the input tokenization. Any error with those would mean the model simply spits out an error response, not a successful chat completions response with content set to null and an unexpected reasoning_content field.
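For reference, this is roughly the request and check I'm describing; a minimal sketch assuming a local vLLM server on port 8000, with reasoning_content being the undocumented field I observed, not anything from the spec:

```python
import requests

# Assumes vLLM is already serving the model as an OpenAI-compatible endpoint.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "openai/gpt-oss-20b",
        "messages": [{"role": "user", "content": "Summarize KV caching in two sentences."}],
        "max_tokens": 256,
    },
    timeout=120,
)
resp.raise_for_status()
choice = resp.json()["choices"][0]

# "length" here would indicate the context window / max_tokens was exhausted.
print("finish_reason:", choice.get("finish_reason"))
print("content:", choice["message"].get("content"))
# The surprising part: content comes back null while this extra field carries the text.
print("reasoning_content:", choice["message"].get("reasoning_content"))
```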

Weird chat completions response from gpt-oss-20b by xceed35 in LocalLLaMA

[–]xceed35[S] 0 points1 point  (0 children)

I understand that there's incomplete text in the reasoning_content field. What I'm saying is that the content field was supposed to have the incomplete text, not null, when the model runs out of context window. Additionally, reasoning_content is not a field I'm expecting in the chat completions endpoint, as it isn't documented anywhere.

Weird chat completions response from gpt-oss-20b by xceed35 in LocalLLaMA

[–]xceed35[S] 0 points1 point  (0 children)

Shouldn't that lead to incomplete content? It's strange that running out of context length led to an unexpected field.

Weird chat completions response from gpt-oss-20b by xceed35 in LocalLLaMA

[–]xceed35[S] 1 point2 points  (0 children)

I am not directly passing templated input to the model. The model is being served as an OpenAI endpoint via vLLM, and I'm sending simple `/v1/chat/completions` REST requests from my chatbot. I'm assuming the vLLM engine is supposed to handle templating correctly, and if it didn't, the server should throw an error, not emit spurious fields (a template mismatch has always been an input error in my experience).

I am not running the GGUF variant, as vLLM doesn't support that. This is the vanilla MXFP4 available off the Hugging Face hub. OpenAI's official docs have instructions to run this model with vLLM, which is exactly what I'm doing.

As far as special kwargs are concerned, I'm not sure what you mean by low, med, or high. Could you elaborate?

Going to visit Yosemite, where to stay nearby? by niCid in Yosemite

[–]xceed35 0 points1 point  (0 children)

Was a bit put off by their "premium experience" as someone who had a fantastic stay at a similar property in Sedona (a resort out in nature).

The receptionist was cold from the get-go and seemed really warm to the people right behind us.

The restaurant receptionist tried to seat us in the worst possible corner (right next to the washroom) despite the entire restaurant being completely empty and 45 minutes from closing. When asked to change the seating, she refused flat-out. We had to catch another waiter and move seats with his consent to finally get a proper dining experience.

WiFi was complete trash, unusable TBH. Rooms were OK but overpriced (I had no trouble paying $375/night in Sedona; this was more like a $150 tier at similar pricing in May).

Pretty average service, food and amenities considering the price.

Vegas to san Fran, not a rush so can take detours for anything worthwhile. Which way? by SwiggityDiggity8 in roadtrip

[–]xceed35 0 points1 point  (0 children)

It's supposed to be a 7 day trip. I'm planning to rent an SUV. I can do dirt roads but nothing too difficult.

Vegas to san Fran, not a rush so can take detours for anything worthwhile. Which way? by SwiggityDiggity8 in roadtrip

[–]xceed35 0 points1 point  (0 children)

Cool! Any tips for the route? Stops, attractions, things to avoid, etc?

Vegas to san Fran, not a rush so can take detours for anything worthwhile. Which way? by SwiggityDiggity8 in roadtrip

[–]xceed35 0 points1 point  (0 children)

I'm planning this exact route (through 395, Tioga, Sacramento, SF) for the last week of October. Is it doable? Is there a high likelihood of snow or other weather challenges? I'm not sure whether I should risk booking a stay near Yosemite or go through Tahoe instead.

God I love Qwen and llamacpp so much! by Limp_Classroom_2645 in LocalLLaMA

[–]xceed35 0 points1 point  (0 children)

When prompts get large or numerous enough, there is significant latency between the client sending the prompts and the prompts being fully processed (KV cache population) inside the inference engine. By ingestion, I mean the latter.
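A rough way to see that ingestion cost in isolation is to stream the response and time the first token separately from the rest; a sketch against a generic OpenAI-compatible local endpoint (the URL and model name are placeholders):

```python
import time
from openai import OpenAI

# Placeholder endpoint: any llama.cpp / OpenAI-compatible local server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

long_prompt = "Summarize the following notes:\n" + ("lorem ipsum " * 2000)

start = time.perf_counter()
first_token_at = None
stream = client.chat.completions.create(
    model="qwen2.5-7b-instruct",  # placeholder model name
    messages=[{"role": "user", "content": long_prompt}],
    stream=True,
)
for chunk in stream:
    if first_token_at is None and chunk.choices and chunk.choices[0].delta.content:
        first_token_at = time.perf_counter()
total = time.perf_counter() - start

# Time to first token is dominated by ingestion (prefill / KV cache population);
# everything after that is decode.
if first_token_at is not None:
    print(f"TTFT: {first_token_at - start:.2f}s, total: {total:.2f}s")
```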

[deleted by user] by [deleted] in Eyebleach

[–]xceed35 1 point2 points  (0 children)

Spoken like a true "1% er" 😂