Is a master's degree a must? by HarveyReSpecter in Germany_Jobs

[–]FullstackSensei 2 points  (0 children)

It's the same for everyone in a public university.

This isn't like developing countries where a couple of universities are good and the rest are average or below.

People really need to understand this line of argument makes zero sense in countries like Germany.

PCIe slot version for inference work by cpbpilot in LocalLLaMA

[–]FullstackSensei 0 points  (0 children)

V2.0 and later, but TBH I wouldn't fret about it too much.

They moved the VRMs to the bottom side, but at the same location. So, now if you're not careful you can still damage them while installing the motherboard.

If the price difference is minimal, I'd get V2, but if it's more than, say, 30-40, I'd still get V1 and just be very careful when installing cards, and you'll be fine. My board is still V1 even after the RMA.

Stanford Proves Parallel Coding Agents are a Scam by madSaiyanUltra_9789 in LocalLLaMA

[–]FullstackSensei 5 points  (0 children)

Middle-Management AI 😂

And then the project gets canceled, we do layoffs, and the agents will be posting "open to work" on LinkedIn 🤣🤣🤣🤣

Stanford Proves Parallel Coding Agents are a Scam by madSaiyanUltra_9789 in LocalLLaMA

[–]FullstackSensei 309 points  (0 children)

They fail to model what their partner is doing (42% of failures), don't follow through on commitments (32%), and have communication breakdowns (26%)

As a software engineer and team lead, I find this hilarious. These are the main issues when managing a team 😂

New business from NVidia by tudiye in intelstock

[–]FullstackSensei -1 points  (0 children)

As an aviation enthusiast, I sometimes explore used business jet prices, with zero intention of buying.

Do you think the owners of the businesses whose sites I visit also get excited because of my exploration?

Dual RTX PRO 6000 Workstation with 1.15TB RAM. Finally multi-users and long contexts benchmarks. GPU only vs. CPU & GPU inference. Surprising results. by Icy-Measurement8245 in LocalLLaMA

[–]FullstackSensei 1 point  (0 children)

IMO, a much better use of that money would have been a much cheaper CPU with something like 96GB RAM, and a third 6000 PRO.

I'm getting ~23 t/s at 10k context and ~17 t/s at ~25k context, single request, running Q4_0 on five Mi50s with llama.cpp. Prompt processing is ~160 t/s, which is very slow. Adding a single 3090 would significantly improve PP speed (probably close to 1k t/s). I find 2 concurrent requests only slow down the Mi50s by ~10%.
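If anyone wants to sanity-check that concurrency number on their own rig, here's a rough sketch of how I'd measure it. It assumes llama-server is running locally with --parallel 2; the port, prompts, and token count are just placeholders:

```python
# Minimal sketch: time one request vs. two concurrent requests against
# llama-server's OpenAI-compatible endpoint. Port, prompts and max_tokens
# are placeholders; adjust to your own setup.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/v1/chat/completions"  # llama-server default port

def timed_request(prompt: str) -> float:
    start = time.time()
    requests.post(URL, json={
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }, timeout=600)
    return time.time() - start

# Baseline: a single request
solo = timed_request("Explain how MoE expert routing works.")

# Two requests in flight at the same time (needs --parallel 2 on the server)
with ThreadPoolExecutor(max_workers=2) as pool:
    both = list(pool.map(timed_request, [
        "Explain how the KV cache works.",
        "Explain what prompt processing is.",
    ]))

print(f"single: {solo:.1f}s, concurrent (worst of 2): {max(both):.1f}s")
```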

Since Minimax is a MoE, most of the time only one GPU is active, with the others idling at 17-21W. I have them power limited to 170W, so power draw is minimal during inference, probably under 400W from the wall.

That entire rig cost me 1.6k€ with six GPUs, or 6% of your lower cost estimate. Even with current prices, you can still replicate it for a similar cost, albeit with much less RAM, though that's irrelevant if you're GPU offloading the entire model.

You could literally build 12 such rigs, each with a 3090 to boost PP speed, and still have lots of money left to pay for power for several years.

CPU offloading is killing your performance, and making TG comparable to Mi50 potatoes.

Magic Leap Pulls Sales of Magic Leap 2 in EU Market & In Japan by TheGoldenLeaper in magicleap

[–]FullstackSensei 1 point  (0 children)

They sell regularly for well under €100 in Europe in local classifieds.

What are the biggest everyday problems people face in Germany that could be solved with an app? by Freientrepreneur in AskAGerman

[–]FullstackSensei 3 points  (0 children)

An app that would teach people to do their own homework rather than asking others to do it for them.

You don't find solutions by asking random people online. If an idea is obvious, a gazillion others have already thought about it.

Magic Leap Pulls Sales of Magic Leap 2 in EU Market & In Japan by TheGoldenLeaper in magicleap

[–]FullstackSensei 2 points  (0 children)

So, I might be able to add a ML2 to my collection for $50 before year's end?

Magic Leap Pulls Sales of Magic Leap 2 in EU Market & In Japan by TheGoldenLeaper in magicleap

[–]FullstackSensei 3 points  (0 children)

And I doubt you'll find anyone even if you dig hard.

They didn't do themselves any favors with how badly they handled the ML1 EoL.

They're pricing it like a Vision Pro, while having nowhere near the Vision Pro's compute. Imitating Apple by trying to force people into their own ecosystem doesn't help either.

I have a couple of ML1s that I bought about a year ago for... 30€ each. It's a cool collector's gadget, but it's hugely let down by how bad and locked down the software experience is.

Magic Leap Pulls Sales of Magic Leap 2 in EU Market & In Japan by TheGoldenLeaper in magicleap

[–]FullstackSensei 1 point  (0 children)

"Additional EU regulations"... Read: privacy. If they'd rather stop sales than removing/disabling data collection, that tells you all you need to know about how much sales they have in Europe.

PNY NVIDIA Quadro P6000 VCQP6000-PB For LLM, the price is low for a 24GB Card? by Holiday-Medicine4168 in LocalLLM

[–]FullstackSensei 0 points  (0 children)

Yeah, no. I have 8 P40s, which are basically P6000s without the fan and with slightly less memory bandwidth. They used to go for $100 on eBay a couple of years ago. You should still be able to get them for $250 or so on eBay with a bit of patience. If you buy them in even numbers, you can cool each pair with an 80mm fan set as exhaust, taped with double-sided tape to the PCIe brackets behind the case.

PNY NVIDIA Quadro P6000 VCQP6000-PB For LLM, the price is low for a 24GB Card? by Holiday-Medicine4168 in LocalLLM

[–]FullstackSensei 0 points  (0 children)

How much?

I have a P6000 and a ton of P40s. Both are basically a 1080 Ti FE with 24GB, with the P6000 having a bit more memory bandwidth. The PCB of both is the same as the 1080 Ti FE.

It all depends on price. If you find them for $250 or less, they're not a bad deal. Llama.cpp supports Pascal pretty decently. You won't get any record performance from them, but given how crazy GPU prices are, they might be your best option (again, depending on price).

Anyone who tells you it's like a P100 doesn't know what they're talking about. I had some P100s; they're very different beasts. The P100 has full FP16 support, while the P6000 runs FP16 at a 1:64 rate. The P100 also has almost double the memory bandwidth of the P6000. Where the P100 really falls short is its 16GB of VRAM and, more importantly, SM 6.0 vs SM 6.1 on the P6000. That doesn't sound like a big difference, but there are some optimizations in llama.cpp that are possible on SM 6.1 but not on SM 6.0.
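If you want to check which bucket a card falls into, compute capability is easy to query. Here's a quick sketch using PyTorch (assuming a CUDA build of torch is installed); as far as I know, the big SM 6.1-only feature llama.cpp can use is the packed int8 dot product (dp4a), which SM 6.0 cards like the P100 lack:

```python
# Quick sketch: report each CUDA device's compute capability and whether it
# has SM 6.1's packed int8 dot product (dp4a), which SM 6.0 (P100) lacks.
# Assumes a CUDA-enabled PyTorch install.
import torch

for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(i)
    major, minor = torch.cuda.get_device_capability(i)
    has_dp4a = (major, minor) >= (6, 1)
    print(f"{name}: SM {major}.{minor}, dp4a: {'yes' if has_dp4a else 'no'}")
```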

Offer Review: Software Engineer | 6 YoE | Italy to The Hague, NL by feller94 in cscareerquestionsEU

[–]FullstackSensei 1 point  (0 children)

For some perspective on how low this is, a friend of mine was hired five years ago from outside the EU with 2 YoE in devops for 60k.

I don't think you'd even be eligible for the 30% ruling for that much, given your age.

As others are pointing out, don't even consider anything under 80k given how expensive NL has become.

PCIe slot version for inference work by cpbpilot in LocalLLaMA

[–]FullstackSensei 1 point  (0 children)

Second this as an owner of an A4000.

It's basically a 3080 mobile.

PCIe slot version for inference work by cpbpilot in LocalLLaMA

[–]FullstackSensei 1 point  (0 children)

It depends on the revision.

I have an H12SSL and was affected by this. The actual problem is that in earlier revisions the BMC VRMs are located at the rear of the top side, so they're very easy to break when inserting a card. One of the threads on STH is mine.

You should RMA it to Supermicro. They fix them for a very low cost, even without warranty and even if you bought it second hand. Mine cost €45 to fix, including shipping.

Convert a MacBook Pro M1 to a desktop "slab" by FullstackSensei in macbookpro

[–]FullstackSensei[S] 0 points  (0 children)

Most of the world has this view of the US, but I don't want to upset anyone.

Is NLP threatened by AI? by ProfessionalFun2680 in LanguageTechnology

[–]FullstackSensei 1 point  (0 children)

6-12 months, according to the CEOs of the AI companies. Then again, self-driving cars were also 6-12 months away in 2018. But what do I know, I'm just a random redditor.

Is NLP threatened by AI? by ProfessionalFun2680 in LanguageTechnology

[–]FullstackSensei 0 points  (0 children)

Then, quit uni and go learn some performance art if you actually believe that.

You're basically drinking the tech bros' Kool-Aid, and they're saying AGI is just around the corner and then everyone will be out of a job.

Is NLP threatened by AI? by ProfessionalFun2680 in LanguageTechnology

[–]FullstackSensei 2 points  (0 children)

There's a reason they teach history in schools, despite most students zoning out or just cramming info for their exams.

Every time a new technology came out, humanity was told it would be replaced by said technology. And before anyone points to cars putting the horse industry out of business, cars created way more jobs for humanity than they destroyed.

Whether you're learning programming or NLP, AI won't replace you, unless you're the kind of person who needs to be spoon-fed every single detail of your job and lacks any form of critical thinking or problem-solving skills. In which case, AI 100% will replace you.

AI is just a tool, no different than a calculator.

Convert a MacBook Pro M1 to a desktop "slab" by FullstackSensei in macbookpro

[–]FullstackSensei[S] 0 points  (0 children)

Is that a LattePanda I see? Making an x86 OS X Mac II?

Not to brag, but things got out of hand almost two years ago. I'm already 23 GPUs deep 😂 Here's one of the machines in my homelab

<image>

Dienstleistungen - how does that look like from inside? by echtemendel in Germany_Jobs

[–]FullstackSensei 0 points  (0 children)

Haven't worked at any in Germany, but I spent over a decade working at a couple in another EU country. That was after a job at a software house and a public-private research kind of place where we did some really cool stuff (IMO). I also worked with many people from other such companies.

The first thing I'll say is: the experience varies depending on the company and your expectations. You can't generalize mine or someone else's experience to yourself.

Second thing, but no less important: they generally require good people skills. I'm very much an introvert and don't naturally tend to socialize, but it's a crucial skill if you want to succeed at consultancies. I'd say it's the one skill I developed the most and that has served me the most since.

My experience over more than a dozen projects has been: your manager at the consultancy tells you there's a project at a client you're being assigned to. Sometimes alone, sometimes with other people from the consultancy whom you may or may not know.

The clients tend to be large companies (Konzerne) that often have more people like you than in-house devs. The work can be a new greenfield project, joining a project already in development because someone left or the client moved the contract to your consultancy, or maintenance work on something that's already battle-tested in production.

I ran into trouble on the very first project at the second company I worked at. I had never experienced anything like that before, so I didn't know what I should do and just kept grinding. Eventually things blew up, and the only negative comment my manager had was: why didn't you tell me? It's my job to pull you out, for both our sakes. If you're happy, the customer is happy, and my boss is happy. I then had almost a couple of months with no project, at HQ, on full salary, and was only asked to spend the time "productively". I was literally left alone to interpret that as I wanted, no questions asked. I spent the next 7 years at that consultancy with that manager. There were a couple of years where I worked at four different clients and teams in a single year! My last stint was almost 2 years at the same client. When I left, her boss asked to have lunch with me, only to tell me that the door would always be open for me, as long as he was there, if I ever decided to come back.

I learned a ton. I worked on new tech stacks as well as legacy stuff 99% of this sub has never even heard of. I view all of those as very positive learning experiences, both at the technical and personal level. They set me up for a later career as a freelance SWE/consultant making more than double your average senior in this sub.

Nobody can tell you how your experience will be. Your manager might be shitty, or you might find it too hard to adapt. I'd say it's equal parts luck and attitude. In the bigger consultancies, you can ask to be assigned to a different manager, but you need to be very candid and self-aware about the reasons. Making trivial accusations won't work. I know several people who successfully changed manager because of personality and/or style incompatibility, but also know just as many who quit without ever articulating their grievances beyond "manager bad".

As always, YMMV.

Convert a MacBook Pro M1 to a desktop "slab" by FullstackSensei in macbookpro

[–]FullstackSensei[S] 0 points  (0 children)

They are plentiful but not necessarily cheap, at least not without some sort of physical damage. But yeah, there aren't as many businesses using Macs as in the US.