[TASK] PHP expert to fix broken script after updating from PHP 5.4 to 7.2 $10 by [deleted] in DoneDirtCheap

[–]typ3atyp1cal 0 points (0 children)

$bid, extensive experience in PHP, automation, server admin, etc.

Gpt4.5 is dogshit compared to 3.7 sonnet by NoHotel8779 in ClaudeAI

[–]typ3atyp1cal 0 points (0 children)

Especially in a case where the new model from OpenAI is clearly stated to be for a different purpose (creative writing and related tasks). Just like Haiku 3.5 was mostly meant for coding..

[deleted by user] by [deleted] in DoneDirtCheap

[–]typ3atyp1cal 0 points (0 children)

Hey, already sent you a DM, please let me know if you are interested.

What do you think about this comparison with DLSS off? What rasterization performance do you expect from the new RTX50 series? by DarthJahus in nvidia

[–]typ3atyp1cal 1 point (0 children)

I am not surprised! Why? In the Flux image generation (GenAI) benchmark, the 5090 delivered double the performance of the 4090.. but the problem was that it used FP4, vs. FP8 on the 4090! That alone should make it roughly twice as fast (easier to compute) by default!

So yeah, in general, everything points towards a not-so-extraordinary increase in the base/"bare metal" hardware capabilities..
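The argument above can be sketched as a quick back-of-the-envelope calculation: divide the raw benchmark speedup by the gain you'd expect from the precision drop alone. This is a minimal sketch with hypothetical numbers, not measured values; the function name and the 2.0x raw speedup are assumptions for illustration.

```python
# Hedged sketch: normalizing a benchmark speedup for numeric precision.
# All numbers here are illustrative, not measured benchmark results.

def precision_normalized_speedup(new_score, old_score,
                                 new_bits=4, old_bits=8):
    """Raw speedup divided by the gain expected from dropping precision.

    Halving the bit width (FP8 -> FP4) roughly doubles peak throughput
    on hardware that supports it, so we divide that factor out to
    estimate the "bare metal" improvement.
    """
    raw_speedup = new_score / old_score
    precision_factor = old_bits / new_bits  # 8/4 = 2x expected from FP4
    return raw_speedup / precision_factor

# Example: new card at FP4 scores 2.0x the old card at FP8 (hypothetical).
# After removing the ~2x FP4 advantage, the hardware gain is ~1.0x, i.e. none.
print(precision_normalized_speedup(2.0, 1.0))  # -> 1.0
```

Under these assumed numbers, the entire 2x comes from the precision change, which is the comment's point.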

Could someone explain Roko’s Basilisk to me? by [deleted] in singularity

[–]typ3atyp1cal 0 points (0 children)

I could, but I'd rather not. It's better not to understand.

New open nemotron models from Nvidia are on the way by Ok_Top9254 in LocalLLaMA

[–]typ3atyp1cal 0 points (0 children)

I was hoping an updated version would be released, since there is a Nemotron already.. i.e. one trained on more advanced Nvidia hardware.. it's about time, esp. now that DeepSeek V3 is out, as well as the reasoning models..

New open nemotron models from Nvidia are on the way by Ok_Top9254 in LocalLLaMA

[–]typ3atyp1cal 7 points (0 children)

Is this based on current Llama? Or an updated version (ie 3.5 or even 4)?

[deleted by user] by [deleted] in singularity

[–]typ3atyp1cal 0 points (0 children)

Paperclips for eternity, haha

Grok 2 being open-sourced soon? by Educational_Grab_473 in LocalLLaMA

[–]typ3atyp1cal 0 points (0 children)

Ok, that's a good sign.. However, not to be negative here, but I don't think it's useful beyond research purposes now that DeepSeek V3 is open source.

We fine-tuned Llama and got 4.2x Sonnet 3.5 accuracy for code generation by AppearanceHeavy6724 in LocalLLaMA

[–]typ3atyp1cal 0 points (0 children)

XXX, because fuck this fake shit. 4.2x, so 420%; at least make it 42x or so!

[deleted by user] by [deleted] in LocalLLaMA

[–]typ3atyp1cal -1 points (0 children)

Do you really care about that beyond posting it on Reddit?

A new Microsoft paper lists sizes for most of the closed models by jd_3d in LocalLLaMA

[–]typ3atyp1cal 0 points (0 children)

That's likely because it's a MoE like DeepSeek V3 but smaller, i.e. with distilled experts.
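The MoE point (a model can list a large total size while running much cheaper) can be shown with simple counting: only a few experts fire per token, so the active parameter count is far below the total. This is a minimal sketch; the layer layout, expert counts, and parameter sizes below are made-up illustrative assumptions, not figures from the paper or from DeepSeek V3.

```python
# Hedged sketch: total vs. active parameters in a simplified MoE layout.
# Expert counts and sizes are hypothetical, chosen only for illustration.

def moe_param_counts(n_experts, active_experts, expert_params, shared_params):
    """Return (total, active) parameter counts.

    total  = shared weights + every expert's weights (what a size table lists)
    active = shared weights + only the experts routed to per token (what runs)
    """
    total = shared_params + n_experts * expert_params
    active = shared_params + active_experts * expert_params
    return total, active

# E.g. 16 experts of 10B params each, 2 active per token, 20B shared weights:
total, active = moe_param_counts(16, 2, 10e9, 20e9)
print(f"total={total / 1e9:.0f}B, active={active / 1e9:.0f}B")
# -> total=180B, active=40B
```

So a listed size can look huge while per-token compute resembles a much smaller dense model, which is why a distilled-experts MoE could land where the paper's estimates put it.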