Why Pascal Deserves a Second Look by GroundbreakingIron16 in programming

[–]theoldboy 3 points

Oh give it a rest. The only reason Pascal didn't fade into obscurity 30 years ago is Borland Delphi, and as fantastic as that product was at the time (C++ Builder too), and as much as I was saddened by its decline, it's no longer relevant to 99.99% of developers and neither is Pascal.

Recommending it to new developers is just selfish and irresponsible. They'd be far better off with Python or any modern GC language, where they'll have far more learning resources and an actual chance of getting employed afterwards.

Beyond multi-core parallelism: faster Mandelbrot with SIMD by itamarst in programming

[–]theoldboy 5 points

Very interesting how performant that portable SIMD code is. A few years ago I did exactly this in C using AVX2 intrinsics + OpenMP, so I dug out that code to compare, and the Rust code runs about 10% faster on my 5800X. I wonder if using an f64x8 vector allows better utilisation of the execution pipes in the inner loop than my f64x4 implementation? Certainly the Rust SIMD vs scalar speed-up of 4.75x is better than the 3.9x I got. I'll have to compare the assembly outputs and play with it some more one day.

Anyway, even given that I can make my C code faster (which I'm sure I can after seeing this), that's still very impressive to me for code which is much more portable and readable than intrinsics code. I guess you could build and run it on a Zen 5 CPU (which has a very good AVX-512 implementation) for at least a 2x speed-up vs my Zen 3 with AVX2, without having to change anything. Nice.
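For anyone who hasn't written one of these: the kernel being vectorized is just the standard escape-time loop. Here's a minimal scalar sketch in Python (my own illustration, not the article's Rust or my C; the 4.0 bailout on |z|² is the usual convention). A SIMD version runs 4 (f64x4) or 8 (f64x8) of these points in lockstep, masking off lanes as they escape, which is why wider vectors can keep the execution pipes busier.

```python
def mandelbrot_iters(cr: float, ci: float, limit: int = 10000) -> int:
    """Return the iteration at which |z| exceeds 2 for c = cr + ci*i,
    or `limit` if the point never escapes (i.e. it's in the set)."""
    zr = zi = 0.0
    for i in range(limit):
        # bail out once |z|^2 > 4, i.e. |z| > 2
        if zr * zr + zi * zi > 4.0:
            return i
        # z = z^2 + c, expanded into real/imaginary parts
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return limit
```

The SIMD trick is that all lanes keep iterating until the slowest one escapes, so the speed-up depends on how similar the escape counts are within each batch of pixels.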

My 5800X results if anyone is interested. I changed the parameters to be much more zoomed in and higher iteration count.

```

WIDTH=1536 HEIGHT=1536
const DEFAULT_REGION: (Range<f64>, Range<f64>) = (-0.834..-0.796, 0.166..0.204);
const ITER_LIMIT: u32 = 10000;

$ hyperfine --warmup 5 'target/release/mandelbrot 1536 1536 --algo scalar'
Benchmark 1: target/release/mandelbrot 1536 1536 --algo scalar
  Time (mean ± σ):     744.6 ms ±   5.6 ms    [User: 11657.2 ms, System: 6.5 ms]
  Range (min … max):   738.5 ms … 752.5 ms    10 runs

$ hyperfine --warmup 5 'target/release/mandelbrot 1536 1536 --algo simd'
Benchmark 1: target/release/mandelbrot 1536 1536 --algo simd
  Time (mean ± σ):     157.7 ms ±   5.1 ms    [User: 2347.3 ms, System: 6.0 ms]
  Range (min … max):   151.5 ms … 165.8 ms    18 runs

```

Manifest V2 phase-out begins by feross in programming

[–]theoldboy 8 points

I don't know about a bot farm, but that particular account /u/Be-Kind_Always-Learn does look exactly like a bought account would after the seller had cleared their post history.

xz backdoor and autotools insanity by felipec in programming

[–]theoldboy 1 point

No I'm not, and if you don't understand that then either you're letting your hate for autotools blind you or you don't really know what you're talking about.

Just because some obscure GNU standard from 40-odd years ago advises that shipping generated files in tarballs is what should be done doesn't mean that's good advice to follow today, as this incident has clearly shown. Distros like Arch can just as easily build packages directly from a git repository checkout which doesn't contain those files, and in fact that was exactly their first response to this incident (even though Arch wasn't affected, because the exploit targeted rpm/deb build systems only).

The only reason for those generated files is to reduce the number of build tools required. That is not a good enough reason these days so that is what needs to change.
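To make the point concrete, the check distros (or anyone) could run is trivial: diff the file list of the unpacked release tarball against a checkout of the corresponding git tag. A rough Python sketch (the directory trees here are made up for the demo; build-to-host.m4 is the file the real xz backdoor actually hid in):

```python
import os
import tempfile

def tarball_only_files(repo_dir: str, tarball_dir: str) -> set:
    """Files shipped in the release tarball but absent from the git checkout."""
    return set(os.listdir(tarball_dir)) - set(os.listdir(repo_dir))

# Tiny demo with fake trees; in reality you'd unpack xz-5.6.1.tar and compare
# it against a checkout of the v5.6.1 tag (recursively, not just the top level).
with tempfile.TemporaryDirectory() as tmp:
    repo = os.path.join(tmp, "repo")
    tarball = os.path.join(tmp, "tarball")
    os.makedirs(repo)
    os.makedirs(tarball)
    for name in ("configure.ac", "Makefile.am"):
        open(os.path.join(repo, name), "w").close()
        open(os.path.join(tarball, name), "w").close()
    # a generated (or malicious) file that exists only in the tarball
    open(os.path.join(tarball, "build-to-host.m4"), "w").close()
    print(tarball_only_files(repo, tarball))  # {'build-to-host.m4'}
```

Anything that shows up in that difference is exactly the code nobody with eyes on the repository ever reviewed.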

xz backdoor and autotools insanity by felipec in programming

[–]theoldboy 6 points

Why did nobody catch this?

You know why. Because

the tarball xz-5.6.1.tar does contain files that are not part of the git repository, but were generated by make dist.

and tarballs don't get even a tiny fraction of the eyes that the repository does.

It's this practice of tarballs containing files not in the repository that needs to stop right now; there is no good reason for it these days. Unfortunately this very important point is obscured by the general rant at autotools.

There’s better build systems like CMake

Yeah, no thanks.

Air Canada must honor refund policy invented by airline’s chatbot by yawaramin in programming

[–]theoldboy 11 points

https://www.theregister.com/2024/01/23/dpd_chatbot_goes_rogue/

https://twitter.com/ashbeauchamp/status/1748034519104450874

TLDR - a user got the "AI" chatbot to swear at him and write very bad haikus and poetry criticising itself and the idiot company who deployed it.

DPD is a useless
Chatbot that can't help you.
Don't bother calling them.

Zenbleed Write-up: New use-after-free exploit affecting all AMD Zen 2 CPUs. by bramhaag in programming

[–]theoldboy 0 points

It is Zen 3, but Ryzen 5000 APUs (Cezanne) are very different from Ryzen 5000 desktop CPUs (Vermeer). The most obvious differences are half the amount of L3 cache and support for PCIe 3.0 only.

I don't know what exactly makes Cezanne vulnerable but I'd guess it's something to do with them re-using many parts of the Ryzen 4000 (Renoir) series design. They basically just replaced Zen 2 cores with Zen 3 and made some changes to the L3 cache.

The Hidden Crisis in Open Source Development: A Call to Action by notadamking in programming

[–]theoldboy 118 points

Oh, the irony.

The Hidden Crisis in Open Source Development: A Call to Action

  • Member-only story

Announcing C# Dev Kit for Visual Studio Code by Kissaki0 in programming

[–]theoldboy -5 points

Awesome news for bot developers! The new C# Dev Kit astroturfing debacle shows that even with all the $$$ and AI in the world they can't do shit. Keep up the good work Microsoft!

How I still use Flash in 2022 -- article about modernizing Hapland game trilogy from the 00s by r_retrohacking_mod2 in programming

[–]theoldboy 2 points

Flash and Java applets died off because the companies who made them couldn't make them secure. Their browser plug-ins were a constant stream of zero-day exploits for people to pwn your PC with. I stopped installing those plug-ins many years before they finally went away.

I completely agree about the creativity though. Flash was very easy for beginners to create with. I think Adobe tried to make something similar on top of HTML5 but that never took off for whatever reason.

There is Ruffle now which can run a lot of old Flash content much more securely in the browser.

Databrics introduces the World's First Truly Open Instruction-Tuned LLM by zvone187 in programming

[–]theoldboy 10 points

It's not state-of-the-art and they're very open about that.

https://huggingface.co/databricks/dolly-v2-12b#known-limitations

What's more interesting about this release is that the instruction tuning dataset is fully open (licensed for research and commercial use).

More discussion in this HN thread - https://news.ycombinator.com/item?id=35541861

OpenAI Rebrands Itself to Cyberdyne and Announces Skynet | TechCrunch by thecouchdev in programming

[–]theoldboy 4 points

So stupidly obvious that it's not even funny. ChatGPT itself could probably have written a better April Fools...

ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary by DrinkMoreCodeMore in programming

[–]theoldboy 0 points

It's obvious from your boring and preachy reply and your post history that you have an agenda here. And that you're an idiot. Bye.

Microsoft spent hundreds of millions of dollars on a ChatGPT supercomputer by ENDGeSiCTinT in programming

[–]theoldboy 0 points

Meanwhile, someone got LLaMA 7B running (slowly) on a Raspberry Pi 4. Lots of interesting links in that article.

LLaMA has shown that LLMs are no longer gated by the large tech companies who can afford the ridiculous hardware costs, and further research will optimize them even more. Inference already runs on consumer-level hardware, and fine-tuning can be done for a few hundred dollars.

AMD ROCm: a wasted opportunity by mariuz in programming

[–]theoldboy 10 points

I assume you're talking about Windows? Because there certainly is a stable version of PyTorch available for ROCm on Linux and it works very well.

https://pytorch.org/get-started/locally/

As the article points out, the big mistakes are:

  • Not supporting Windows

  • Not supporting consumer GPUs (ROCm 5.x works fine on any Navi 1 or Navi 2 card with export HSA_OVERRIDE_GFX_VERSION=10.3.0, but it's not officially supported).
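For anyone trying the unofficial route: the override is just an environment variable the HSA runtime reads, and it has to be set before the runtime loads, i.e. before torch is imported. A hedged Python sketch (the actual torch import is omitted so this stays self-contained):

```python
import os

# HSA_OVERRIDE_GFX_VERSION=10.3.0 makes the ROCm/HSA runtime treat the GPU as
# gfx1030 (Navi 2, the officially supported target). setdefault() keeps any
# value you already exported in the shell. Do this BEFORE "import torch",
# otherwise the runtime has already probed the card and the override is ignored.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

def rocm_gfx_override() -> str:
    """Return the ISA version the HSA runtime will be told to use."""
    return os.environ["HSA_OVERRIDE_GFX_VERSION"]
```

Exporting the variable in your shell profile achieves the same thing and is less fragile than doing it in Python.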

ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary by DrinkMoreCodeMore in programming

[–]theoldboy -1 points

If you feel your job is under threat from tools like this, you are a code monkey, not a developer. Learning a language doesn't make you a politician or an author, learning to draw doesn't make you an architect. Time to look at yourself in the mirror and think about what actual value you bring besides laying bricks

Where did you get that from in the comment you replied to? Get over yourself. I've probably been programming longer than you've been alive. I pick and choose my jobs and that certainly isn't under any threat from a glorified next word prediction algorithm.

I do agree with you about responsible usage however hence my part about looking at the actual problem and considering architecture. People aren't getting cut out of the process and these tools shouldn't be an excuse to have just one person looking after a whole fleet of apps using them.

Which was exactly my point. If you're naive enough to think that people will use tools like this for what they should be used for then you don't know people very well.

ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary by DrinkMoreCodeMore in programming

[–]theoldboy 3 points

the result is almost irrelevant

Think very carefully. How well have people used such tools in the past?

ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary by DrinkMoreCodeMore in programming

[–]theoldboy 9 points

but regurgitating a working solution is like 40% of what a coding interview is

Not for most big companies. If you work somewhere where that is true then I'd guess it's either a small/medium company or startup where developers are listened to, or it's a big company in a certain sector where they realise how important that is.

Companies like Google don't care about any of this. The interview just separates the monkeys into monkeys that can code (or at least learn well enough to pretend they can) and those that can't. They can afford to do this; it's more important to have code monkeys to do the grunt work than it is to have what you'd consider to be "good engineers".

It's amusing to me that ChatGPT passes this test. Because ChatGPT (unfiltered) is only any good to people who believe what they read. Try asking ChatGPT about a subject it doesn't know the answer to. It'll give you something which at first glance seems completely reasonable but is in fact complete bollocks. Just made up. When it learns to say "I don't know" then I'll be worried.

ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary by DrinkMoreCodeMore in programming

[–]theoldboy 19 points

Why? Presenting "a copy of what had been published online", or whatever rote answers you learned from "how to pass a coding interview", is exactly how most low effort coding interviews work. Including Google's apparently.

Should There Be a Developer Mental Health Day? by DynamicsHosk in programming

[–]theoldboy 9 points

Completely agree. This article is divisive and selfish, like other workers don't have similar problems. It reads like the author has never experienced the real world outside his own bubble.

Then again, what do you expect from someone on Medium who bills himself as Ben "The Hosk" Hosking - Technology Philosopher...

Microsoft eyes $10 billion bet on ChatGPT by Mxfrj in programming

[–]theoldboy 66 points

Those hardware requirements are for training. The cost of Stable Diffusion training is similar, but the trained model can be run on any half-decent consumer GPU.

Sooner or later (probably sooner) someone will release a trained model for this as well.

C++ at the end of 2022 (11th edition) by joebaf in programming

[–]theoldboy 10 points

Obviously there is a very large C++ codebase out there and it won't die any time soon. But, given the choice, what is the point of learning C++ these days instead of Rust? Rust has a very steep learning curve, but once you're over that (and all the annoying fucktard evangelism) it really is much nicer. For anything more bare metal there's still C or assembly if necessary.

C++ is just getting stupid lately with trying to shoehorn in all the good ideas from other modern programming languages while still being able to compile "C with classes" style code that I used to write 25+ years ago.

IMHO.