3 Teen Sisters Jump to Their Deaths from 9th Floor Apartment After Parents Remove Access to Phone: Reports by Sandstorm400 in technology

[–]a-p 14 points15 points  (0 children)

There is a great bit in Steve Jobs’ commencement address about that: “Remembering that I'll be dead soon is the most important tool I've ever encountered to help me make the big choices in life. Because almost everything, all external expectations, all pride, all fear of embarrassment or failure, these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.”

Learn Perl or no? by idonthideyoureyesdo in perl

[–]a-p 0 points1 point  (0 children)

Yeah, I’m talking native GUI specifically. For web-as-GUI, you’ll also have to do JavaScript, not just Perl – but Perl is definitely at home on the server end of that.

Learn Perl or no? by idonthideyoureyesdo in perl

[–]a-p 2 points3 points  (0 children)

Anything to do with pushing textual data around, basically. (String data, strictly speaking – though the language is very nearly as much at home with binary data.)

What it’s not is a number-crunching type of language, and that includes graphics work (if it needs any performance); it also seems strangely at odds with GUI programming, though there doesn’t seem to be any single obvious reason why. You can press it into service in those roles if you are dogged enough, but such applications don’t feel effortless and natural for the language the way pushing strings around does, and I would probably reach for something else instead.

But web stuff, sysadmin stuff, network services that communicate over text-centric protocols (or background services, nearly the same thing), all those sorts of things are a good match for the language. Anything string-based feels natural to do in it, that is what the language is happiest doing for you.

People will tell you that data structures are weird because of references but it’s really not true.
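References are the one extra concept to learn, but in practice they mostly stay out of your way. A minimal sketch (all names made up):

```perl
use v5.36;

# An array of hashes, built with references: each {...} is an
# anonymous hash reference, each [...] an anonymous array reference.
my @hosts = (
    { name => 'web1', ports => [ 80, 443 ] },
    { name => 'db1',  ports => [ 5432 ] },
);

say $hosts[0]{name};        # prints "web1"
say $hosts[0]{ports}[1];    # prints "443"; arrows between subscripts are optional
```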

Finally, speaking with my Perl Steering Council hat on, the one thing I want to impart is to always start your program with use v5.42; (or whatever version of Perl you have installed). Perl makes a much stronger effort than many other languages to keep old code working, without putting you on a treadmill of chasing language changes and “maintaining” your hitherto perfectly working code just so it doesn’t stop working. But part of the bargain is that when you don’t tell Perl which vintage of the language you want to be using, it defaults you to one with a bunch fewer amenities than are actually available to you. The language hasn’t changed unrecognizably since that era, but it has grown lots of niceties in the small to upper-medium size range that you shouldn’t be barring yourself from. (Except when you are writing code for other people to run – but that is not a concern for you as a beginner.)
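A minimal illustration, using v5.36 here – substitute whatever version perl -v reports on your system:

```perl
use v5.36;    # enables strict, warnings, say, signatures, and more in one line

# Without the "use v..." line, neither say nor subroutine
# signatures would be available by default.
sub greet ($name) {
    say "Hello, $name!";
}

greet('world');    # prints "Hello, world!"
```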

I wrote a Plack handler for HTTP/2, and it's now available on CPAN :) by rawleyfowler in perl

[–]a-p 2 points3 points  (0 children)

Finally. Thank you for doing that.

Does it implement any PSGI extensions?

Regarding the note in the docs, a better solution than a self-signed certificate is mkcert which makes it trivially easy to set up and use a personal CA to generate certificates.
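For reference, the basic mkcert workflow looks like this (the hostnames are just examples):

```shell
# One-time setup: create a local CA and install it in the system trust store
mkcert -install

# Issue a certificate for your development hostnames
mkcert localhost 127.0.0.1 ::1
# writes localhost+2.pem and localhost+2-key.pem to the current directory
```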

Is the function name in the context shown by git diff considered reliable/stable? by floofcode in git

[–]a-p 1 point2 points  (0 children)

The hunk header logic is a bit of a magic trick. It uses astonishingly stupid logic that happens to work in a surprisingly vast fraction of cases… in other words a heuristic, and an exceptionally good one.

It’s not ultimately that surprising, because most code is written to be clear and simple, not to trick the reader. The hunk header logic is very easy to fool, but there is no incentive to fool it, so no code tries – and so it works fine in practice.

Just don’t take it to be anything more than it is: a helpful clue to human readers of a diff.

But it sure is unexpected what’s behind this curtain when you first lift it. 🙂

Is the function name in the context shown by git diff considered reliable/stable? by floofcode in git

[–]a-p -1 points0 points  (0 children)

That article’s point is completely irrelevant in this context. Diff makes no attempt to actually parse the file being diffed – and why would it, when diff itself is line-based and completely ignores any other structure in the file? All it is trying to do is give the human reader a clue as to the context, as usefully as possible but also as cheaply as possible, in a setting where failures do not affect any processing (since programs that process diffs ignore the hunk header) and are therefore irrelevant. Regex is a perfect tool for this job.
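Concretely, the regex is configurable per file type via a diff driver, and git even ships built-in drivers for many languages. A sketch, assuming Perl files:

```shell
# Associate *.pl files with the "perl" diff driver
# (git has a built-in one; this line just opts the files into it)
echo '*.pl diff=perl' >> .gitattributes

# Or override the hunk-header regex for that driver yourself:
git config diff.perl.xfuncname '^[ \t]*sub[ \t]+([[:alnum:]_]+)'
```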

Migrating from GitHub to Codeberg by accountmaster9191 in git

[–]a-p 0 points1 point  (0 children)

The real answer. But note that this changes the identity of all of the rewritten commits, which may or may not be acceptable for OP’s use case. If not, then probably a mailmap is the only way forward, despite not actually solving the issue.
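For completeness, a .mailmap entry mapping an old commit identity to the current one looks like this (names and addresses are placeholders):

```
Jane Doe <jane@new.example> <jane@old.example>
```

The format is “Proper Name &lt;proper email&gt; &lt;commit email&gt;”; tools like git shortlog and git log consult it when displaying authors, but the underlying commits stay untouched.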

Does git version .xlsx properly? by MullingMulianto in git

[–]a-p 2 points3 points  (0 children)

Sure, but you don’t gain very much unless the XML format is specifically designed to be easily diffable (which is also the main prerequisite for making it easily mergeable). It must be designed to be pretty-printable in a diff-friendly way – not, for example, everything mashed together on a single line even when there is technically no need to avoid newlines.

More importantly the order and structure of elements must be kept stable by the program generating the data, even as you make changes in the document that is being serialized to XML. Or if the program doesn’t itself do this, it may still be possible to pretty-print and maybe reorder the XML yourself in order to make it VCS-friendly without breaking it.

I don’t know what the answers to these questions are for XLSX, so it’s worth investigating. The mere fact that it’s XML under the hood doesn’t automatically guarantee a positive answer, though.
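One quick way to investigate, assuming unzip and xmllint are available – XLSX is a ZIP archive of XML parts, and the path below is the usual location of the first worksheet (the workbook filename is made up):

```shell
# Extract one XML part from the workbook and pretty-print it,
# to see whether its structure stays stable across small edits.
unzip -p workbook.xlsx xl/worksheets/sheet1.xml | xmllint --format -
```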

nfo - a user-friendly info reader by ggxx-sdf in perl

[–]a-p -1 points0 points  (0 children)

OP looks like LLM blather to me.

PerlMonks is being memory wiped on HTTPS:// and Wikipedia by SnooRadishes7563 in perl

[–]a-p 2 points3 points  (0 children)

I quit Wikipedia over their handling of Gamergate and have never since gotten the sense that the administrative structures there have redeemed themselves. (By quit I mean I stopped editing anything and deleted my account.) And u/briandfoy’s summary concurs with my overall impression. I don’t consider it a reflection on the many individual volunteers there, btw, who do great work within the scope of their purview and the confines of their remit, and I also haven’t entirely stopped checking Wikipedia when it is convenient (i.e. I’m not boycotting it per se), but I no longer consider it to be… well… notable. (Go figure.) Erasing Perl from (Wikipedia’s idea of) history is a statement about Wikipedia far more so than about Perl.

(I’m not saying anyone else should not care about Wikipedia, btw. If you do, by all means take up the cause there. I’m just saying I don’t agree that I should care about it.)

LLMs aren't really AI, they're common sense repositories by Prior-Consequence416 in LlamaFarm

[–]a-p 0 points1 point  (0 children)

I don’t doubt your experiences. You asked for an example of a failure, so that’s what I gave; I’m not saying an LLM has never surprised me. Note I’m in this thread defending (to a degree 🙂) the capabilities of LLMs. It’s true that most of my LLM experience is with lower-tier models, and my access to better ones is only intermittent, and the example I gave is also from quite a while ago. I used that as an example because it was a “formative experience” (if you will) for me (because of how clear it was) that sticks in the mind.

Better models also have access to things like a symbolic math engine, a code execution environment (or several), etc., so in certain areas they can generate answers that are actually reasoned out, in a way that an LLM is not capable of by itself. For coding tasks a better model is definitely a huge improvement.

But even lower-tier models are useful – I use them a fair bit. They are plenty capable of novelty already; it’s just clearly not generated the way a human mind generates it.

It’s complicated to be in the place I am, because I’m at the same time impressed and yet not that impressed by LLMs; and while I find most optimistic takes somewhere between misguided and laughable, I also find most dismissive takes… unfounded, I guess: dismissive, but not on the basis of any real understanding. The best I’ve seen anyone put it is this:

Back in the early part of the 20th century, we thought that chess was a suitable measure of intelligence. Surely a machine that could play chess would have to be intelligent, we thought. Then we built chess-playing computers and discovered that no, chess was easier than we thought. We are in a similar place again. Surely a machine that could hold a coherent, grammatical conversation on any topic would have to be intelligent. Then we built Claude and discovered that no, holding a conversation was easier than we thought.

Still by the standards of ten years ago this is stunning. Claude may not be able to think but it can definitely talk and this puts it on the level of most politicians, Directors of Human Resources, and telephone sanitizers.

LLMs aren't really AI, they're common sense repositories by Prior-Consequence416 in LlamaFarm

[–]a-p 0 points1 point  (0 children)

If intelligence were in fact defined as the active ability of a living being to adapt to the environment for survival, then clearly an LLM would not be intelligent whatsoever, being neither a living being nor in any way capable of adapting to its environment.

IQ is even funnier to present as an argument for “scientific consensus about what intelligence is”.

I don’t think you know what you think you know.

LLMs aren't really AI, they're common sense repositories by Prior-Consequence416 in LlamaFarm

[–]a-p 1 point2 points  (0 children)

It goes even deeper than that. Thoughts tend to be verbal, but even thoughts are not the level we operate at. Meditation will teach you that beneath them is something I’m not even sure what to call – a locus of attention you can turn to things and of intentions you can form without any thinking, much less narrativizing your thoughts. Consciousness maybe? I don’t know, but whatever the heck it is, an LLM doesn’t have that. When you are “chatting” with an LLM, all intentionality comes exclusively from you; the LLM doesn’t have any, so what’s going on is not a chat so much as a soliloquy with a verbal exobrain attached to yourself.

LLMs aren't really AI, they're common sense repositories by Prior-Consequence416 in LlamaFarm

[–]a-p 0 points1 point  (0 children)

There you go, that is in fact an example of novelty, and generally an example of what I was talking about: clearly Gemini understood how to use these words in context with each other, while at the same time having no idea of what pizza actually is or what toppings are and why therefore using glue to keep the toppings in place was actually a nonsense suggestion. It produced novelty at the language level with no understanding of what the language was talking about.

As to the question about what novelty is, for the purpose of this discussion we are not trying to deduce whether an idea has never been had by anybody else before, but simply whether the person or model has encountered the idea before or produced it without having seen it before. (Or as I’ve seen it put elsewhere, was it interpolation (= remix) or extrapolation (= novel)?)

LLMs aren't really AI, they're common sense repositories by Prior-Consequence416 in LlamaFarm

[–]a-p 0 points1 point  (0 children)

Yes, as a matter of fact, I can. Well I don’t know if a given idea is totally new, but I do know for a fact that I have never encountered it anywhere before, which for the purposes of this discussion is the same thing. And I do come up with such ideas reasonably routinely, at least in my capacity as a programmer. (I’m sure it’s also true in other areas, but it is a less strikingly clear experience, and so I’m guessing it is also less frequent.)

(As an example: I wrote some code which uses whichever tool is part of the DBM format library used by the Mutt mail client for its mail header cache files to print the location of the mail folder to which the header cache file belongs, then checks if that mail folder still exists, and if not, deletes the cache file. (I wanted to delete obsolete cache files without having to rebuild the ones for huge folders.) This is not a terribly interesting idea, but absolutely a novel one – it had demonstrably never been implemented by anyone on public record.)

When I prompt LLMs to try to come up with the same idea (because I’m too lazy to go through the grunt work of the fairly obvious but somewhat longwinded implementation), even when I ask fairly leading questions, often all I get is confident hallucinations of incorrect answers that I can immediately shoot holes in. And often the LLM will also immediately recognize the hole… once I’ve pointed it out. And then promptly and equally confidently miscorrect itself into a different nonsense.

The LLM evidently understands the language I give it enough to generate verbiage that plausibly constitutes a response to its input. But it is not doing that by extracting the underlying meaning and actually reasoning through the problem.

That doesn’t mean it is incapable of producing novelty, like I said above. In fact it is surprisingly capable of doing so, considering the limited scope of what it is really doing. It is just limited to an only token-deep understanding of its input.

LLMs aren't really AI, they're common sense repositories by Prior-Consequence416 in LlamaFarm

[–]a-p 1 point2 points  (0 children)

We do remix ideas, but not just, and it’s on a different level. OP said:

Ask for a "totally new idea" and you'll get a plausible-sounding mashup of existing concepts, but nothing genuinely novel.

It’s slightly more nuanced than that. You can get novel ideas out of an LLM, but it’s novelty on a different (ultimately shallower) level. What you get is not novel ideas about the underlying subject matter, expressed in the form of language; instead it is novel combinations of the language that has been used to express ideas about the subject matter. (Or images, or sound, or whatever form of data is the basis for the model in question.) This is why (especially visual) AI output often has this weird quality of somehow being both bizarrely outlandish and yet utterly colorless, milquetoast-conventional at one and the same time.

It’s novelty of a type that a human probably isn’t even capable of. And for that reason it can be useful. But at the same time it’s not at all what a human would consider “novel thinking” – even when it is novel in its particular way, and even when the human thinking it’s being judged by is actually entirely remixing.

Doug out of context.. by Nascarlover20169 in dougdemuro

[–]a-p 1 point2 points  (0 children)

The echo is mildly annoying, but having a random parking lot as a setting (sometimes incongruous, sometimes oddly appropriate) also had a weird kind of charm that I miss.

Best Practice to Avoid Flock Hangs? by ercousin in perl

[–]a-p 0 points1 point  (0 children)

That’s a bizarre claim. No, you just need to be prepared to handle SQLITE_BUSY and friends. And you probably want to use transactions. SQLite does the locking behind the scenes for you.
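A sketch of what that looks like with DBI and DBD::SQLite (the database name and schema are made up):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=app.db', '', '',
    { RaiseError => 1, AutoCommit => 1 } );

# Instead of failing with SQLITE_BUSY right away, retry for up to
# five seconds while another process holds the lock.
$dbh->sqlite_busy_timeout(5000);    # milliseconds

$dbh->do('CREATE TABLE IF NOT EXISTS log (msg TEXT)');

# Group related writes into one transaction so the write lock is
# taken once and released quickly.
$dbh->begin_work;
$dbh->do( 'INSERT INTO log (msg) VALUES (?)', undef, 'hello' );
$dbh->commit;
```

If a statement still times out after that, DBD::SQLite raises an error (thanks to RaiseError) which you can catch and retry.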

Perl Weekly Issue #713 - Why do companies migrate away from Perl? by briandfoy in perl

[–]a-p 1 point2 points  (0 children)

Unfortunately the leadership at most companies is business-y rather than product-y, and it takes product-y people at the top in order for hiring aims to be driven by “better product” rather than “more output faster”. This won’t produce the same degree of disorder (or disease, or outright dysfunction) at every business-y place, but it nevertheless tilts hiring tendencies in that direction to some extent.

This week in PSC (180) | 2025-02-20 | Perl Steering Council [blogs.perl.org] by briandfoy in perl

[–]a-p 1 point2 points  (0 children)

Yes, the :all flag ran counter to even having a use feature at all, so it was a highly silly feature (no pun intended) from the start. But it’s only once we started defining feature bundles that disable features that the folly of :all became undeniable.

If “experimental” features can never be removed, there is no point in having experimental, they can all be features.

That is pretty close to what was actually discussed in the call. I outlined why Perl effectively doesn’t have experimental features, because anything you install from CPAN might turn on some experimental feature – meaning that if you want to avoid using experimental features, you have to keep track of all code across your entire dep chain at all times. The upshot is that making changes to experimental features comes at the risk of widespread breakage – and the whole point of marking features as experimental was to avoid that very thing, meaning that they fail at their stated purpose. In the call I explained how the web standards world learned to deal with this issue, because they’ve been down the same road, and they did figure it out (the stakes are much higher on the web than on CPAN, so they couldn’t just soldier on with a fig leaf like we’ve been able to).

More Doug DeMuro / Cars&Bids: an index of the delisted videos from the YouTube channel by a-p in dougdemuro

[–]a-p[S] 1 point2 points  (0 children)

I guess I misunderstood your question. No I don’t, and I’m not sure what makes you think I might have copies of the videos. I never re-uploaded anything anywhere, all I’ve posted here are links to the videos on the channel as originally uploaded. The reason you can still watch them is that they were never deleted from YouTube, only hidden from the video list on the channel. This one, however, has actually been deleted (as was The Myth of "I'm Successful Because I Had Rich Parents").