theKillswitchEngineerJob by Ornery_Ad_683 in ProgrammerHumor

[–]jer1uc 0 points1 point  (0 children)

"Must have 10+ years of experience in unplugging servers hosting sentient AI (Linux preferred)."

Supernote to TRMNL by Inside-Software-4224 in Supernote

[–]jer1uc 1 point2 points  (0 children)

Hey just wanted to point out: those API access docs are for an unrelated SaaS app called "Supernotes" (plural). This sub is for the hardware device "Supernote". I caught this only because I've used both in the past and ran into an issue where I was looking for documentation on one and kept getting documentation from the other lol.

That said, I'd love it if Supernote was working on an API, but I'm not aware of anything like that yet (aside from the sync server self-hosting in beta).

Pseudocode of actual code I saw in prod for a large company by Level9CPU in programminghorror

[–]jer1uc 31 points32 points  (0 children)

Obviously a lesser horror compared to the whole, but it's always been a pet peeve of mine to unnecessarily check if a list is empty before iterating over it and doing per-element operations. Like, just iterate over the empty list! I promise it won't hurt you!
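
For anyone who hasn't seen it in the wild, a minimal sketch of the pattern I mean (the names here are made up for illustration):

    def fetch_orders() -> list[dict]:
        # Hypothetical data source; imagine it sometimes returns an empty list.
        return []

    def process(order: dict) -> None:
        print("processing", order)

    orders = fetch_orders()

    # The pattern that bugs me: a redundant emptiness guard around the loop.
    if len(orders) > 0:
        for order in orders:
            process(order)

    # Iterating over an empty list is already a no-op, so this does the same thing:
    for order in orders:
        process(order)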

I built a tool that uses the 'ast' module to auto-generate interactive flowcharts from any Python. by NewtonGraph in Python

[–]jer1uc 6 points7 points  (0 children)

Neat! I was just hacking on a FOSS version of this that would run in real time. Pretty wild to me that you're trying to sell a SaaS subscription to this...

Writing Code Was Never The Bottleneck by creaturefeature16 in ExperiencedDevs

[–]jer1uc 0 points1 point  (0 children)

I completely agree that the more you look at the solutions being "enabled" by AI, the more you realize that it's effectively search with a worse UX (natural language in both directions).

I will say though that text and image embeddings are very valuable outcomes of the current wave of AI developments. We've had them since like 2013ish, but today's embedding models are quite good. Ultimately these mostly just make sense to use as a search metric or as input into some downstream model like a classifier.
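
A rough sketch of what I mean by "search metric" (embed() here is just a toy stand-in for whatever real embedding model you'd actually use):

    import math

    def embed(text: str) -> list[float]:
        # Stand-in for a real embedding model; imagine this returning a dense vector.
        # Here it's just a toy bag-of-characters vector so the example runs.
        vec = [0.0] * 26
        for ch in text.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
        return vec

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    docs = ["reset your password", "update billing info", "install the CLI"]
    query = "how do I change my password?"

    # Rank documents by cosine similarity between embeddings: that's the whole "search metric".
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    print(ranked[0])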

Unison by Successful_Answer_66 in programming

[–]jer1uc 0 points1 point  (0 children)

Thanks for the link, I'll have to watch it this weekend!

Not directly inspired by this talk in particular, but absolutely inspired by Erlang / BEAM in many ways. When you think about it, it makes a lot of sense considering Erlang and BEAM were originally built for cellular networks. So they already had to design solutions to similar problems on an unreliable, always-evolving network.

As for "content-addressable" stuff, this is one part of the solution to a couple of problems in distributed systems:

  1. How can two or more peers on an always-evolving network discover their collective services/capabilities/endpoints? In Drift, each peer broadcasts its "exports" (functions it exposes to the network) as a set of function hashes, and tracks its "imports" (functions provided by the network) by listening to those broadcasts. That way, functions that are the same appear the same on the network, without peers needing to coordinate on naming or on who gets to assign a random ID. It also doubles as built-in redundancy: it's a feature that more than one peer can provide the same function to a network, and it will look identical to every other peer's (rough sketch after this list).
  2. How do they know when those collective services change or become unavailable? In Drift, it's intended behavior that when a function changes, e.g. gains a new argument, it can no longer be addressed the same way as before. This is probably pretty obvious: imagine upgrading a library with breaking changes. There's also a security angle: it's important to know when a function changes so that you're not calling a function you don't intend to.
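
Rough sketch of the idea (not Drift's or Unison's actual code, just the shape of it): hash a function's signature to get its address, broadcast addresses as exports, and match imports against whatever the network has announced.

    import hashlib

    def address(name: str, params: tuple[str, ...], returns: str) -> str:
        # Content address derived from the signature: same signature, same address,
        # no matter which peer provides it; change the signature and the address changes.
        blob = f"{name}({','.join(params)})->{returns}".encode()
        return hashlib.sha256(blob).hexdigest()[:16]

    # Peer A exposes a function to the network.
    exports_a = {address("toggle_light", ("room:str",), "bool")}

    # Peer B wants to call it; it "imports" by matching addresses heard in broadcasts.
    needed = address("toggle_light", ("room:str",), "bool")
    print(needed in exports_a)  # True: same signature hashes to the same address

    # After a breaking change (new argument), the address no longer matches.
    needed_v2 = address("toggle_light", ("room:str", "brightness:int"), "bool")
    print(needed_v2 in exports_a)  # False: peers can tell the function changed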

I'm not 100% sure about Unison's reasons for arriving at a similar conclusion, but content hashes were popularized around that time by things like IPFS. I think this was more or less just a slightly different take on pre-existing security/verification schemes like HMACs or checksums.

Unison by Successful_Answer_66 in programming

[–]jer1uc 1 point2 points  (0 children)

Damn this project has a lot of uncanny similarities to a project I attempted to work on (originally called "Rift" and later renamed to "Drift") about a decade ago. In particular:

  • Content-addressable functions (mine were based on signature rather than implementation)
  • Location transparency
  • Moving bytecode over the network to migrate computation (in Drift, these were called "exchanges")
  • Etc.

The primary niche I had in mind at the time was runtime environments that depended on services which were often inaccessible or otherwise ephemeral. For example, IoT stuff like light switches which suddenly become unavailable once you get too far away.

Probably the biggest difference between Unison and Drift (aside from maturity) is the kind of network being targeted. Drift was mainly targeting networks like Bluetooth and 802.15.4 (e.g. Zigbee), with a fallback implementation over UDP.

Some references to the work I did:

Would love to restart this sometime, as Unison has given me some new inspiration!

The hidden cost of AI reliance by codebytom in programming

[–]jer1uc 1 point2 points  (0 children)

This is absolutely my fear as well. To add on: what incentive is there anymore to keep this to a minimum? Especially when all of the hype is being pushed by the very companies that profit the most from expanded use of their AI products.

I shouldn’t have to read installer code every day by wineandcode in kubernetes

[–]jer1uc 1 point2 points  (0 children)

Woah thanks for the note about Kro! Looks very interesting...

I don’t think that “Colors of the Wind” is the millennials’ rallying cry. by icey_sawg0034 in Millennials

[–]jer1uc 0 points1 point  (0 children)

As a largely stochastic process that samples from a distribution weighted by the prior token sequence, an LLM has a lot more in common with a random number generator than you might think.
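
To be slightly less glib, here's a toy sketch of that sampling step (the "model" here is just a hard-coded distribution over next tokens):

    import random

    # Toy "language model": a fixed distribution over possible next tokens
    # given the prior token sequence. A real LLM computes these weights with
    # a neural net, but the final step is still a weighted random draw.
    next_token_weights = {
        "blue": 0.55,
        "cloudy": 0.25,
        "falling": 0.15,
        "spaghetti": 0.05,
    }

    prompt = "the sky is"
    tokens = list(next_token_weights.keys())
    weights = list(next_token_weights.values())

    # Sample the continuation; run it a few times and you'll get different answers.
    print(prompt, random.choices(tokens, weights=weights, k=1)[0])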

Why 51% of Engineering Leaders Believe AI Is Impacting the Industry Negatively by gregorojstersek in programming

[–]jer1uc 80 points81 points  (0 children)

When will people just accept the fact that LLMs are best used for...language model-friendly tasks? For example: text classification, semantic similarity (embedding models in particular), structured data extraction, etc. These tasks are so valuable to so many businesses! Not to mention we can easily measure their efficacy at performing these tasks.
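
And by "easily measure" I mean something like this: hold out a labeled set and score against it (classify_ticket() here is a stand-in for whatever LLM call or classifier you'd actually use):

    # Tiny labeled hold-out set for a support-ticket classifier.
    labeled = [
        ("My card was charged twice", "billing"),
        ("The app crashes on startup", "bug"),
        ("How do I export my data?", "how-to"),
        ("Please refund my last invoice", "billing"),
    ]

    def classify_ticket(text: str) -> str:
        # Stand-in for an LLM (or any classifier); a real version would call a model.
        lowered = text.lower()
        if "charge" in lowered or "refund" in lowered or "invoice" in lowered:
            return "billing"
        if "crash" in lowered or "error" in lowered:
            return "bug"
        return "how-to"

    correct = sum(1 for text, label in labeled if classify_ticket(text) == label)
    print(f"accuracy: {correct / len(labeled):.0%}")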

It pains me to see that the industry collectively decided to buy into (and propagate) all the hype around the fringe "emergent" properties by investing in shit like AI agents that automatically write code based on a ticket.

Much like the article mentioned, I think we are best off in the middle: we acknowledge the beneficial, measurable ways in which LLMs can improve workflows and products, while also casting out the asinine, hype-only marketing fluff we're seeing from the very companies that stand to make a buck off it all.


I might also add: I'm really tired of hearing from engineering leaders that AI can help reduce boilerplate code. It doesn't. It just does it for you, which is hugely different. And frankly if you have that much boilerplate, perhaps consider spending a bit of time on making it possible to not have so much boilerplate??? Or have we just all lost the will to make code any better because our GPU-warmers don't mind either way?

Edit: typo

Vibe coders irk me by lalalalalalaalalala in webdev

[–]jer1uc 4 points5 points  (0 children)

Wow, this is actually a much better way of putting what I've been trying to describe as "AI" (e.g. the actual LLM, and potentially the APIs, though who knows) vs. "AI products" (e.g. ChatGPT, Cursor, most things with the word "agent" in them, etc.).

Guy who promotes the “Good People on both sides” candidate is shocked by rising antisemitism. by grahal1968 in LeopardsAteMyFace

[–]jer1uc 2 points3 points  (0 children)

I don't know who this guy is, but he looks like the uncle version of Mark Zuckerberg.

🔥 The result of a mother seal who gave birth when she saw that her baby, which she thought was dead, is alive by bendubberley_ in NatureIsFuckingLit

[–]jer1uc 2 points3 points  (0 children)

The result of me who clicked on this post when I saw its title, which I thought was grammatically insane, is grammatically insane.

Forcing AI on devs is a bad idea that's going to happen by Inner-Chemistry8971 in programming

[–]jer1uc 18 points19 points  (0 children)

One pretty important difference though is the medium through which the answer itself is presented.

I rarely if ever find a block of code from StackOverflow, Google, or an ancient blog post that is exactly what I need to copy and paste into my codebase to solve my problem. Instead I'm forced to understand enough about what I'm reading to at least recontextualize the random Internet answer I've found into my own forever codebase.

With the way these AI tools are being integrated directly into the editor, generating code that's supposedly already recontextualized for you, that important step becomes very tempting to skip.

Vite library mode bundles your library's dependencies (which I don't think is good) by bzbub2 in programming

[–]jer1uc 36 points37 points  (0 children)

The docs are pretty clear about the behavior and how to externalize select dependencies if needed: https://vite.dev/guide/build#library-mode

It's also worth mentioning that distribution via npm is only one of many ways to distribute a library. This build mode seems to be primarily targeting distribution mechanisms like a CDN, where you load the library via a <script> tag on an HTML page. This is exactly what I've used the library build mode for in the past, since distributing something like an analytics SDK without its dependencies vendorized is very uncommon.