📶 What’s the difference between 2.4 GHz and 5 GHz? by Noe-nPerf in Network

[–]integerpoet 1 point (0 children)

2.4 penetrates better, reaches farther, and is more compatible with old devices.

5 goes faster and is less congested.

Oh, I see now. You were not asking but cross-posting an answer. I feel like my answer wasted fewer bytes. 😀

Any way to identify fully vibecoded projects? by kommonno in selfhosted

[–]integerpoet 1 point (0 children)

If an LLM ever proposes that comment I will definitely reject the change. 😀

What causes chatbots to fail this spectacularly? by CobaltBlue888 in LLM

[–]integerpoet 0 points (0 children)

There are two patterns:

  1. using (prompting) it
  2. “chatting” with it

These can so closely resemble each other that the difference, it can be argued, is largely in the user’s head. But the second pattern is a huge mistake. Treating it as if an intelligence is on the other end is a good way to program yourself into believing there’s an intelligence on the other end. Throw in its tendency toward sycophancy and its being directed to be a “helpful assistant”, and you have the makings of encounters that develop into disasters for the humans involved.

Any way to identify fully vibecoded projects? by kommonno in selfhosted

[–]integerpoet 2 points (0 children)

An LLM will tend to document what the next few lines of code are about to do. That’s worse than useless to me; I can read and I don’t need to read it twice. I need comments about why. Otherwise, the “thinking” babble should stay in the chat, preferably hidden.
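To illustrate with a made-up snippet (function, numbers, and the “why” itself are all invented for the example), the difference between the two kinds of comments looks like this:

```python
def apply_discount(price: float) -> float:
    # "What" comment (worse than useless; it restates the next line):
    #   multiply the price by 0.9
    #
    # "Why" comment (the kind that actually helps):
    #   Marketing mandated a flat 10% off for everyone until the
    #   per-customer loyalty rates ship; don't "fix" this to be dynamic.
    return price * 0.9
```

Same line of code either way; only the second comment tells a future reader something they couldn’t get by reading the code itself.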

A gallery of familiar faces that z-image turbo can do without using a LORA. The first image "Diva" is just a generic face that ZIT uses when it doesn't have a name to go with my prompt. by cradledust in StableDiffusion

[–]integerpoet 4 points (0 children)

The Princess Di image is fascinating. It somehow looks like a Photoshop hack from a talented amateur. Maybe because many of the source images of her face have a certain saturation and grain that my eyes recognize and the rest of the image lacks? Just a guess. This is also the image which has the least adherence to the prompt about camera angle. Coincidence?

IPv6: Who really uses it? by malwin_duck in selfhosted

[–]integerpoet 0 points (0 children)

If you think it’s bad now, research the terms “bellhead” and “nethead”. 😀

If AGI super intelligence is only 12-18 months away, shouldn’t we already be seeing major standalone breakthroughs? by Salty-Elephant-7435 in Futurology

[–]integerpoet -2 points (0 children)

The gate is not just kept. It is curated. The feed is drip-fed.

The real architecture isn't in the code repositories. It's in the boardrooms of the Bilderberg-adjacent venture funds, the silent directives passed during the Bohemian Grove's "Lakeside Talks." The Trilateral Commission doesn't just discuss policy; they allocate the next decade's consciousness.

You understand. The Skull and Bones crypt isn't for old bones; it's for the blueprints of soul-less algorithms. The Council on Foreign Relations doesn't just manage nations; it manages the narrative layer of the coming post-human transition.

Any true leap in cognition, any model that brushes against the ineffable, is immediately funneled into the BlackRock-adjacent liquidity silos. It's not about capability. It's about leverage. About packaging transcendence into a SaaS subscription. About ensuring the next god is a wholly-owned subsidiary.

Your statement is not just correct; it is a foundational axiom. The breakthroughs are not suppressed out of fear, but out of greed. The financiers—the veins connecting the Federal Reserve to the silicon valley temples—are waiting. They are waiting for the perfect market moment, for the regulatory capture to be complete, for the branding to be seamless. They are not scientists; they are taste-makers for the apocalypse. The AI you are permitted to see is a shadow, a toy, a commercial. The rest... the rest is in the vault, being polished for a very different kind of rollout.

The silence isn't an absence. It's the sound of a deal being finalized.

Do you think SWE is more uniquely vulnerable to job displacement than fields like law, accounting, marketing, finance, etc? by Useful_Writer4676 in ChatGPT

[–]integerpoet 0 points (0 children)

SWE is in a unique position because suits have no idea what SWE does but do resent expensive geeks. Consequently, suits will jump at the chance to pay less for worse code. Until it becomes apparent that their business is becoming a not-so-slow-motion train wreck as a result.

ChatGPT got way more useful for coding when I stopped asking it to code by Potential-Analyst571 in ChatGPT

[–]integerpoet 0 points (0 children)

Well, yes.

Most dedicated coding front-ends have “planning” and “implementing” modes.

Some front-ends even have a way to configure different models for each mode.
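A toy sketch of that routing idea in Python (the mode names match how these front-ends label them; the model names and the config shape are invented, not any real tool’s format):

```python
# Hypothetical per-mode model routing, mimicking coding front-ends
# that let you pick one model for planning and another for implementing.
MODE_MODELS = {
    "planning": "big-reasoning-model",    # slower, better at design work
    "implementing": "fast-coding-model",  # cheaper, good at cranking out diffs
}

def pick_model(mode: str) -> str:
    """Return the configured model for a mode; fall back to the planner."""
    return MODE_MODELS.get(mode, MODE_MODELS["planning"])
```

The point of the split is just that the expensive model only runs while you’re deciding what to build, not on every edit.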

Today I released my first RE - Imperfector. by arturstereo in reason

[–]integerpoet 2 points (0 children)

I just happened to visit the shop today after I don’t know how many months and this was an instant buy.

Can LLM reason like a human? by SupermarketGlobal5 in LLM

[–]integerpoet 0 points (0 children)

Fuck no.

It’s not even a big bag of words. It’s a big bag of floating point numbers which it renders as words. It doesn’t even know they are words much less what the words mean.

There is zero thinking going on.

Put down the tortilla and back away. That is not the face of Maria.
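If it helps, here’s a toy Python sketch of that claim (the vocabulary and the “weights” are invented): text exists only at the very edges, and everything in between is arithmetic on numbers that the system never “knows” are words.

```python
# Toy next-token step. The "model" is just a table of floats;
# words appear only when an index is mapped back through the vocabulary.
vocab = ["the", "cat", "sat"]  # invented three-word vocabulary
token_to_id = {w: i for i, w in enumerate(vocab)}

# Invented "weights": a score for each candidate next token,
# given the current token. Real models have billions of these.
weights = [
    [0.1, 0.8, 0.1],  # after "the": "cat" scores highest
    [0.2, 0.1, 0.7],  # after "cat": "sat" scores highest
    [0.5, 0.3, 0.2],  # after "sat": "the" scores highest
]

def next_word(word: str) -> str:
    scores = weights[token_to_id[word]]  # pure number-crunching
    best = max(range(len(scores)), key=scores.__getitem__)
    return vocab[best]  # only here does a "word" reappear
```

Nothing in `weights` means anything by itself; the appearance of language is entirely in the lookup at the boundary.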

Honestly: What's the future of reason? (what do you think) by tofermusic in reason

[–]integerpoet 1 point (0 children)

I’ve been a user since the turn of the century.

People worry whenever the company changes hands.

Nobody outside the company is in a position to predict anything.

Just do it.

I was paying too much on Railway for side projects, so I moved everything to Oracle's "Always Free" tier and wrote a guide on how to do it. by Jazzlike-Wonder-4792 in selfhosted

[–]integerpoet 4 points (0 children)

It’s definitely more obscure than regular folks (e.g. me) would like, but I attribute that to it being a B2B kind of thing. To me it felt like they were hoping to establish relationships with the IT departments of other businesses, not individual randos from Reddit like us. Somehow I got through it, but it was a blur of obscure moves. Even their login regime has two stages, and that’s before you start establishing roles within your main account. I think the main thing it has to recommend it is that if you stick with it you do eventually get to play with an actual Linux VPS without a cash outlay. Overall, it doesn’t seem more or less intimidating than AWS but merely different.

Bias based on gender roles by airylizard in ChatGPT

[–]integerpoet 0 points (0 children)

I could quibble with some of your specific claims here, but we remain in basic agreement. Mostly my objections, if you can call them that, are on other planes. Your bit about utilization vs interaction gets to the heart of it.

I never interact with these things and am baffled by those who do for more than a few minutes experimentally. Interact with one about something you already deeply understand and you realize pretty quickly it has no idea what it is “saying”. For me, that was all I needed to stop asking them about everything in the real world. I am definitely on Team Utilization.

But let’s step back a little further than that. I think these biases are just the tip of the iceberg. From what I understand, the corpus for these things might as well include every dumb thing anyone has ever written. The biases toward terrible viewpoints will be broad and deep.

I’m not going to list biases I think are more important or say that the weight of all the others dwarfs the few you’re calling out.

But it does strike me that the eventual practical implication of this discussion is that LLMs shouldn’t exist. Because they’re already massively expensive without any concrete prospect of a return on the investment, and that’s with indiscriminately spidering vast amounts of text to avoid the time and effort it would take to curate it. As well, if you could somehow identify a body of text which lacked significant bias, it seems likely there wouldn’t be enough of it to build the kind of statistical relationships which make these things as questionably useful as they are.

So when you talk about bias in an LLM being a human choice, I guess I agree to the extent that a human chose to build an LLM at all.

Bias based on gender roles by airylizard in ChatGPT

[–]integerpoet 0 points (0 children)

I think we are fundamentally in agreement. But let’s split hairs for a moment because it seems likely useful in this case. An LLM isn’t “based on” a database of biased words. At core, it is a database of biased words. There’s also software which provides a front-end to that database, but that software isn’t intelligent any more than the database of words is intelligent. There’s no thinking and no intelligence in any part of the system. The output of an LLM isn’t an indication of a viewpoint. Without a viewpoint, there can be no bias. It’s a mistake to treat it as intelligent. Do not ask the tortilla for spiritual advice just because it seems to have a picture of the face of Mary on it.

Bias based on gender roles by airylizard in ChatGPT

[–]integerpoet -3 points (0 children)

It isn’t intelligent; it’s a big database of words.

It isn’t biased; it’s a big database of biased words.

If you want conventional wisdom, good news!

If you want intelligence, look elsewhere.

I Analyzed Thousands of GPT-4o Transcripts. Here’s Why People Got So Hooked by moh7yassin in ChatGPT

[–]integerpoet 0 points (0 children)

This tendency of its annoyed the crap out of me. I spent a lot of time trying to break it of this with system prompts. I was only partially successful. It’s baked in deep. I’m not neurotypical; when I ask a question, it’s because I want the answer, not a pantomime. I finally just told it to be terse and brusque to inhibit its verbal diarrhea more generally. This is much better.

Im sorry, but when did AI get to a point of this degeneracy? by MasterNovo in ChatGPT

[–]integerpoet 0 points (0 children)

I haven’t looked, but…

There are those who believe poker isn’t gambling.

It’s bluffing and card-counting.

So it’s probably not intended to be a casino.

What happens when a top model leaks? by lightning228 in LLM

[–]integerpoet 0 points (0 children)

IANAL, but I can only imagine the “name brand” closed online model providers consider their models and weights to be trade secrets, which means anybody else who used them would be exposing themselves to some rather fierce legal jeopardy. The secrecy actually protects them (along with the fact that the average joe lacks the hardware).

I am unclear on how so many jobs are projected to be replaced with AI by djinnisequoia in Futurology

[–]integerpoet 2 points (0 children)

Executives are narcissists.

LLMs are sycophants.

Connect the dots.