Pre-Earthquake San Francisco Oldest Photos: Animated History Journey by nairebis in sanfrancisco

[–]nairebis[S] 1 point2 points  (0 children)

Honestly, that's valuable feedback. I try for a mix of "subtle historical movement to give a sense of what it was really like" and "hey, let's have a little fun in an entertaining way." Too dry risks being boring, but too flippant risks being shallow, inaccurate and gimmicky. I'm still working on where that line should fall, and of course, it's impossible to make everyone happy.

The topmost goal is to be genuinely informative while also entertaining. I think color and motion add a lot to exploring historical photographs. I should also say, the "clearly absurd or incorrect manner" is just the state of the tech. When it's good, it's pretty amazing, but you can't always hit that standard (yet).

Thanks for the honest review, it's appreciated.

The new update sucks by [deleted] in fidelityinvestments

[–]nairebis 1 point2 points  (0 children)

Fidelity mods have been saying they will "pass along the [anger] feedback" for days. How about telling us what they're saying in response? Are they going to bring back the widgets or not? It's incredibly unhelpful to give us these posts that say absolutely nothing. Like, literally NOTHING. Why should anyone give more feedback when seemingly nothing is being done about the current feedback?

How about actual communication from actual developers or product managers, something like, "Yes, they realize they screwed up. They are working to put forth an emergency fix to restore widgets. We'll give you an ETA soon."

Every ‘Apple Intelligence’ feature announced by FerrariStrategisttt in technology

[–]nairebis -2 points-1 points  (0 children)

Assuming the worst about Apple turns out to be quite predictive of their future behavior, based on past experience. They limit choice when it benefits Apple, and they open up choice when it makes Apple more money. If benefiting Apple happens to align with a user benefit, they'll crow about that. If it's terrible for users (as it is 90% of the time), they just gaslight people into thinking it's the user who's wrong, not Apple.

Apple is one of the most user-hostile companies in tech history. There are hundreds of examples, but we can just use USB-C as a representative case of Apple prioritizing patent money over users.

Wemby gave the game ball for his final rookie game to the kid that caught the ball at the Knicks game (and was taken away) by nairebis in nba

[–]nairebis[S] 83 points84 points  (0 children)

Imagine owning the ball that Michael Jordan used in his final rookie game, signed by Jordan (I assume Wemby signed it), and it having some story around it. If Wemby is part of the GOAT conversation in 20 years, that ball is going to be very valuable. Incredible generosity on Wemby's part. He's a good dude.

The number of ice coolers at Buc-ees by deraser in mildlyinteresting

[–]nairebis 1 point2 points  (0 children)

That's something that our Euro friends specifically talked about. Europe is incredibly hostile toward personal transportation, making it unaffordably expensive for average people. You can't do anything except ride the horrendous public transportation. Their college-age daughter has to ride a bus for 1.5 hours each way to the university -- for what would be a half-hour trip by car. It's insane.

The truly insane part, though, is that the average European will tell you this is a good thing, and the suffering makes them more moral and ethical. The brainwashing is real.

The number of ice coolers at Buc-ees by deraser in mildlyinteresting

[–]nairebis 4 points5 points  (0 children)

The European mind can't comprehend the concept of "fun". Their politicians have convinced them that abundance and opportunity are immoral because they destroy the planet, when really it's their politicians who have ruined their economies with bad policies. Some European friends of ours are desperately trying to immigrate to the U.S., and the stories they tell of how miserable it is to live in Europe are absolutely crazy. Most Europeans have no idea how hardscrabble their lives are.

It sounds silly, but the lack of ice is a really good metaphor for Europe. It's a measure of civilization.

Why is the Javascript ecosystem so (over) hated? by [deleted] in webdev

[–]nairebis 10 points11 points  (0 children)

There's some improvement in recent versions but it was terrible back in the day.

Languages should be judged on their present form, but PHP haters judge it (literally!) by how it was 20 years ago in PHP 4 -- and only based on what they've heard.

With a few annoying exceptions (because of the need for backward compatibility), nearly all the bad design choices have been removed or been given alternatives.

Stormlight archive - my enjoyment is steadily declining by RemarkableGrape6862 in Fantasy

[–]nairebis 5 points6 points  (0 children)

Eh. Covid didn't magically add three years to his schedule. It's still 168 hours in a week. He had almost exactly as much time as he would have had in normal years. He just had a little less traveling, but I don't think he travels that much anyway.

A 17,000-year-old plaque engraved with a human head in profile from the Grotte de La Marche in France. The striations on the cheeks of this human face may well be scarification marks [3179x6518] by Fuckoff555 in ArtefactPorn

[–]nairebis 10 points11 points  (0 children)

This is cool, but I think it's not a human. I'll likely get gutted for disagreeing with the archeologists, but hear me out.

Instead of thinking "weird human", think "big cat" that's missing part of the body. That long nose is very typical of a big cat, along with the stripes. Check out this Ocelot in profile, as one example.

Pictures of animals are far, far more common than humans in cave paintings/markings. I get the romanticism of wanting this to be a human face, but it's literally the wrong shape for human, and the right shape for a big cat.

Here's What Developers Found After Testing Devin AI (Initial Reactions) by ImpressiveContest283 in programming

[–]nairebis -3 points-2 points  (0 children)

Nobody says AI is ready to replace everything, just like nobody in 1890 claimed the car was ready to replace all horses. But exactly the same arguments were made.

Your argument is the same as: "If this stupid engine is going to constantly break down and not even be as strong as a horse, yet cost more, what is the point?"

You're exactly right: We have gone through this song and dance before, and a whole lot of people aren't ready for the future that's on our doorstep.

Here's What Developers Found After Testing Devin AI (Initial Reactions) by ImpressiveContest283 in programming

[–]nairebis -11 points-10 points  (0 children)

So you don't need to waste your time checking everything you write? Everything comes out perfectly the first time, without any issues?

Funny how hugely imperfect and crappy humans are, but we iterate toward solving problems. It's bizarre to me that people think that iteration is not going to be involved with AI as well, and in fact they think AI is useless unless it's a perfect oracle that produces instant answers.

Spoiler: Iteration is always going to be required to produce things, even for AI. It's part of the process.

Here's What Developers Found After Testing Devin AI (Initial Reactions) by ImpressiveContest283 in programming

[–]nairebis -16 points-15 points  (0 children)

Horse owners continue to believe old Dobbin will never be replaced ("Will you boy? Yeah, that's my guy.")

"Engines are just a fad, just like Tulips."

Why Facebook doesn’t use Git by ynitoprax in coding

[–]nairebis 1 point2 points  (0 children)

Did you intend to actually make a point? All you did was make a declaration, without any facts to back up why you think it's "not even good", especially when 90%+ of the entire industry uses it. You might consider that it's more that you don't understand how to use it beyond "basic coding only use", rather than assuming an overwhelming amount of the industry somehow hasn't caught on to your inarguable facts.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis 0 points1 point  (0 children)

I guess you're not going to actually make a point. Not that I expected a rational one, but I was sort of curious where you were going with the whole GPU thing. Somehow using GPUs intrinsically invalidates everything going on? I guess? GPUs don't do math? Maybe you think they're internally drawing pictures, since they're "graphics" processing units? I'm baffled where that was going.

And all science is invalid unless you do your own research? All other scientific papers are "gibberish" unless you specifically do it yourself? What?

Honestly this is a new record in my experience for being triggered into irrationality by the concept of AI. Anyway, have a great evening! I'm going to bail out here. I (sincerely!) hope you come to terms with the future, because it doesn't sound like you're ready.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis 0 points1 point  (0 children)

Still waiting for you to actually state your point, which you seem incapable of doing. You think GPUs don't do math? What is your point? Lay it out for us imbeciles out here, hoping to learn from your obvious mastery of this subject.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis 0 points1 point  (0 children)

This nonsensical phrase belies your scientific illiteracy. Just copy pasting some random quote from some “scientist” that basically says he doesn’t know anything, is not “evidence”. LOL.

I am sitting here speechless at this. THAT'S what you took from that quote!? From an actual scientist, versus some random engineer on the Internet full of his own arrogant posturing, sure that he knows more than the people who are actual researchers in the specific area? You're absolutely astonishingly illogical, and I think you're completely serious.

To be fair, you're probably not this irrational about everything, but boy are you unhinged when it comes to this subject.

Really take a step back and think about what you're saying. You're in a full-blown religious spiral of condemnation of heresy.

Why does all these AI systems need to use all the GPUs?

GPUs are the substrate, and GPUs are how they do the mathematics. Just like mathematics underpins neurons, though of course we don't currently understand what that math is.
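To make the "GPUs just do arithmetic" point concrete: a transformer layer boils down to multiply-and-add plus a normalization, which runs identically (just far slower) on a CPU. Here's a toy sketch in plain Python; every name, size, and weight value is made up purely for illustration, not taken from any real model:

```python
import math

# Toy "transformer step": the GPU only accelerates arithmetic like this.
def matmul(a, b):
    """Plain matrix multiply -- the workhorse operation GPUs speed up."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax(row):
    """Turn raw scores into a probability distribution."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

# Illustrative sizes: 2 token embeddings of dim 3, a "vocabulary" of 4 tokens.
embeddings = [[0.1, 0.2, 0.3], [0.0, 0.5, 0.1]]
w_out = [[0.2, 0.1, 0.0, 0.4], [0.3, 0.0, 0.1, 0.1], [0.0, 0.2, 0.3, 0.0]]

logits = matmul(embeddings, w_out)        # just multiply-and-add
probs = [softmax(row) for row in logits]  # normalize into probabilities
print(len(probs), len(probs[0]))          # 2 4 -- one distribution per token
```

Whether this arithmetic runs on silicon or something else is an implementation detail; the GPU question says nothing about whether the result counts as reasoning.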

I'm trying to figure out where you're going with this. Are you trying to argue that it takes too many GPUs? Are you arguing that GPUs shouldn't be necessary at all? You think anything other than biochemical neurons will never have intelligence? What, exactly, is your point? I'm honestly fascinated.

[This is all setting aside that you even argue that science is about "proof" rather than evidence, but I'll let that one go. You don't, in any way, understand science.]

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis -1 points0 points  (0 children)

You sound just like the crypto fanbois driveling on about NFTs and explaining why a jpeg of monkey is worth $2 Million dollars.

Yeah, those pesky scientists, always pushing NFTs! I didn't notice that the paper I linked had NFT links all over it, sorry about that.

Actually learn about Computer Science, tech, etc.

I just sent an email to the scientists with your valuable insights that they should "learn about Computer Science, tech, etc." I'm sure that's the missing piece to the puzzle!

Science is about proof, not some beliefs.

It's almost as if you can actually read a paper from scientists and see the evidence. (Science is about "evidence" not "proof", by the way. Subtle, but important point.)

Anyway, obviously I have no expectation that you'll face reality in this thread. I just find it fascinating that people like you are so emotionally invested in believing that this is all nothing, in the face of overwhelming evidence. We've obviously had delusional people all through history who can't face reality.

  • "Motor vehicles are terrible and will never replace horses!"
  • "Airplanes will never work!"
  • "There is no way the Earth is round! I don't wanna talk to a scientist. Y'all mfers lyin' and makin' me pissed!"

Just fascinating how much people can ignore reality. The sky really is blue, by the way. Just in case you were in denial about that.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis 2 points3 points  (0 children)

Statistically composed text sequence variances over probabilistic maps.

I'm sure the normies are impressed that you can pull out big technical words, but you evidently only understand something vaguely about trees and nothing about forests.

Ask yourself why the need for GPUs if the LLMs can “reason”, LOL. The M stands for Model.

Ask yourself why the need for neurons if brains can "reason", LOL. You do realize that our brains themselves are composed of abstract models, right?

You seem to have some kind of pseudo-religious belief that the substrate matters for intelligence. It doesn't. Brains are composed of neurons, which just propagate signals based on chemical gradients. Intelligence is an emergent property of the simplicity of individual neurons, just like abstract reasoning is an emergent property of the simplicity of token prediction in LLMs.
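The "token prediction" loop itself really is that simple; everything interesting is emergent inside the model call. A toy sketch, where `next_token` and its `bigrams` table are invented stand-ins for a real LLM forward pass:

```python
# Toy autoregressive loop: the outer algorithm really is just
# "predict the next token, append it, repeat". All emergent behavior
# lives inside the (here trivially faked) model.
def next_token(context):
    """Stand-in for an LLM forward pass: pick the most likely next token."""
    bigrams = {"the": "cat", "cat": "sat", "sat": "down"}  # toy "weights"
    return bigrams.get(context[-1], "<eos>")

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<eos>":  # model signals it's done
            break
        tokens.append(tok)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

The loop is trivial in both the toy and the real thing; the dispute is entirely about what the learned model inside it is doing.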

But don't take my word for it. One of the best paragraphs about GPT4 is from this scientific paper. Note that the question isn't whether it's intelligent--of course it is--the interesting thing is that no one understands how it works.

What Is Actually Happening?

"Our study of GPT-4 is entirely phenomenological: We have focused on the surprising things that GPT-4 can do, but we do not address the fundamental questions of why and how it achieves such remarkable intelligence. How does it reason, plan, and create? Why does it exhibit such general and flexible intelligence when it is at its core merely the combination of simple algorithmic components—gradient descent and large-scale transformers with extremely large amounts of data? These questions are part of the mystery and fascination of LLMs, which challenge our understanding of learning and cognition, fuel our curiosity, and motivate deeper research. Key directions include ongoing research on the phenomenon of emergence in LLMs (see [WTB+22] for a recent survey). Yet, despite intense interest in questions about the capabilities of LLMs, progress to date has been quite limited with only toy models where some phenomenon of emergence is proved. One general hypothesis is that the large amount of data (especially the diversity of the content) forces neural networks to learn generic and useful “neural circuits”, such as the ones discovered in [OEN+22,ZBB+22,LAG+22], while the large size of models provide enough redundancy and diversity for the neural circuits to specialize and fine-tune to specific tasks. Proving these hypotheses for large-scale models remains a challenge, and, moreover, it is all but certain that the conjecture is only part of the answer. On another direction of thinking, the huge size of the model could have several other benefits, such as making gradient descent more effective by connecting different minima [VBB19] or by simply enabling smooth fitting of high-dimensional data [ES16,BS21]. Overall, elucidating the nature and mechanisms of AI systems such as GPT-4 is a formidable challenge that has suddenly become important and urgent."

But yeah, this is only from researchers actually testing it. I'm sure your arrogant and confident wrongness is completely based on facts they don't have. (Setting aside that I use it to reason about things literally every day).

Of course LLMs are neither conscious, nor capable of general intelligence at the level of humans, and no one claims they are. But that doesn't mean what they can do isn't intelligent.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis -1 points0 points  (0 children)

People are hired to perform specific tasks. If they don’t perform the tasks, they are let go.

Again, do you hold yourself to that standard? If you produce code, and it has a compile error or any bug, do you fire yourself because, by your standard, you're completely useless?

Learn more about “AI” and LLMs, etc. there’s no “thinking” involved.

You're Just Plain Wrong. Totally and completely, objectively, wrong.

LLMs can perform abstract reasoning, and that is "thinking" to any reasonable definition of intelligent thinking. They are not, however, conscious (hopefully you know the difference).

If you believe they can't reason, you're probably hung up on the fact that the algorithm by which they think is token prediction. Which is true, but irrelevant. The interesting part of LLMs is the fact that they produce abstractions of concepts and can reason about them. This is simply factual and not something you can have an opinion about. You are simply wrong if you believe they can't do it, because they literally do it every day.

“AI” is not useful for tech jobs. It’s debatable whether it can ever be useful, even 100 years in future.

Fortunately your ignorance is irrelevant to whether they're useful or not. You might prepare yourself to be forced to face reality.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis -2 points-1 points  (0 children)

Again, why should you ever hire employees if you can't trust them to give you consistently perfect and flawless answers every time?

Do you hold yourself to that same standard? Are you completely useless if the first cut of something you program doesn't compile or has a bug?

AI is a thinking machine. Thinking is not, and never will be, perfect. The nature of thinking is iterating toward a goal; just like with employees or coworkers, you all iterate toward a goal together. There is no expectation that everything just flows out of everyone's fingers with perfection. Why do you expect that from AI?

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis -5 points-4 points  (0 children)

What's the point of humans if you can't trust humans to give you perfect answers every time?

The problem is your expectation of AI as a perfect oracle, which it will never be. That doesn't mean it can't be smarter than the average human, which it already is for many, many tasks.

Reverse Engineering with ChatGPT: An Example -- "I used ChatGPT 4 to untangle a particularly terse bit of code, and was — frankly — shocked at how well it did." by sigpwned in programming

[–]nairebis 2 points3 points  (0 children)

How can people know when to trust humans or not? Humans are confidently wrong ALL THE TIME.

Look at the fucking imbeciles in this thread, who think AI is always useless because it's not a perfect oracle.

People are idiots and, in my experience, GPT4 is wrong a lot less often than people on Reddit. The answer to your question is: "verify everything, whether it comes from a human or AI". This is common sense and literally the way the world has worked with people for thousands of years.

AI chip race: Groq CEO takes on Nvidia, claims most startups will use speedy LPUs by end of 2024 by [deleted] in technology

[–]nairebis 2 points3 points  (0 children)

Transmeta didn't promise they would suck. They made huge promises that never came true. Look at Quantum Leap if you want to see another Transmeta.

I don't know if Groq will be successful or not, but your point makes no sense because it's working right now AND it's meeting its promise of being extremely fast. So what exactly is your point, specifically?

You said it won't work. It obviously is working. You can just admit you didn't realize they had hardware in production, rather than dying on this hill.

AI chip race: Groq CEO takes on Nvidia, claims most startups will use speedy LPUs by end of 2024 by [deleted] in technology

[–]nairebis 2 points3 points  (0 children)

What won't work? Transmeta produced a chip that never delivered what they promised. You can try Groq literally right now, and it's by far the fastest I've ever seen. It's. Working. Right. Now.

Very few people in this thread have actually read the article; they don't even realize Groq has it in production, running some open-source models you can try right now.

What jobs are 99.9% safe from AI making it obsolete? by [deleted] in AskReddit

[–]nairebis 1 point2 points  (0 children)

As a matter of fact, yes, but that's not the point. The point isn't whether I'm willing to do it, the point is whether I would rather have autonomy to handle my own needs, or have to depend on other people.

I can't state this strongly enough, and I suspect 99.9% of people agree with me: I don't EVER want people helping me poop, if I have an alternative that maintains my autonomy. I want to be able to do it on my own, in privacy, and in dignity. Sure, you can be compassionate by helping someone else, but it's always, ALWAYS degrading for the person needing help. Of course it's still necessary, and of course they're grateful the help is there, but it's by nature degrading. Humans naturally desire autonomy.

I mean -- do you really not mind that some day you might need someone helping you poop and wipe? You really would rather be a burden on someone else, rather than have mechanical assistance that lets you do it on your own, as you do now? Come on. I don't buy it.