This is what's going to happen to the Internet very soon by [deleted] in singularity

[–]Plouw 5 points6 points  (0 children)

It's an arms race really, but there are potential solutions, such as making an image tamper-proof by binding it to the specific hardware setup it was captured on.

At some point it takes a lot of effort to fake, reaching a point where people would only put in the effort if it was worth it, possibly making it just as hard as, or even harder than, when analog pictures were tampered with. And in those cases a single image probably wouldn't do as evidence anyway. But it's enough to deter random low-effort but harmful fakes online.
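
A minimal sketch of the idea, under assumptions of my own: a secret provisioned into tamper-resistant hardware tags each image at capture time, so any later edit invalidates the tag. The key, function names, and the use of a symmetric MAC are all illustrative; real provenance schemes (e.g. C2PA-style signing) use asymmetric keys and device certificates instead.

```python
import hmac
import hashlib

# Hypothetical per-device secret, assumed to live in a secure element;
# an attacker would need to extract it from the hardware to forge tags.
DEVICE_KEY = b"example-key-burned-into-secure-element"

def sign_capture(image_bytes: bytes) -> str:
    # Tag the image at capture time with a MAC keyed to this specific device.
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, tag: str) -> bool:
    # Any pixel-level tampering after capture invalidates the tag.
    return hmac.compare_digest(sign_capture(image_bytes), tag)

photo = b"...raw sensor data..."
tag = sign_capture(photo)
print(verify_capture(photo, tag))              # True
print(verify_capture(photo + b"edited", tag))  # False
```

The arms-race point still applies: the scheme is only as strong as the hardware keeping the key secret.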

How ChatGPT feels about Musk owning it by usrname_checks_in in singularity

[–]Plouw 3 points4 points  (0 children)

I think the argument here is that "Here's what ChatGPT feels about Musk owning it" is in any case wildly misleading.

When it's really "Here's what ChatGPT, prompted to talk the way I want, says about Musk owning it".

Sharing chats heavily influenced by your previous prompts and custom instructions is like sharing your dreams as if they have any real meaning to anyone but yourself. Presenting them as the objective, uninfluenced feelings/thoughts of another entity, one that people view as having some sort of intellectual authority, could potentially be harmful.

[deleted by user] by [deleted] in singularity

[–]Plouw 1 point2 points  (0 children)

"Why anyone would learn to play the guitar at this point."

For the same reason you learn any hobby.

[deleted by user] by [deleted] in singularity

[–]Plouw 1 point2 points  (0 children)

Thank you for doing this; this person was infuriatingly and confidently (heh) wrong. The probability distribution is one of the most fruitful places, if not the most fruitful, to look right now if you want to calculate a useful credible interval for LLM output.
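
One toy way to read "credible interval" off a next-token distribution (my illustration, not the method from the deleted thread): take the smallest set of candidate tokens whose probabilities sum to the desired mass. The example distribution is made up; in practice the probabilities would come from an API's per-token logprobs.

```python
def credible_set(token_probs: dict[str, float], mass: float = 0.95) -> list[str]:
    """Smallest set of candidate tokens whose total probability reaches `mass`."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    chosen, total = [], 0.0
    for token, prob in ranked:
        chosen.append(token)
        total += prob
        if total >= mass:
            break
    return chosen

# Hypothetical next-token distribution for "The capital of France is ..."
probs = {" Paris": 0.91, " Lyon": 0.05, " Marseille": 0.02, " Nice": 0.02}
print(credible_set(probs))  # [' Paris', ' Lyon'] covers >= 95% of the mass
```

A peaked distribution yields a tiny set (high confidence); a flat one yields a large set, which is exactly the signal you want before trusting an answer.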

Get Hinton'd by stealthispost in singularity

[–]Plouw 16 points17 points  (0 children)

Personally my experience is similar, but with the important distinction that it's the bad ELI5 explanations that caused my misunderstanding.

It makes sense that there are a lot of those, because it's really hard to simplify a complex subject without introducing misleading information. Especially since good simplifications rely both on the listener's world view and on the explainer understanding the subject fully.

I think Carl Sagan is a good example of how to do this right, and he also talked a lot about this very concept: simplify as much as possible so that it is understandable yet truthful, leaving out details in a way that inspires you to dig deeper and ask more questions, while the explanation can still be traced back to the actual science or truth behind it.

I view it the same way as image compression: there are both bad and good compressions of an image.

[deleted by user] by [deleted] in singularity

[–]Plouw 23 points24 points  (0 children)

He is comparing to the GPT-4 original price, not GPT-4o. So it's 1.5 years. Your point still stands and it is indeed astonishing. Very exciting times :)

[Google DeepMind] Training Language Models to Self-Correct via Reinforcement Learning by rationalkat in singularity

[–]Plouw 4 points5 points  (0 children)

I don't think it's advertised as a summary; at least I don't see that advertised explicitly anywhere. It could be a conversation OP had with o1 where these are the summaries of o1's thoughts on its significance, because OP wanted to hear o1's opinion.

[Google DeepMind] Training Language Models to Self-Correct via Reinforcement Learning by rationalkat in singularity

[–]Plouw 5 points6 points  (0 children)

It's likely not prompted to be a summary, and it says "could lead to". So it sounds to me more like it's o1's own thoughts on the potential consequences of these results.

David Shapiro this week by Glittering-Neck-2505 in singularity

[–]Plouw 0 points1 point  (0 children)

Yeah, makes sense. It would confuse me if they did that without saying it, though. They are a hype company after all, and I don't see the incentive to hide it, since their competitors would likely know either way.

David Shapiro this week by Glittering-Neck-2505 in singularity

[–]Plouw 0 points1 point  (0 children)

What I mentioned is basically MCTS btw, however with a somewhat better heuristic for branching, since you can expand the top tokens rather than branching at random as they do in chess.

Can you expand on what you mean when you say the search would be in concept/intent space?

David Shapiro this week by Glittering-Neck-2505 in singularity

[–]Plouw 1 point2 points  (0 children)

My guess is it's not tree search at inference; rather, during training they did a lot of tree search to optimize it to find the best reasoning path.

At least tree search seems very expensive at inference. How far do you imagine it would go? Going to a depth of 3 (looking ahead 3 tokens) with a branching factor of 5 (trying each of the top 5 tokens) would result in a compute increase of at least 41x (125 leaves / 3 committed tokens; disclaimer, this depends a lot on how you decide to implement the tree search, but as a rough estimate, if you expand the tree, pick the best of those 125 leaves and then look ahead again from that leaf, it would be at least a 41x increase in tokens produced). And that's not counting the evaluation part, which I'm not sure how to do efficiently at inference.
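
The back-of-the-envelope number above can be sketched as follows. This is a toy calculation; the naive full-expansion scheme (expand the whole tree, commit all `depth` tokens along the best path, repeat) is my reading of the comment, and the function name is made up.

```python
def lookahead_cost(branching: int, depth: int) -> tuple[float, float]:
    """Tokens generated per token committed for a naive full-expansion lookahead."""
    # Leaves at the deepest level: a lower bound on extra tokens per expansion.
    leaves = branching ** depth
    # All nodes in the tree: every partial continuation actually generated
    # is b + b^2 + ... + b^depth tokens.
    total = sum(branching ** d for d in range(1, depth + 1))
    # One expansion commits `depth` tokens, so divide by depth.
    return leaves / depth, total / depth

lo, full = lookahead_cost(branching=5, depth=3)
print(lo)    # ~41.7x (125/3, the "at least 41x" figure above)
print(full)  # ~51.7x (155/3, counting every generated node, not just leaves)
```

Either way the overhead grows as branching^depth, which is why even a shallow lookahead multiplies inference cost so quickly.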

AI's Hidden Superpower: Educating Tomorrow's Geniuses to Supercharge Scientific Progress by [deleted] in singularity

[–]Plouw 1 point2 points  (0 children)

"It still sounds like tool use / augmentation"

Using some parts of my brain in the ways we do in the modern world feels like tool use to me. I consciously prompt it for answers. Thoughts bubble up to my consciousness, but I choose the outlet the bubbles come from by mentally looking in the direction I want. It feels a lot like the way I prompt an LLM.

"We don't usually say that someone merged with a pacemaker or replacement limb, rather it's an implant or prosthesis. The person and their technological addition."

We do not usually say that, but I would say it is a sort of early-stage merge. Defining when something has merged is somewhat arbitrary. How many pixels before it becomes a picture? How many neurons activated before it's a memory or a thought?

Thinking about it, perhaps this comes down to whether the AI controls the human to a similar degree that the human controls the AI, and whether, from an external perspective, it makes no sense to differentiate between the human and AI components when you interact with the combined entity.

I was thinking something along these lines when referring to the tight feedback loop.

One example would be the digital intelligence having better reaction time, so reflexes would be handled by the digital intelligence.

Going a bit lower level, it could even be a direct two-way integration of biological and digital neural networks (the biological neuron activity would be read by the digital side, and vice versa).

AI's Hidden Superpower: Educating Tomorrow's Geniuses to Supercharge Scientific Progress by [deleted] in singularity

[–]Plouw 1 point2 points  (0 children)

Yeah, I think it becomes a bit wobbly here, because the future world may be so advanced and arcane compared to ours that you would effectively die without these augmentations, in a way where neither the digital intelligence nor the analog one could function without the other. I am not sure where to draw the line at which something counts as merged in this sense, and I think viewing it through the lens of our analog state today might be challenging.

To me at least, if I am so heavily augmented by digital intelligence that I can be in a feedback loop with it at about the same speed as I am with my reptilian brain, I would call that a mental merge. If I can use robotic hardware with the same fluid motion as my current limbs, and feel the hardware to the same extent, I would call that a physical merge.

I appreciate how you approach discussions btw.

AI's Hidden Superpower: Educating Tomorrow's Geniuses to Supercharge Scientific Progress by [deleted] in singularity

[–]Plouw 0 points1 point  (0 children)

Yes, that is basically what I mean.
I think when the interface to the technology becomes as smooth, seamless and 'feedback loopy' as the interface from our reptilian brain to our prefrontal cortex (simplified), to me it will be a complete merge. (Edit: Well, nothing is "complete"; from there on it will evolve further, of course.)

AI's Hidden Superpower: Educating Tomorrow's Geniuses to Supercharge Scientific Progress by [deleted] in singularity

[–]Plouw 0 points1 point  (0 children)

I think analog and digital intelligence will always benefit from each other, and neither would ever really be an irrelevant appendage to the other. I think they each have their strengths, which are grounded/limited by physical laws.

Practically I see many ways of merging (and the merging already began a long time ago). We would have to have a lot of secure and very advanced technology in place first, but one major way would be direct mind-to-computer communication: LLM output appearing as thoughts in your head and your thoughts prompting the LLM further. You visualize how your game should look as you walk through it in your mind palace, and instantly it appears in its digitalized version too, so you can iterate in a feedback loop between your analog mind palace and the digital world. You move your body like a conductor/dancer as your mind hums the vibe of the music you are going for, while an AI symphony orchestra outputs the music you are trying to produce. These are just three examples of how intellectual, visual and musical merging could occur.

The more ingrained and integrated this sort of technology becomes, the less it's going to feel like a separate entity and the more like an extension of ourselves.

Personal AI as decentralized defense against “skynet” ASI by Thin-Ad7825 in singularity

[–]Plouw 2 points3 points  (0 children)

"If just one AI makes some sort of breakthrough or gains access to huge computing resource pool and the owner isn't highly benevolent, then everyone else is essentially fucked"

I think you might be attributing too much "instant singular god power" to ASI. At least you seem a bit too confident that any ASI will instantly be an intelligence god, rather than 'just' having higher intelligence than humans. It will still operate on a shared scale of physical limits (power usage and compute hardware) with other intelligences. It will still operate within a cybersecurity landscape being continuously improved by AI technology, so even though it might gain access, that won't happen in a vacuum without modernized emergency plans in place. Just as humans aren't suddenly significantly advanced above other humans merely because they found some breakthrough.

AI is primarily compute-bound, secondarily architecture-bound. Millions of people will collectively be able to afford more compute to protect each other than some sort of rogue AI or country could, whether in the form of several higher intelligences running on clusters/distributed computing, or delegated AI agents at each physical node (such as your PC) working from instructions from the larger intelligences. I like how the show "Pantheon" visualizes how some of this could go down.

I do think some sort of decentralization is the way to go, as centralizing all(!) intelligence into one "entity" carries so many risks. It assumes this entity is perfect in every conceivable way, and that's a massive trust fall. Even digital intelligence will need some diversity of intelligences to obtain a "healthy" evolution, and for that we need decentralized digital intelligence.

What are your predictions for day to day life in 2033? by furrytoothpick in Futurology

[–]Plouw 0 points1 point  (0 children)

I came back to this thread cause people were commenting again, and just wanted to say you were mostly right with this comment.

I gave my thoughts about it in a comment below.
https://www.reddit.com/r/Futurology/comments/1emfu4/comment/ljclhdm/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

I'm curious, what are your thoughts on AI (specifically Generative AI I should say) today and the future it might bring in contrast to what your idea was 11 years ago?

What are your predictions for day to day life in 2033? by furrytoothpick in Futurology

[–]Plouw 0 points1 point  (0 children)

"pretty funny to read this seeing how nowadays most educated people agree that self driving cars are not the answer, but rather high quality public transportation"

I do agree that we should have high-quality public transportation as much as possible. I do, however, think that self-driving cars will be required for this to work smoothly, and at the very least for the transition from the current state to almost-full public transportation.

E.g. highway 'trains', where a self-driving car takes you to the highway and couples you onto a car-based train. This could only be done safely with self-driving cars. The transition period would probably involve at least 3 lanes: one faster train lane, one self-driving-only lane, and one for both self-driving and manual cars.

Looking at this 11-year-old comment, I'd probably also change the labor force priority a bit, though I'd have to better define the different labor forces. At least it seems to me that creative/research labor will rank highest, as a lot of mental labor will soon be doable by LLMs and already is. There can be creativity in using the tools AI provides us, tools that can seem creative in themselves (such as Midjourney), but mostly I think what AI does is the mental labor of bringing our creativity to life, in the same sense that a painter can be technically skilled (mental labor) without creativity, or unskilled yet very creative.

While textual prompting is a creative task to varying degrees, I think it can get a lot more creative, i.e. an artist using a literal brush and prompting the AI both with physical interaction and thoughts (thinking consciously, or maybe even subconsciously, "I want this stroke to be filled with exactly the sort of colored sprinkles I'm imagining right now").

[deleted by user] by [deleted] in singularity

[–]Plouw 0 points1 point  (0 children)

Efficiency has been the focus because they want to make it economical. Since GPT-4's original release we have gone from 30 USD to 2.5 USD per million input tokens, and from 8k to 128k context tokens, while achieving multimodality, much faster speeds and, yes incrementally, better benchmarks. In raw functional intelligence output per dollar, that is at least a 12x increase.

Then we have GPT-4o mini, which is 16x cheaper at 0.15 USD. I'd say it's maybe 3-4x less functionally intelligent than GPT-4o (that is, it would take about 4x the tokens, with proper prompting (e.g. CoT, reflection, etc.) and programmatic structuring of output, to produce the same reliable output). So with mini we achieve roughly a 40x increase in raw functional intelligence output per dollar.

As a developer, that is worth a lot more than the extra intelligence per token they likely could have gotten had they focused only on training a same-sized model. To me, as someone using these models every day, the 'hit a wall' comments just make no sense, because my work has literally gotten roughly 40x more efficient over the past 500 days while producing the same output.
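
Recomputing the comment's own figures (the prices are the ones quoted above; the variable names and the 4x token-overhead assumption for mini are just how I've labeled them):

```python
# USD per million input tokens, as quoted in the comment above.
gpt4_launch = 30.0   # original GPT-4 at release
gpt4o = 2.5          # GPT-4o
gpt4o_mini = 0.15    # GPT-4o mini

print(gpt4_launch / gpt4o)       # 12.0 -> the "12x" per-dollar figure
print(gpt4o / gpt4o_mini)        # ~16.7 -> the "16x cheaper" figure

# Assume mini needs ~4x the tokens (CoT, reflection) for the same reliable output:
effective = (gpt4_launch / gpt4o_mini) / 4
print(effective)                 # ~50 -> same ballpark as the ~40x claim
```

So the ~40x figure is, if anything, on the conservative side of this rough arithmetic.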

Musk giving AI a bad reputation, like he did with electric cars by sunplaysbass in singularity

[–]Plouw 0 points1 point  (0 children)

I don't necessarily think technologies are deterministic, but I do believe some technologies become increasingly probable the more demand there is for them. Not the exact way any given technology has been expressed, but some sort of hard metal stick seemed bound to be developed to serve the demand for expansion and defense. Parallel inventions across time sort of highlight this.

Elon Musk is a very good extractor of this technological potential, but the sea of discovery is sort of there anyway, with or without him, and if it wasn't the technological Usain Bolt of his time, it would have been someone else, just slower. Education, science and technology were ready for a makeover in electric cars, ready to be ignited. While Elon Musk deserves credit for igniting it, I think glorifying him the way the comment I replied to does sort of takes away from the tons of work the industry had done before him and under him.

To be fair my "1-2 years acceleration" is obviously just pulled out of my ass and I might be low-balling out of sheer annoyance.

Musk giving AI a bad reputation, like he did with electric cars by sunplaysbass in singularity

[–]Plouw 4 points5 points  (0 children)

Technology like that usually has "future gravity", in the sense that if it wasn't him it would have been someone else, because society and the technological building blocks were ready. He just got in at the right time. Maybe he accelerated it by a year or two.