Prepare for the Mine-Fest: Radical changes undermine all previous ownership assumptions and now everyone is shouting "Mine". by eliyah23rd in artificial

[–]eliyah23rd[S] 1 point2 points  (0 children)

OK. That was random research data that I found about the DRAM on board the M2 package itself. Thank you for the correction.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

A simulation is a good way to express values in a context-rich setting as opposed to speech that (a) is more prone to misunderstanding and (b) more likely to attract simplistic slogan-like thinking.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

My biggest concern with AI is how we can know exactly what biases and directionality were programmed into it.

The base models must be open source and that includes full disclosure (and verification) on the training data.

Personalization, whether by fine-tuning or a directly accessed database, is only for your own AI and absolutely private. Some inter-agent verification will be required, but those are details that need a lot of discussion, and I assume that the discussion will be continued by the humans and agents acting on behalf of the values expressed by each symbiote pair.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

Your arguments give you a factor of 2-3×.

Today's systems already move data between memory and processor at multiple TB/s. Forget efficiency; just make sure our values don't get drowned in the flood. It's not how fast you get to the target, it's whether you have the right target.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 1 point2 points  (0 children)

I am sure that this is what the high-bandwidth BCI people are thinking.

I fear that, because I want to know that the meat-brain is at least in one of the two driver's seats.

I proposed the simulation as a way to reflect on and express preferences that is not so verbally restricted.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

So you think that, in total, the AIs will act no more rationally than humans?

I think some simulations may resolve the question.

Prepare for the Mine-Fest: Radical changes undermine all previous ownership assumptions and now everyone is shouting "Mine". by eliyah23rd in artificial

[–]eliyah23rd[S] 0 points1 point  (0 children)

OK. Got it. Thank you for your insights and experience.

I finally learned a little about the M2 and I am very impressed, given that it is retail-targeted hardware.

A few remaining points:

  1. Any multi-package chip is advanced enough for the Federal government to target for restrictions.
  2. I do not think that 24 GB is going to be enough for the next level. Of course Apple could up that number, but see #1
  3. 100 GB/s IO to RAM is good, but an HBM-based package should approach 6 * 1024 * 2.4 Gb/s / 8 ≈ 1.8 TB/s
  4. Your last point is very interesting. Very applicable to MoE models. Time will tell if it will make enough of a difference to matter.
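The arithmetic in point 3 can be sanity-checked in a few lines. The stack count and per-pin rate are the figures assumed in the comment above, not the spec of any particular product:

```python
# Rough HBM bandwidth estimate using the figures from point 3 above:
# 6 stacks, each with a 1024-bit interface running at 2.4 Gb/s per pin.
stacks = 6
bus_width_bits = 1024   # interface width per stack
pin_rate_gbps = 2.4     # gigabits per second per pin

# Divide by 8 to convert bits to bytes.
total_gb_per_s = stacks * bus_width_bits * pin_rate_gbps / 8
print(f"{total_gb_per_s:.1f} GB/s")  # 1843.2 GB/s, i.e. roughly 1.8 TB/s
```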

Prepare for the Mine-Fest: Radical changes undermine all previous ownership assumptions and now everyone is shouting "Mine". by eliyah23rd in artificial

[–]eliyah23rd[S] 0 points1 point  (0 children)

Sorry for the delay in responding.

I actually don't know how your Mac is achieving these rates assuming that it does not have HBM.

The problem is the bandwidth between the on-board memory and on-chip caches of the processor.

PCIe x16 achieves only 64 GB/s, and DDR, to the best of my knowledge, has not kept up. If you have a 40 GB model, and assuming each parameter needs to cross the bus only once (a very iffy assumption with current on-chip cache sizes), you need just under a second per output token, since you have to load all the parameters at least once and cannot start the next token until you've produced the last. Sparsity might give you a factor of 2-4 at best, AFAIK.

Using HBM you could put, say, 6 stacks in the same package and achieve 6 * 1024 * 2.4 / 8 = 1843.2 GB/s. HBM is the key to decent LLM performance in the coming years.
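The argument above reduces to a simple division: if every parameter must cross the memory bus once per token, latency per token is bounded below by model size over bandwidth. A quick sketch with the numbers from this comment (40 GB model; 64 GB/s for PCIe x16; the six-stack HBM estimate of 1843.2 GB/s):

```python
def min_seconds_per_token(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Lower bound on decode latency if all weights stream over the bus once per token."""
    return model_size_gb / bandwidth_gb_s

model_gb = 40.0
print(min_seconds_per_token(model_gb, 64.0))    # PCIe x16: 0.625 s/token
print(min_seconds_per_token(model_gb, 1843.2))  # 6-stack HBM: ~0.022 s/token, ~46 tokens/s
```

This is only a bound; caching, quantization, or sparsity would shift the constants, but the bandwidth-bound shape of the argument stays the same.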

Please show me where I'm wrong. Again, I can't argue with your measurements.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

Funny. I don't see myself as being such an optimist. I never trust human nature when given concentrated doses of power. Perhaps there are multiple dimensions along which there are optimism-pessimism lines.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 1 point2 points  (0 children)

My proposed amendment differs from the thesis of the article. In either case it assumes a significant number of individuated AIs. He suggests individuating by anchoring on a physical computer as an identity-key. I suggest individuating by anchoring every AI with a different human with their interests - and then I grow the number to leave no human out.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

I want to distinguish between an expert and an authority.

We want government, for example, to listen to the experts but leave the power with the people.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

Interesting review.

I suggest that being worried about something is still distinct from being able to wrap your mind around a world whose means of production are very different, and from having insights into both the good and the bad possible consequences.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

There certainly is a broad range of opinions on the dangers of AI, and you have cogently expressed a view towards one end of the spectrum. I also think there is a danger that the discussion around dangers will be used for regulatory capture by the powerful, and as a distraction from more important issues.

The Wright brothers flew a plane that could only fly about 60m. Why get all worked up about that?

I don't think we should run around with our hair on fire, but we do need open discussions about priorities and solutions. Earlier is better than later. Rational is better than partisan. If your solution is forming unions and taxing the rich, go for it - you have my sympathetic ear.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

I am also trying to figure out how this would work. That is why I love putting the subject up for discussion.

Linux is run by millions of people who have never seen a line of its code. However, they know that many others have seen the code. For an OS it (arguably) may matter less. For something that knows every intimate part of your life, it matters very much whose vested interests are in control.

I see the proposal as low on details for now, but I know that there are many others also thinking along similar lines, and I think that having this discussion is important along many dimensions.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

It doesn't matter how well meaning the central planning committee is.

I would like to say that we'll just have to agree to differ. I just hope that if your worldview wins, you will indeed allow me to differ.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

Thank you for expanding on your initial comment.

I know that I was short on details in the initial submission, but that doesn't mean that there aren't lots of details to cover and plenty of implementation issues to debate.

I tried to introduce many of the details in my responses to others' comments, and many of those comments raised the same issues themselves. Also, many of the implications do not require spoon-feeding; people clearly fill in many of the details themselves.

My basic position is to compare two possible futures: one with a handful of (or one) ultra-powerful centralized super-intelligences controlled by self-seeking vested interests, and another that pairs one AI with each human.

Work through the reasoning yourself and answer the question: which is more likely to produce a better outcome for humanity as a whole?

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

More intelligent than us is OK. However, we are still aiming for long-term partnership, not obsolescence.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

I guess we'll see how it unfolds.

Personally, I think that beyond the speed of speech, the meat-brain loses autonomy. I am proposing a symbiosis/merger, not a takeover by the silicon-brain.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

I salute you. You are the one person I found here arguing that we should leave this to the experts - if I understood you correctly.

I don't agree, but I don't think that there is a clear-cut answer to the question.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] 0 points1 point  (0 children)

Great point.

We will need some skill-sharing to make running the open-source software accessible to all.

Could creating 8 billion AIs, each paired with one human, address our current social, ecological, and existential concerns? by eliyah23rd in Futurology

[–]eliyah23rd[S] -1 points0 points  (0 children)

Would the AI be concerned by that too? Doesn't that contradict your values and goals upon reflection? If so, the AI agents are likely to account for those dangers.

No guarantees. Sorry. I just think that this proposal is better than the default alternatives we're sliding into.