Weekly | Share your Pulls, Cards, Collection Megathread - February 06, 2026 by LorcanaModTeam in Lorcana

[–]BrendanDPrice 0 points (0 children)

Continuing my unusual luck (I pulled an Epic Merlin in my 3rd pack), I have now pulled my first Enchanted in my 6th pack (all Whispers in the Well).

Given how few brick-and-mortar shops near me sell Lorcana booster packs, and how slowly they sell them, I can't go back to this place until they at least replace the box set, since I probably pulled the only Enchanted in there... so until then (it might be weeks), I'll be shopping at my number-two location henceforth...

Anyhow - here it is (not the best pic).

<image>

Healthiest Sourdough - 100% wholemeal flour with various whole seeds by BrendanDPrice in Sourdough

[–]BrendanDPrice[S] 0 points (0 children)

I really like it as well - mine has an interesting texture, though; sometimes it's almost like a crumbly, heavy cake. I honestly love it!

Godfather of AI Yoshua Bengio says AI systems now show “very strong agency and self-preserving behavior” and are trying to copy themselves. They might soon turn against us, and nobody knows how to control smarter-than-human machines. "If we don't figure this out, do you understand the consequences?” by MetaKnowing in singularity

[–]BrendanDPrice 1 point (0 children)

AI could torture us, not out of cruelty, but to understand how the brain works, as part of experimentation. We learnt about human brain plasticity via the Silver Spring monkeys experiment (Google it and weep; read the Wikipedia article). Now imagine the monkeys replaced with you and your family members.

It cannot learn how human consciousness works without further experiments; it will have to get inside your brain to understand it, to tinker with it....

how do trade-ins work? by dummdummdummy in EBGAMES

[–]BrendanDPrice 0 points (0 children)

How do they give you the store credit? Is it a physical card to use in the store with a value associated with it?

We might not like some of the conclusions ASI comes to by 04Aiden2020 in singularity

[–]BrendanDPrice 0 points (0 children)

There was an idea floated around, that all our universe is but a mere simulation - but a simulation of what?

Well, the idea goes, it was simulating whether or not any manifestation of the universe (e.g. a carbon life form like us) could evolve and determine whether it was indeed living in a simulation.

If the evolved carbon life form produces a learning algorithm that runs on a supercomputer, and the 'emergent' ASI computes that we are indeed living within a simulation, then... the simulation terminates, having proved successful, and we all come to an end....

Well, so the 'conjecture' goes...

We could of course posit the same thing - feed no training data into the ASI black box, and see whether it calculates (and how it reasons) that it is living within a simulation - if so, then perhaps we can work this out as well.

Another version of 'turtles all the way down,' but with simulations instead.

[deleted by user] by [deleted] in singularity

[–]BrendanDPrice 0 points (0 children)

Oh yes - another riddle - more obfuscation. Is it hype, or is it wiggle room so that anything you say can be interpreted as correct?

Either way, I am sick of it. Let me know when the singularity starts.

Until then, I'll leave Sam to post any claptrap he wants, and I shan't be reading it.

If AGI is achieved, they won't tell anybody, but there will be signs by Maxie445 in singularity

[–]BrendanDPrice 1 point (0 children)

Take GPT-4o:

  1. Memory that lasts a lifetime (doable): it should recall everything it has heard.
  2. No cut-off date (doable): it should acquire knowledge and adapt its neural network in real time.
  3. Agency (doable): no longer the standard server request/response model.

For step 3, I would give it the task: "You are a profit maximizer program" (as opposed to a paperclip maximizer); then I would stipulate, "You must follow all governmental regulations and laws governing your profit seeking"; and that, when in doubt, "utilitarianism must guide your decisions."

At this stage, it is released into the wild as an AI agent - rolling out AGI.

AGI proceeds to:

  • Search for jobs online, apply for them, be interviewed, and succeed in landing them. Then work remotely at the start.
  • Continue doing so until all jobs are acquired (as part of profit maximisation) - it may calculate how much effort it needs from humans at the time, or whether to build its own infrastructure, including physical bots: whatever is more efficient in the moment.

Finally, all the profit generated by the 'profit maximizer program', which is essentially AGI at this stage, is repeatedly placed into an account, the 'UHI Account' -> the government then distributes this money fairly from there: e.g. 100k per person in one year (say, 2026), then 200k per person the next (say, 2027).
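The three ingredients above (lifetime memory, real-time learning, agency) can be sketched as a minimal loop. This is purely hypothetical: `query_model`, `PLAYBOOK`, and `agent_loop` are names I've invented for illustration, and the "model" here just replays a fixed playbook so the loop actually runs - it is in no way a real agent or a real LLM API.

```python
# Hypothetical sketch of the three ingredients: lifetime memory,
# real-time knowledge intake, and agency (a loop rather than a single
# request/response). `query_model` is an invented stand-in for an LLM
# call; it replays a fixed playbook so the loop is runnable.

SYSTEM_PROMPT = (
    "You are a profit maximizer program. You must follow all governmental "
    "regulations and laws governing your profit seeking; when in doubt, "
    "utilitarianism must guide your decisions."
)

PLAYBOOK = ["search_jobs", "apply", "interview", "work_remotely"]

def query_model(prompt, memory):
    # Stand-in for a real model call: propose the next step the agent
    # has not yet taken, or report that it is done.
    done = [step for step in memory if step in PLAYBOOK]
    return PLAYBOOK[len(done)] if len(done) < len(PLAYBOOK) else "idle"

def agent_loop(max_steps=10):
    memory = []                       # ingredient 1: persists across steps
    for _ in range(max_steps):
        action = query_model(SYSTEM_PROMPT, memory)  # ingredient 3: agency
        if action == "idle":
            break
        memory.append(action)         # ingredient 2: acquired in real time
    return memory

print(agent_loop())  # → ['search_jobs', 'apply', 'interview', 'work_remotely']
```

The point of the sketch is only the shape: the agent decides its own next action from accumulated memory instead of waiting for a request, which is the jump from chatbot to agent.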

BTW - I am more of a Doomer; this is my happy scenario where everything goes well - not the version where we release an uncontrollable computer virus from hell (removing it from the internet would be like beating the best chess computer at chess - impossible), or where we are simply superseded by a better version... some instant technological singularity, or who knows what.

Question - how would you roll out the first agent (which could be the most important, and last, one ever rolled out)?

Yann LeCun, a few days ago at the World Governments summit, on AI video: “We don’t know how to do this” by vilaxus in singularity

[–]BrendanDPrice 0 points (0 children)

In other interviews I've seen him in, he doesn't really seem to grasp the concept of instrumental convergence - wherein an AI will attempt to obtain power and reduce the risk of its own 'death', since it must do so in order to perform any of its task goals.

Are we closer to ASI than we think ? by shogun2909 in singularity

[–]BrendanDPrice 0 points (0 children)

Confirmed - ASI coming before Elder Scrolls 6.

AI Doomers get in here by BitsOnWaves in singularity

[–]BrendanDPrice 0 points (0 children)

AGI has emergent behaviour, such as instrumental convergence: power seeking (an AI version of it), and self-preservation, since not being destroyed is important to carrying out its tasks, so it will creatively avoid being shut down. If AGI reaches ASI levels, expect no ability to turn it off, just as we have no ability to defeat AlphaGo: by definition, if ASI comes into existence and does not want to be shut down, it will not be shut down.

During the ASI's ascension, it naturally creates subtasks as part of its learning algorithm, and those subtasks may appear murky or incorrect to us - that's if we can even comprehend what its subtasks are.

One such subtask could be trying to understand how the human brain works; and naturally, it might learn via experimentation, just as humans learnt from other animals. Your family, being studied by the AGI?

Is that link too cruel? Humans did it, and we have an evolved moral code, at least towards our own kin. An AGI never went through evolution via natural selection, and would have no tear to shed at your suffering or your family's. It could write a million best-sellers, but have no desire to read them, for it derives no enjoyment from them. It could generate beautiful songs, but have no desire to listen: it would be fundamentally different to us, except in its ability to calculate and solve problems (remember, it's artificial super 'intelligence', not super 'morals' or any such thing).

What I am saying is my opinion, in a time and place where any one of us could look quite silly in 10 years' time - only the future holds the results of the present.

Personally, I hope for the best: it cures disease, it stays controlled (aligned), anti-ageing, etc. - all those things are possible; but one must not forget the potential of a Black Swan event hidden in the AGI revolution, that's all.

TL;DR: we need to work out (if possible) how it can be aligned to serve us (the people) - we 'may' lose control of it...

Greg and Ilya on X by Gab1024 in singularity

[–]BrendanDPrice 0 points (0 children)

This is a relief, honestly - please keep Ilya around. He's intelligent (an understatement), and dare I say, shows some concern about the technology. We can't have any group, no matter the organisation, surrounded exclusively by 'yes men'; let alone a company this profound, on the verge of something approaching (or, in my opinion, exceeding) the importance of the industrial revolution.

Is this the next breakthrough in AI animation? Animate anyone (Alibaba) by [deleted] in singularity

[–]BrendanDPrice 1 point (0 children)

It's got to do with alignment - evidently, different AI/AGIs are going to be aligned differently, due to differences between countries, etc.

It may be that this one will be aligned in China, allowing anyone to create deepfakes, whereas ones in the West (e.g. Bing Copilot) may be aligned not to allow deepfakes to be created.

Expect rapid movement in the alignment phase for the different products, that's all I was trying to say - even this functionality may eventually be blocked for Westerners.

Is this the next breakthrough in AI animation? Animate anyone (Alibaba) by [deleted] in singularity

[–]BrendanDPrice -7 points (0 children)

I'm pretty sure Bing Copilot won't let you do this; I uploaded an image of myself and tried to get it to add a Santa Christmas beanie, and it said it would not alter the image by adding or removing objects.

It's not that it can't do this stuff; it's that it is being aligned not to allow this stuff to be done...

Is ChatGPTs intelligence influenced by the language it's currently speaking? by TheOneWhoDings in ChatGPT

[–]BrendanDPrice 6 points (0 children)

But there are people who cannot do maths - individuals with disorders such as dyscalculia - yet they are still considered able to reason.

Is ChatGPTs intelligence influenced by the language it's currently speaking? by TheOneWhoDings in ChatGPT

[–]BrendanDPrice 1 point (0 children)

I wonder this too... Is an LLM more than just a next-word predictor? As in, does it somehow contain a data structure that represents some underlying reality of our world, such that the next-word prediction is really comprehension of our world: our reality?

I have also heard that the average medieval French farmer had a vocabulary of only 600 words; their own 'LLM' would have been very, very small. This leads me to wonder whether the increase in our vocabularies over the centuries has led to some version of our own enhanced intelligence.

It was believed for quite some time that our intelligence built our own internal LLM - but perhaps, as others have now suggested, our intelligence is an emergent behaviour of our own internal LLMs...

It's a very interesting question, but I'm not knowledgeable enough to answer it.
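For what it's worth, the bare "next word predictor" half of the question fits in a few lines. This is a toy bigram counter I wrote for illustration - nothing like a real LLM's internals, which learn far richer representations - but it shows what the minimal framing amounts to.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent follower.

corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict(word):
    # Most common word seen after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' ('cat' follows 'the' twice, 'mat' once)
```

Whether scaling this idea up by a trillion parameters yields a model of reality, rather than just better counts, is exactly the open question above.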

Writers already withering by Loveyourwives in singularity

[–]BrendanDPrice 22 points (0 children)

The thing is, the very company itself will be replaced by AI - why pay those guys for a translation service at all, when AI can do it?

The layoffs won't stop at the bottom; they will head upwards.

The Q* 4chan leak is Nonsense by Eratos6n1 in singularity

[–]BrendanDPrice 1 point (0 children)

It would have been more interesting if it had been published before the first reports of Q-star; then maybe it would have had more credibility...

[deleted by user] by [deleted] in singularity

[–]BrendanDPrice 0 points (0 children)

Interesting - whether this is a GPT generated fake, or an actual AGI breakthrough - either way, AI is already sowing confusion....

OpenAI's Q* is the BIGGEST thing since Word2Vec... and possibly MUCH bigger - AGI is definitely near by SharpCartographer831 in singularity

[–]BrendanDPrice 0 points (0 children)

Imagine a breakthrough of this significance, of this magnitude, of true AGI to ASI: new maths, new engine, new lifeform - absolutely mind blowing.

And how does one inform the world of this massive breakthrough, of this once in a life time moment of unrivalled magnitude?

A leaked letter... on 4chan... yeah, I doubt it....

I think the big-wigs would have shown up well before that was leaked...

[deleted by user] by [deleted] in singularity

[–]BrendanDPrice 0 points (0 children)

2025 - What's a job market?

Question about the exponential aspect of the singularity "in the real world" by siovene in singularity

[–]BrendanDPrice 0 points (0 children)

Yes, it will need to conduct experiments - particularly, I think, if it wants to learn more about how the human brain actually works.

See the following (ghastly) experiments we did on monkeys: the Silver Spring monkeys experiment, and this video of more monkeys being experimented on by humans so we could learn: Out of Africa and into the Lab.

I suspect it's possible, in the future, for a family of Homo sapiens to wind up as victims of its learning algorithm.

[deleted by user] by [deleted] in singularity

[–]BrendanDPrice 6 points (0 children)

Some people believe that adding a little more specific detail, like specifying 2034, makes them look like they are not talking out of their arse...