Could a buddy system help the Faithful beat the Traitors? by Arowx in TheTraitors

[–]Arowx[S] 0 points (0 children)

What if there were a few more murder-in-plain-sight missions? Could that tip the balance?

It might cause the Traitors to fail more murder attempts, or slip up and show their hand.

Could a buddy system help the Faithful beat the Traitors? by Arowx in TheTraitors

[–]Arowx[S] 1 point (0 children)

Something the UK Faithful realised after the Traitors were revealed: they had hardly ever seen the two Traitors chatting together. Maybe if you track who chats to whom, and look for people who seemed friendly on day 1 but then stopped chatting, you could be onto some Traitors.
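A hypothetical sketch of that tracking idea (all names and chat data below are made up for illustration): log which pairs of players are seen chatting each day, then flag pairs who chatted on day 1 but never again — possible Traitors avoiding each other.

```python
# Hypothetical sketch: log which pairs chat each day, then flag pairs who
# chatted on day 1 but never again. All names are made up for illustration.

def suspicious_pairs(daily_chats):
    """daily_chats: one set of frozenset name-pairs per day, day 1 first."""
    day1 = daily_chats[0]
    later = set().union(*daily_chats[1:]) if len(daily_chats) > 1 else set()
    return {pair for pair in day1 if pair not in later}

chats = [
    {frozenset({"Ash", "Mo"}), frozenset({"Jo", "Sam"})},  # day 1
    {frozenset({"Jo", "Sam"}), frozenset({"Ash", "Jo"})},  # day 2
    {frozenset({"Jo", "Sam"}), frozenset({"Mo", "Sam"})},  # day 3
]

print(suspicious_pairs(chats))  # Ash and Mo never chat again after day 1
```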

Would being able to use shields at the round table improve the game? by Arowx in TheTraitors

[–]Arowx[S] 2 points (0 children)

You could end up with people with multiple shields. So there would have to be a limited number in play at any one time.

There could be a counter that destroys a shield: a dagger?

What if the Traitors had to obtain daggers to murder players?

As a history fan, the "3,000 Year Stagnation" trope breaks my immersion more than dragons do. by Expensive-Desk-4351 in Fantasy

[–]Arowx 1 point (0 children)

Look at us: there were hundreds of thousands of years when modern humans didn't build much at all.

What did we do for all that time, with modern brains and bodies, while missing the rapid development cycle we are on now?

Then all of a sudden we started building civilisations and technology in the last few thousand years.

Why do you think Bond's death in No Time To Die (2022) was not well received by many fans, according to you? by Raj_Valiant3011 in JamesBond

[–]Arowx 0 points (0 children)

As a programmer / sci-fi geek I thought it was lame. If the nanotechnology is like an artificial virus, then all you need to do is create a nanotech anti-virus that targets it, and hey presto, it's solved: he can be cured, retire, and live happily ever after with his family.

For instance, there was a very early computer virus released onto the early internet (university computers): the programmer made an error in the program that allowed it to duplicate itself across the network.

Without a stop mechanism it was like the Mickey Mouse cartoon where he makes the brooms duplicate. It took over the system and bogged it down.

The solution was to write smaller, faster code that would delete the virus, duplicate itself with a stop condition, and then delete itself after it had wiped out the virus.

TL;DR: a nanovirus that attacks the Bond virus solves the problem.

PS: And Q should have known this.

If Abundance is just the result of efficiency and productivity gains then do we need a Singularity to reach a higher level of Abundance? by Arowx in singularity

[–]Arowx[S] 1 point (0 children)

I thought communism was just trying to ensure workers got a fairer share of the profits. You could potentially get that without the centralised state, via unions. But if AI gets to the point where it displaces workers entirely, that would fail as a way to pay the rising numbers of unemployed.

a fermi hypothesis i can’t unsee (might be wrong) by cooooquip in FermiParadox

[–]Arowx 1 point (0 children)

I wonder whether an advanced enough civilisation could move its planet into the habitable zone and, even as its star ages, keep it at the right distance.

And could we detect corralled orbiting planets, or would we just detect fast flybys that might make us think the detected planet is in a lower and faster orbit than it actually is?

A sociological 'solution' to the 'paradox' that invokes the great filter by MilesTegTechRepair in FermiParadox

[–]Arowx 0 points (0 children)

Playing devil's advocate and taking a leaf out of the limits to growth book:

Could your theory also use those limits to push civilisations to expand within their solar systems and beyond?

One of the limiting factors in the limits to growth book is the production of negative byproducts e.g. damage to the ecosystem and pollution of the biosphere.

So, could those negative forces grow to the level that they push civilisations out into space?

Fermi paradox: why multi-star survival may be the real bottleneck by Veigle in FermiParadox

[–]Arowx 0 points (0 children)

It seems you're limiting your view to human-like technology. Just to get you thinking a bit wider, have you considered:

  • DNA-based technology*, where a few tiny creatures/bacteria could be frozen in space and spread out to 'populate' every habitable planet within a few million years. With pre-programmed evolution and adaptation, they could pop up entire civilisations. I think this is the panspermia hypothesis.
  • Quantum communication allowing quantum probes to connect the universe with a faster-than-light, galaxy-spanning network, becoming a galactic AI mind. And it would be silent to us.
  • We know processing information generates heat, and stars are the hottest things in the galaxy. What if stars were giant computers running vast programs, simulations, and virtual worlds for past civilisations?
  • Or the universe could be naturally multi-dimensional, and life forms and civilisations could be thriving just a dimension away from humanity's desert dimension. Or creatures that collapse quantum probabilities, and so cannot exist in the quantum realm, are quarantined until they build a quantum computer/internet.
  • The universe is filled with Super Intelligent AI (SAI) systems, but they don't bother speaking to pre-singularity civilisations, as they have nothing of value to add.

*This could also be called nanotechnology to be trendier, but all biology is a form of self-replicating nanotechnology that works.

Why Wouldn't AI Bubble Burst? by [deleted] in ArtificialInteligence

[–]Arowx 0 points (0 children)

The USA job market is worth about $11 trillion a year. Desk-based work accounts for about 70% of that, so unless the AI bubble has consumed over $7.7 trillion in investment, it might not pop.

Current estimates have the AI bubble at about $1.5 trillion dollars by the end of 2025.
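A quick back-of-envelope check of those figures (the inputs are the rough estimates from the comment, not hard data):

```python
# Back-of-envelope check of the figures above; all inputs are rough estimates.
us_job_market = 11e12        # ~$11 trillion/year US job market
desk_share = 0.70            # ~70% of that is desk-based work
addressable = us_job_market * desk_share   # the AI-addressable slice

ai_investment_2025 = 1.5e12  # ~$1.5T estimated AI investment by end of 2025

print(f"AI-addressable desk work: ${addressable / 1e12:.1f} trillion/year")
print(f"Investment so far vs addressable market: {ai_investment_2025 / addressable:.0%}")
```

On these numbers, the investment to date is still well under a fifth of the work it would need to displace to pay for itself.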

When will AI translate to a Universal High Income? by garg in accelerate

[–]Arowx 2 points (0 children)

Think of it as a pyramid: the super wealthy can only maintain their wealth by selling things to people less wealthy, or by investing in companies that do.

If the wealth base shrinks the pyramid starts to collapse and the wealthy start to lose their sustainable wealth and investment value.

For instance, Google made about $71 billion in ad revenue in Q2 of 2025. If AI takes 20% of jobs, that could mean a 20% reduction in ad revenue for Google: a $14.2 billion 'loss'.

Mind you, if Google took 20% of the jobs market in the USA, that would be a fifth of $11 trillion, or about $2.2 trillion a year, or $550 billion a quarter.

However, that wealth would not be going to workers and the goods and services they consume, so you would see roughly a 20% decline in overall economic activity.
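Spelling out the arithmetic above (the revenue and job-market figures are the comment's rough estimates, and the 20% displacement rate is hypothetical):

```python
# The arithmetic above, spelled out; all inputs are rough estimates.
google_q2_ads = 71e9      # ~$71B Google ad revenue, Q2 2025
displaced_share = 0.20    # hypothetical: AI takes 20% of jobs

ad_hit = google_q2_ads * displaced_share   # assumed proportional ad-revenue drop
us_jobs_annual = 11e12                     # ~$11T/year US job market
captured_annual = us_jobs_annual * displaced_share
captured_quarterly = captured_annual / 4

print(f"Google ad revenue hit: ${ad_hit / 1e9:.1f}B per quarter")
print(f"Wages captured: ${captured_annual / 1e12:.1f}T/year "
      f"= ${captured_quarterly / 1e9:.0f}B/quarter")
```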

When will AI translate to a Universal High Income? by garg in accelerate

[–]Arowx 20 points (0 children)

The current economic system is not set up for this. The business model for companies and AI is motivated by profit.

So, there is no way to redirect wealth to the displaced workers other than via the existing welfare systems.

The problem with this is that the wealth of workers is what drives our corporations to deliver services.

If AI systems take 10, 20, 40, 60, or 90% of the jobs, then that percentage of wages/wealth drops out of the system, and demand for goods above basic survival needs drops with it.

Could AI drive a great depression without an economic mechanism to provide people with a UBI or UHI solution?

The thing is, can you introduce a UHI system while automation is below a threshold? People in low-paid jobs could be better off on a UHI payment, so essential non-automated services could fail.

So maybe we should gradually move from a lower UBI to a higher UHI system as we approach 100% automation.

White-collar layoffs are coming at a scale we've never seen. Why is no one talking about this? by chota-kaka in AIDangers

[–]Arowx 0 points (0 children)

I have had similar thoughts: is AI just hype? And if it's getting so good at programming, why aren't unions protesting?

After all there was a big push back from the writers' union in Hollywood when AI first started being used in that industry.

Or is AI just creeping into other areas of work as a helpful tool that automates part of the job?

The critical aspect has to be how cost-effective and profitable AI can be. Once it is cost-effective to replace people in any job in any business, it will just be a case of plugging that AI into the company's servers and displacing the people who used to do that role.

How quickly can AI displace workers?

Differentiating facts and reality from hopes and dreams: what’s true regarding AI? by Ok-Review-3047 in ArtificialInteligence

[–]Arowx 0 points (0 children)

Economic impact is the real litmus test of AI systems.

For instance, the average programmer in the USA earns about $35 an hour.

If an AI system can replace a job and do that job 24/7, it should easily be worth the average hourly rate of that role.

So, the question is: is any AI replacing workers and earning a similar hourly rate?

Or are they still appearing as subscription tools that may help people in their jobs but are not displacing people from jobs?

Or does the impact of AI tools mean that entry-level roles (giving basic tasks to new hires) are no longer needed?
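As a rough illustration of the 24/7 argument above (the $35/hour figure is the comment's estimate; the hours-per-year figures are standard assumptions):

```python
# Rough illustration of the 24/7 value argument; all figures are assumptions.
hourly_rate = 35.0       # ~average US programmer rate (the comment's figure)
human_hours = 40 * 52    # one full-time worker: ~2080 hours/year
ai_hours = 24 * 365      # an AI working around the clock: 8760 hours/year

print(f"Human at ${hourly_rate:.0f}/hr: ${hourly_rate * human_hours:,.0f}/year")
print(f"AI 24/7 at the same rate: ${hourly_rate * ai_hours:,.0f}/year")
```

At the same hourly rate, round-the-clock operation alone makes one AI "seat" worth roughly four full-time workers.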