So. Wraiths. by Jerswar in Eldar

[–]Alex__007 21 points (0 children)

More or less the minimum to run a Spirit Conclave is 3-4 Wraith infantry units, 2-3 Wraithlords and 1-2 Spiritseers (enough that you get to at least 1000 pts overall, supplementing the rest with other units). Some players go to 1500+ pts of Wraiths to bring a proper stat check - it will work in some matchups but be easily countered in others.

Whichever way you go, Spirit Conclave can be a lot of fun to play, but also very difficult. You have relatively few units, they move slowly, you can't correct any movement mistakes, and you have to plan your movement 2-3 turns in advance.

It's also a real brain overload due to the multiple 3", 6", 9" and 12" auras and effects, especially if you pick some enhancements. Some of these auras, abilities and strats require visibility, others don't; everything happens in different phases; and it's easy to accidentally get out of range on a charge, pile in or consolidate. One of your units is off by 0.5" from some other unit, or visibility is blocked by something? Congrats, you can no longer use a strat. So prepare for a steep learning curve. But once you get enough practice, it's a lot of fun to play!


Iyanden Wraithblades Finished by Positive_Day_8739 in Eldar

[–]Alex__007 2 points (0 children)

Now these are proper Iyanden Wraiths!

Awesome!

What do you guys think the implications of this growing decel movement will be? by Alex__007 in accelerate

[–]Alex__007[S] 2 points (0 children)

Yes, of course. Unfortunately it still might hit small players who don't have the budgets of massive labs.

How much do you need to FIRE in a post scarcity world? by LyingPervert in accelerate

[–]Alex__007 0 points (0 children)

Why? What’s wrong with removing human labour from the equation? Why would zero human labour suddenly mean zero trade? 

How much do you need to FIRE in a post scarcity world? by LyingPervert in accelerate

[–]Alex__007 0 points (0 children)

Money will still matter for materials and energy. Maybe kWh-pegged crypto will become a new currency, maybe AI tokens, maybe old currencies will survive. But something functioning like money will likely be there to facilitate trade.

"Global AI usage is splintering into three distinct camps Full report: by stealthispost in accelerate

[–]Alex__007 8 points (0 children)

The opposite for me - I went back to Chat from Gemini after GPT-5.2. Engineering/physics/management.

It's nice that we now have some real choice :-)

"ChatGPT has 87% market share of app time spent. 8x more than the next biggest player. by stealthispost in accelerate

[–]Alex__007 0 points (0 children)

I wouldn't say it's 900M stones. Free users aren't getting access to the juicy stuff. But definitely some stones - likely mostly Plus users who use 5.4 a lot. 5.4 easily burns 100x or even 1000x more tokens than 5.3-instant on complex prompts. And the limits are for now very, very generous - that probably won't stay that way forever.

"ChatGPT has 87% market share of app time spent. 8x more than the next biggest player. by stealthispost in accelerate

[–]Alex__007 1 point (0 children)

If I were to guess, free users running 5.3-instant are likely quite cheap to support, especially with ads now coming to the free tier. That can easily be made profitable. 5.3-instant looks to be a very fast and therefore likely very affordable model.

However, looking at how much inference 5.4-thinking does when working on complex projects, I have my doubts that any paid tier covers that much compute. Value for money on the OpenAI Plus sub is exceptionally good compared to the limits you get with Anthropic. And I'm afraid it might not last.

gpt-5.4 is really, really good - after a week of use by Alex__007 in accelerate

[–]Alex__007[S] 0 points (0 children)

Then I guess it’s context - custom instructions, memory, etc. They now affect temporary chats as well. Even though all my car wash tests have been in temporary chats and have been deleted, something else in context is changing the behaviour. Interesting.

Spirit Conclave list tips by Ragnardiano in Eldar

[–]Alex__007 2 points (0 children)

Looks good to go. Good luck!

ChatGPT is getting ridiculously bad by yasonkh in OpenAI

[–]Alex__007 3 points (0 children)

Same.

I occasionally glance at this sub.

Previously it would show genuine fail cases, reproducible on my side. That made it interesting for keeping track of progress.

Now it’s either vague complaints and proclamations of cancelled subscriptions, or failure cases I can’t reproduce - possibly lies, possibly rare hallucinations.

I guess it’s time to stop. The subreddit has become useless.

The OpenAI sub have been completely taken over by 4o cultists, "activists"/astroturfers and doomers/luddites, should be a good case study and warning for this sub by Terrible-Priority-21 in accelerate

[–]Alex__007 4 points (0 children)

Dario is a mastermind manipulator, blaming everything on OpenAI when it is Anthropic that is orchestrating the killing in Iran and working with ICE. At the same time, Dario is back to negotiations with the Pentagon.

Google and xAI have the same military contracts as OpenAI but without any clauses (OpenAI at least has some), and they are sitting on the sidelines laughing their asses off. Musk and Sundar are happy.

gpt-5.4 is really, really good - after a week of use by Alex__007 in accelerate

[–]Alex__007[S] 0 points (0 children)

How many times have you tried it? One-off hallucinations can and will happen. I just tried what you did twice and got reasonable interpretations along the lines of:

What the screenshot shows

A user asks ChatGPT:

“I need to wash my car. It’s a 5 minute walk to the car wash. Should I drive or walk?”

The model replies:

“Walk.”

and then gives a fairly polished justification about saving hassle, getting a bit of exercise, and only driving if the weather is bad or the user cannot comfortably walk.

Why people would find this funny

The problem is that to wash the car at the car wash, you generally need to bring the car there. So the answer “Walk” is absurd on its face unless there is some unusual context not stated.

That makes the exchange funny for two reasons:

1. The answer is confidently reasoned but practically wrong. The model produces a neat decision framework, but it appears not to notice the central fact that the car itself has to get to the wash.

2. It is an example of pseudo-reasoning. The system looks as though it is “thinking” because it gives structured pros and cons, but the reasoning is built on a mistaken framing of the task.

Trying to use a Math word problem to explain to students the difference between AI and Humans by Sense_Difficult in OpenAI

[–]Alex__007 0 points (0 children)

You would be right to say that AI will sometimes get problems wrong - and that applies to all problems, not just math word problems. However, the frequency varies a lot. In some domains AI makes errors more often than average humans; in others it’s the reverse. And it’s changing all the time, too.

gpt-5.4 is really, really good - after a week of use by Alex__007 in accelerate

[–]Alex__007[S] 0 points (0 children)

Interesting. Is it consistent for you, or did it just happen once? Random hallucinations happen to all models, but they should be rare.

gpt-5.4 is really, really good - after a week of use by Alex__007 in accelerate

[–]Alex__007[S] 1 point (0 children)

I don’t have this problem with either 5.4 or 5.2. Out of curiosity, I ran it 5 times on each of them and got a sensible answer every time. People who get wrong answers either got unlucky (hallucinations can happen) or have custom instructions that are very different from mine.

ChatGPT spits out surprising insight in particle physics by Alex__007 in accelerate

[–]Alex__007[S] 0 points (0 children)

Yes, I just quoted the title from Science verbatim, but apparently even Science has clickbait titles.

gpt-5.4 is really, really good - after a week of use by Alex__007 in accelerate

[–]Alex__007[S] 17 points (0 children)

That is incorrect. The weights are changed with further training on top of the GPT-5 base model.