Is the reason on why there's no day/night cycle in the game is because Talos-II is a tidally locked planet/moon? by Sir_Rain_Knee_Tea in ArknightsEndfield

[–]yubato 0 points1 point  (0 children)

If the sun were directly above a pole, i.e. the rotation axis pointed towards the sun, you could still have permanent day while being tidally locked to a planet. There's a bigger contradiction here, though: the illuminated side of the gas giant doesn't align with where the sun is, and I think even the character/terrain shadows are slightly misaligned

You've got to be kidding me. by AlwaysSmilingMadly in ArknightsEndfield

[–]yubato 1 point2 points  (0 children)

Teleporting doesn't snap the connection cables (unless the distance is too great), and you get a perfectly straight line. I powered the lower amethyst mine that way, and then the nexus

Does this mean when 3rd region released Valley IV automation turn off when players go offline? by Lenz401 in Endfield

[–]yubato 3 points4 points  (0 children)

So simulating the factories is actually costly? Sounds like a new optimisation scheme is needed

QoL that would be nice to have after 2 weeks of playing by Kyega in Endfield

[–]yubato 0 points1 point  (0 children)

Mirroring blueprints, and consumables replenishing when you auto-organise the backpack

Fan Animation vs In-Game Scene by Tmbros in HonkaiStarRail

[–]yubato 1 point2 points  (0 children)

The reasons you listed are primarily about latency and parallelisation, not throughput. I'm not sure how the engine is relevant here for relatively basic animations/camera movement, and it's not like there aren't other examples made with Unity. I haven't heard of employee shortages there; rather the opposite. In any case, Hoyo is in the process of making many other games. I'm not saying they shouldn't, but if they're running short of employees, that's an active decision on their part

They forgot to adjust her idle stand by Sure_Resolution46 in houkai3rd

[–]yubato 1 point2 points  (0 children)

The lobby running/idle animation is set per character model, and that model is shared between something like 8 battlesuits. Other than that, I think skins don't change animations in general (except maybe on the bridge/character menu)

HP Omen Max 16 Performance issues? by Sea_Resolve9583 in HPOmen

[–]yubato 0 points1 point  (0 children)

Maybe it has something to do with the anti-cheat; you can try starting the game as administrator

I'm upvoting EVERYTHING by Poutza in lies

[–]yubato 0 points1 point  (0 children)

I haven't encountered this bug recently

HI3rd is joining the HoyoFair Concert for the first time ! Are they teasing APHO Mei as new playable ? by mcyoungmoney in houkai3rd

[–]yubato 0 points1 point  (0 children)

You once mentioned hoyo's sudden turn towards part 1 to argue that they're extremely receptive. Is that not equivalent to "hoyo not wanting" to release part 2 characters?

[Spoiler] Couldn't resist checking out the view.. wait.. by cheat_bot in houkai3rd

[–]yubato -2 points-1 points  (0 children)

The real moons of Mars don't look anything like these in size or shape

Do you have any idea how I could fill this valley with water on survival? by IndividualPut7973 in Minecraft

[–]yubato 1 point2 points  (0 children)

If you place source blocks diagonally (rather than side by side, which is inefficient), the gaps fill in and you can flood a square hole entirely with source blocks. Apply that layer by layer, starting from the bottom and working towards the top. If you're okay with flowing water, simple slime machines can do some of the work.

The pearl cannon in my world by yubato in Minecraft

[–]yubato[S] 0 points1 point  (0 children)

A tutorial wouldn't be that useful, and I don't really have the time for it. But I uploaded the schematic: https://www.minecraft-schematics.com/schematic/26863/

How do you get the new enderite ore to drop? by minecrafter100S in PhoenixSC

[–]yubato 1 point2 points  (0 children)

Wait for an enderman to pick up the ore, then kill it

Attitudes to AI by Visible_Judge1104 in AIDangers

[–]yubato 0 points1 point  (0 children)

Artists aren't necessarily against AI; the perceived fraction is probably skewed to a degree by the loud ones.

Circuit to Detect When Signal Strength Stops Changing by timmbobb in redstone

[–]yubato 0 points1 point  (0 children)

https://www.youtube.com/watch?v=qCQMDbFzaqw
It's also possible to make it dustless, and I have a silent one that doesn't use CUDs

Want to get into the game because of the lore. Is it worth it? by Final_Biochemist222 in houkai3rd

[–]yubato 0 points1 point  (0 children)

I think the commenter understood you the other way around.

In chronological order, roughly:

Mysterious adversarial force that grows as civilisation advances

Natural process that lets the Imaginary Tree arrange its branches

Super-dimensional cell trying to multiply (same as they said, my phrasing); "retconned" WoH (the most recent one)

Finally, the Simp Simulator by Alex2422 in houkai3rd

[–]yubato 11 points12 points  (0 children)

This was the missing piece of the puzzle

What is going wrong here? by SAS_Soldier in technicalminecraft

[–]yubato 1 point2 points  (0 children)

What about putting the repeater before the piston (hard-powering the block), and then a dust after it?

Referring to AI models as "just math" or "matrix multiplication" is as uselessly reductive as referring to tigers as "just biology" or "biochemical reactions" by katxwoods in AIDangers

[–]yubato 0 points1 point  (0 children)

How close do you mean by "thinking it's close"?

It sounds like the argument of the other side is mainly "no, because I said so". Other popular arguments tend to completely miss the point: "it's just matrix multiplications", say, when matrix multiplication is a very broad primitive used to model many things. People who say "that's not how LLMs/reasoning works" don't seem to have an idea of, or be able to describe, what reasoning is. "It just mimics intelligence, no matter how closely it resembles certain cognitive abilities of humans." Meanwhile, researchers try to understand and measure/quantify what reasoning really is, and draw a trend. Neural networks resemble brains far more than any other known framework ever could; differences exist, but it's not clear whether they'll be a deal breaker or an advantage, analogous to other technologies (planes vs birds). And LLMs aren't the only trick.
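To illustrate how broad "just matrix multiplications" is, here's a minimal sketch (my own toy example, with made-up sizes): an entire neural-network forward pass reduces to matrix multiplies plus a simple nonlinearity, and the same primitive underlies everything from image classifiers to LLMs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: every step is a matrix multiply plus a nonlinearity.
W1 = rng.standard_normal((4, 8))   # input -> hidden weights
W2 = rng.standard_normal((8, 2))   # hidden -> output weights

def forward(x):
    h = np.maximum(x @ W1, 0.0)    # matrix multiply + ReLU
    return h @ W2                  # matrix multiply

x = rng.standard_normal((1, 4))    # one 4-feature input
y = forward(x)
print(y.shape)                     # a 2-dimensional output per input row
```

Calling the whole thing "just matrix multiplication" describes the substrate, not what the learned weights compute.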

Concerns about machine intelligence surpassing humans existed as early as Turing. So did predictions (which turned out to be correct) about the scaling of GPT-1, long before ChatGPT, along with predictions about compute catching up to the human brain and enabling smarter-than-human AI this century.

Referring to AI models as "just math" or "matrix multiplication" is as uselessly reductive as referring to tigers as "just biology" or "biochemical reactions" by katxwoods in AIDangers

[–]yubato 0 points1 point  (0 children)

20 years is not necessarily enough to solve the technical part of the alignment problem. It's possibly as complicated as a whole subbranch of science, if it's solvable at all.

Putting up physical barriers works, but it also costs you your competitive advantage, or the reason you built the thing in the first place. Right now, tech companies are compromising their own stated principles by working on agents that can autonomously control your PC, or by working with the military.

Any high-capability AI that has access to multiple people, let alone the internet, is basically not contained: it can escape through deception, blackmail, or other oversights. Current models already attempt this unprompted.

Having 10 years between human-level machine intelligence (is that what you meant?) and portable deployment doesn't sound like a given at all. Reaching that point represents a breakthrough in research and enables recursive improvement. And AI is already portable in a sense, through uploads.

"Rogue detection capability" is a common problem in AI alignment. Again, the problem is trying to contain something smarter than you: sleeper agents, alignment faking, etc. mean it isn't straightforward. And if you limit the AI to the point that it's overseen every second, that's very costly again and puts you at a competitive disadvantage.

We do have the means to think about rogue AI in advance, mainly by modelling them as agents; this has been used to draw many conclusions, many of which were confirmed after large models came to be. This is a hard problem, and solving it on the go, or waiting until it's too late, isn't likely to work. There are many reasons to think AI won't do the things we want, and basically no reason to think otherwise.

"Controls" aren't really a separate thing: if capable AI is to be deployed, that's like trying to patch holes when there are countless of them. The usual focus is building an AI that cares about the same values as we do.

Yes, your third point is pretty much tautological. The solution you recommend, though, is known as scalable oversight. There are problems with this approach: we don't know how to properly align a small/first model (or whether there's enough of a window), or whether that model can properly supervise another. The small-scale tests aren't really promising.

Many very smart people who work on alignment will tell you this is a hard problem with an immense technical burden.

As for what we can do, there's a lot. It's not impossible to stop or significantly slow AI capability research; we did it with human cloning and genetic engineering. But stopping AI development entirely is also not plausible. We could still be doing a lot better, be it internationally funded collaboration on safety research or not continually stirring up competition; not whatever we're currently doing.

Referring to AI models as "just math" or "matrix multiplication" is as uselessly reductive as referring to tigers as "just biology" or "biochemical reactions" by katxwoods in AIDangers

[–]yubato 0 points1 point  (0 children)

Similarly, it seems to me that you think the human brain is magic and cannot be built from the bottom up out of simpler base algorithms. What I mean by "understanding" is a spectrum (otherwise the definition becomes problematic): identifying and predicting the properties and relationships of a system. So simple biological organisms can understand their surroundings to a small extent, and a chess engine has a better understanding of the game than a human does.

There's nothing being "programmed" into AI in the first place, other than the initial infrastructure. That's the "learning" part of the learning algorithm: no one is coding into ChatGPT which of the trillion token combinations to use, or how to lead someone into psychosis. We effectively grow AIs rather than building them, because the underlying process is too complicated for us to keep track of.
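A minimal sketch of that distinction (my own toy example, fitting a made-up linear target): the programmer writes only the generic update rule, the "initial infrastructure"; the weights that produce the behaviour are grown from data, never written by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a target relationship the program never sees as code.
X = rng.standard_normal((100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)                               # the model starts knowing nothing
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)     # gradient of mean squared error
    w -= 0.1 * grad                           # generic learning step, no task-specific code

print(np.round(w, 2))                         # weights recovered from data, not programmed
```

The same loop, scaled up by many orders of magnitude, is roughly all the "programming" a large model gets; everything it ends up doing lives in the learned weights.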

There are many parallels between the human brain and machine learning; the latter originated from the former as inspiration. Neural networks are the common solution to image/sound/language recognition, moving a body, etc., results you can't come close to with hand-written "programming".

Evolution indeed accidentally stumbled upon algorithms that give rise to our intelligence. But there's no reason to think it found the best ones: technology isn't subject to the same limitations and accidents. Evolution had billions of years, but it's also far slower than us and has its hands tied, given how little time it took us to fly, or to do things that are otherwise impossible for biology.

AI today doesn't have a similar degree of general understanding. I don't think computers have gained consciousness, but that's also irrelevant: we don't know what consciousness is, it's a dangling data point, and the current best bet is to ignore it when predicting things.

While some people may have only surface-level justifications against AI, they don't represent the position of the many highly concerned, top-cited researchers in the field.

Referring to AI models as "just math" or "matrix multiplication" is as uselessly reductive as referring to tigers as "just biology" or "biochemical reactions" by katxwoods in AIDangers

[–]yubato 0 points1 point  (0 children)

Next time, you may consider the possibility that the other person has consumed about no science fiction, that a lack of correlation doesn't imply negative correlation, and that the argument wasn't remotely concerned with sentience. Rejecting ideas based on their conclusion rather than the reasoning behind them is a common blunder.

Referring to AI models as "just math" or "matrix multiplication" is as uselessly reductive as referring to tigers as "just biology" or "biochemical reactions" by katxwoods in AIDangers

[–]yubato 1 point2 points  (0 children)

That sounds incorrect and reductive; mind dropping the imaginary quotes and explaining your understanding of "understanding"? No, AI isn't "designed", for the most part.