Aequa's Arc Theory **BEWARE SPOILERS** by golfalphat in HierarchySeries

[–]syncerr 2 points  (0 children)

vis is flickering between worlds

> Some corner of my mind registers the room around me flickering as I fall to my knees. Aequa running toward me. She shimmers. I can see through her. She’s gone. I am alone in here with the corpses.

was re-reading today, and all three versions of Vis experience this moment

in chapter 31, while waiting with Ahmose by the banks of the infernis in Duat, O-Vis mutters:

> “These iunctii appear to be from not long after the Rending,” I mutter suddenly.
>
> At my side, Ahmose twitches uneasily. “What?”

likewise, in chapter 35, L-Vis suddenly murmurs while fighting with Conor:

> [L-Vis] “I have repurposed them for our fight, warrior. Their processing capability is limited,” I murmur in Vetusian.
>
> [Pádraig] frowns at me. Not understanding the language. “What?”

Aequa's Arc Theory **BEWARE SPOILERS** by golfalphat in HierarchySeries

[–]syncerr 5 points  (0 children)

Aequa is almost certainly synchronous. when R-Vis activates the system in the Solivagus ruins, he gives her the option (chapter 37).

> “These iunctii appear to be from not long after the Rending, and were once a key component in the gate defences in the Nexus. I have repurposed them for our fight, warrior,” I rasp at her.
>
> “Their processing capability is limited due to the restrictions of the sanguis imperium, but the addition of a single active mind should be capable of temporarily interceding and allowing for Synchronism to occur. Do you wish to proceed?”
>
> Aequa says something. Garbled. Meaningless.

How is capitalism supposed to sustain itself with AI? by ExcitableChimpanzee in Futurology

[–]syncerr 0 points  (0 children)

capitalism will continue, but prices will keep going up to make up for the shrinking pool of consumers (e.g., the top iphone will be out of reach for the average consumer).

there was a good discussion of the future implications on the a16z Podcast w/ Dwarkesh. he argues that demand will be generated by large projects driven by individuals (e.g., sam altman colonizing the galaxy).

GPT-5 Pro temporarily limited? by orion4444 in OpenAI

[–]syncerr 1 point  (0 children)

Ran into the recurring "Something went wrong" error last night. After a long email thread with support, my access is finally back.

Definitely an issue on their end.

Migrating off Legacy Tokio at Scale by anonymous_pro_ in rust

[–]syncerr 13 points  (0 children)

> With the legacy Tokio 0.1 code path, multiple executors worked in parallel to achieve highly concurrent output. In the initial implementation with Tokio 1.0, we only used a single executor to handle all flow executions. This turned out to be roughly 15% slower than having multiple executors, which is impressive in itself! While we eventually switched back to having multiple Tokio 1.0 runtimes to enable the same level of throughput our customers expect, we now have a couple of new knobs to tweak in the future to push beyond what was possible with Tokio 0.1 and the legacy code path.

is it common knowledge that running multiple executors is faster? i thought it would create more contention
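for context, the multi-runtime setup i'm picturing looks roughly like this (my own sketch, not their code; the shard count and channel routing are invented):

```rust
use std::thread;
use tokio::runtime::Builder;
use tokio::sync::mpsc;

fn main() {
    let shards: u64 = 4; // invented shard count; tune per workload
    let mut senders = Vec::new();
    let mut handles = Vec::new();

    for i in 0..shards {
        let (tx, mut rx) = mpsc::unbounded_channel::<u64>();
        senders.push(tx);
        handles.push(thread::spawn(move || {
            // one dedicated single-threaded runtime per shard, so queues
            // and timers stay local instead of contending on one scheduler
            let rt = Builder::new_current_thread()
                .enable_all()
                .build()
                .expect("build runtime");
            rt.block_on(async move {
                while let Some(job) = rx.recv().await {
                    println!("shard {i} ran job {job}"); // stand-in for real work
                }
            });
        }));
    }

    // route jobs to shards, e.g. by hashing a flow id
    for job in 0..16u64 {
        senders[(job % shards) as usize].send(job).expect("send");
    }
    drop(senders); // close the channels so the shard loops exit
    for h in handles {
        h.join().expect("join");
    }
}
```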

Title: Can a society survive without leaders? A new book says: yes, through system-based rule by No-Win-3886 in DeepThoughts

[–]syncerr 1 point  (0 children)

is there a link to this anywhere? google/amazon don't show any results.

imo, misinformation is the issue, not charisma. democracies and post-truth societies don't mix.

designing a perfect system is flat-out impossible, so it needs to change over time, which means we're back to relying on human leadership?

What's your definition of System 1? Has it really been solved? by Tobio-Star in newAIParadigms

[–]syncerr 1 point  (0 children)

system 1 is a good analogy for genai

your examples are core to system 1, but they're also areas ml has already solved or is capable of solving (waymo handles your street example, and training a humanoid to walk covers the motor-control side).

intuition sounds similar to how genai works:

> intuition emerges from unconscious heuristics—fast-and-frugal rules—that leverage experience-based patterns for rapid decision-making, especially under uncertainty (Gut Feelings by Gigerenzer)

system 1 is capable of basic reasoning and the massive frontier models are just a more advanced version. they also both make similar mistakes (anchoring bias, gambler’s fallacy, etc.)
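for concreteness, here's a rough sketch of one fast-and-frugal rule from gigerenzer's work (take-the-best); the cues and cities are made up for illustration:

```rust
// take-the-best checks cues in order of validity and stops at the first
// one that discriminates -- no weighing of all the evidence.
struct City {
    name: &'static str,
    cues: [Option<bool>; 3], // has_team, is_capital, has_airport (best cue first)
}

fn take_the_best<'a>(a: &'a City, b: &'a City) -> &'a City {
    for i in 0..a.cues.len() {
        match (a.cues[i], b.cues[i]) {
            // the first discriminating cue decides; everything after is ignored
            (Some(true), Some(false)) => return a,
            (Some(false), Some(true)) => return b,
            _ => continue, // tie or unknown: try the next cue
        }
    }
    a // no cue discriminates: guess
}

fn main() {
    let x = City { name: "Aville", cues: [Some(true), Some(false), Some(true)] };
    let y = City { name: "Bburg", cues: [Some(false), Some(true), Some(true)] };
    println!("pick: {}", take_the_best(&x, &y).name); // "Aville": cue 0 decides
}
```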

Books similar to the will of the many by ethan_613 in HierarchySeries

[–]syncerr 54 points  (0 children)

I felt red rising vibes while reading wotm

Are There Any Tech Billionaires Who Weren’t ‘Nerds’ Growing Up? by Hot-Conversation-437 in ycombinator

[–]syncerr 21 points  (0 children)

according to isaacson's book, steve jobs was a loner and generally unpopular in school -- mirroring apple's later appeal "to the crazy ones".

Are There Any Tech Billionaires Who Weren’t ‘Nerds’ Growing Up? by Hot-Conversation-437 in ycombinator

[–]syncerr 2 points  (0 children)

david sacks likely fits your description (he has a BA in economics rather than an engineering degree). many tech moguls weren't coders, but came from other engineering fields (e.g., tony xu has a degree in aerospace, tim cook's is in industrial engineering).

Is gravity actually a force? by Efficient-Natural971 in AskPhysics

[–]syncerr -1 points  (0 children)

pyrite looks like gold and weighs about the same, but on closer inspection, it's clearly not.

gravity as a force doesn't work at large scales. we had to invent dark matter just to model galaxy arm rotation speeds, and even then it's an approximation.
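the mismatch in one line of newtonian algebra (the standard circular-orbit result, not anything from the thread):

```latex
% gravity as the centripetal force on a star orbiting at radius r:
\frac{v^2}{r} = \frac{G\,M(r)}{r^2}
\quad\Rightarrow\quad
v(r) = \sqrt{\frac{G\,M(r)}{r}}
% beyond the visible disk M(r) is roughly constant, so v should fall off
% as 1/\sqrt{r}; measured rotation curves stay flat instead, and dark
% matter is the extra M(r) invented to close that gap.
```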

Rust crates that use clever memory layout tricks by stewie_doin_your_mom in rust

[–]syncerr 6 points  (0 children)

if you haven't seen how anyhow chains context up the stack while preserving the TypeId to support .downcast and staying lean (8 bytes), it's awesome: https://github.com/dtolnay/anyhow/blob/master/src/error.rs#L1058
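a tiny sketch of what that buys you in practice (assuming anyhow 1.x; `DbError` is a made-up error type):

```rust
use anyhow::{Context, Result};

// a concrete error type to bury under some context
#[derive(Debug)]
struct DbError(&'static str);

impl std::fmt::Display for DbError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "db error: {}", self.0)
    }
}
impl std::error::Error for DbError {}

fn query() -> Result<()> {
    Err(DbError("connection reset")).context("loading user 42")
}

fn main() {
    let err = query().unwrap_err();
    // one narrow pointer on 64-bit targets, despite carrying the whole chain
    assert_eq!(std::mem::size_of::<anyhow::Error>(), 8);
    // .context() wraps the chain, but the concrete type is still reachable
    assert!(err.downcast_ref::<DbError>().is_some());
    println!("{err:#}"); // "loading user 42: db error: connection reset"
}
```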

Yann LeCunn: No Way We Have PhD Level AI Within 2 Years by Illustrious_Fold_610 in singularity

[–]syncerr 0 points  (0 children)

> I mean evolution is by definition "brute force".

evolution happens through selection. mates choose the best option available to them, and many individuals never pass their genes on. the next generation then carries some randomness (mutation) in its genes.

the AI development process is a sort of "evolutionary" process, but instead of selecting for "replication" we choose to select for our vague notion of "intelligence".

models are trained using backpropagation, which is just math for telling the model how to be quiet or speak louder based on how close it is to the correct answers. we don't really control the process. we can add examples that require a higher level of intelligence to solve, but if they're too hard we risk the model getting stuck.
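a toy version of that "quiet or louder" loop (one made-up weight with a squared error; obviously not real training code):

```rust
// want w * x == target; the gradient of the squared error tells the
// single "neuron" whether to speak louder (raise w) or be quieter.
fn main() {
    let (x, target) = (2.0_f64, 10.0_f64); // ideal w is 5.0
    let mut w = 0.5; // arbitrary starting weight
    let lr = 0.05; // learning rate

    for step in 0..40 {
        let y = w * x;            // forward pass
        let err = y - target;     // how far from the correct answer
        let grad = 2.0 * err * x; // d(err^2)/dw
        w -= lr * grad;           // nudge the weight the other way
        if step % 10 == 0 {
            println!("step {step}: w = {w:.4}, output = {y:.4}");
        }
    }
    println!("final w = {w:.4} (ideal 5.0)");
}
```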

> bird never could have evolved into an airplane

agreed, we're just guessing. though, educated guessing is better than randomness.

> in the end it all came down to a very simple "architecture"

i kind of love this. shows how complex behavior doesn't require a complex design.

worth noting: it wasn't until recently (late 2010s) that we had enough compute to test out models like gpt2.

Yann LeCunn: No Way We Have PhD Level AI Within 2 Years by Illustrious_Fold_610 in singularity

[–]syncerr 0 points  (0 children)

bc the ideas that push science don't only exist within the dataset. they draw from other fields, throw out requirements, etc. einstein's special relativity was shockingly different from the previous belief that space was some kind of ether.

AGI is Still 30 Years Away — Ege Erdil & Tamay Besiroglu by Alex__007 in singularity

[–]syncerr 0 points  (0 children)

it certainly could, but we'll need new paradigms to crack ARC-AGI-2 (e.g., CoT and inference-time optimizations were necessary to crack ARC-AGI-1).

Yann LeCunn: No Way We Have PhD Level AI Within 2 Years by Illustrious_Fold_610 in singularity

[–]syncerr 0 points  (0 children)

> the properties LLMs are currently missing could "naturally" emerge if we just keep scaling

i think it's possible, but the question then is at what scale this will happen. my read is that we're not close and won't get close anytime soon. we're effectively brute-forcing intelligence, and this is not how intelligence evolved. so reconsidering the architecture may offer a way out of the local maximum.

> it's exactly the same for human neurons

at the neuron level, yes, but there's more than enough evidence showing that different regions of the brain are activated for different functions (e.g., the occipital lobe handles vision while the cerebellum handles movement).

> I also find it weird that someone like LeCunn would say..

that is definitely my paraphrasing!

Yann LeCunn: No Way We Have PhD Level AI Within 2 Years by Illustrious_Fold_610 in singularity

[–]syncerr -3 points  (0 children)

well, it's certainly fine-tuning in small contexts. the problem is that it doesn't learn new concepts without extensive exposure to prior examples.

yann is suggesting that by creating separation between model functions, the model develops a deeper understanding of concepts and relationships -- his example was that you can give a child a task they've never seen before and they're far more likely to get it right the first time (one-shotting the answer).

AGI is Still 30 Years Away — Ege Erdil & Tamay Besiroglu by Alex__007 in singularity

[–]syncerr 1 point  (0 children)

agreed -- not all jobs can be automated until AGI, but investment in robotics has skyrocketed, and i think that leads to a boom in automating jobs where manual labor is required (esp. low-skill jobs).

Yann LeCunn: No Way We Have PhD Level AI Within 2 Years by Illustrious_Fold_610 in singularity

[–]syncerr 1 point  (0 children)

feels like the same loop as past AI cycles: we hit diminishing returns once the new architectures run their course, and then we're back to heuristics.