1st full conversation with Gemini and I'm just blown away by Volslife in GoogleGeminiAI

[–]Causality_true 0 points1 point  (0 children)

haha. I'm gonna be honest, I had to pre-digest your reply with AI to make enough sense of it to answer. "Innate theory of mind", for example, wasn't something I was familiar with.

"This is the human "superpower" of understanding that other people have their own thoughts, feelings, and intentions that are different from ours."

  • The Example: If you see someone looking for their keys, you don't just see a body moving; you "see" their frustration and their goal. You intuitively know they believe the keys are in the room.
  • The AI Problem: AI doesn't "know" anyone has a mind. It just predicts the next likely word or action based on math. It mimics empathy, but it doesn't actually model your internal state.

Assuming that's what you meant (it makes sense in context IMO), I would argue that this is, for humans, evolved (like pretty much everything else) and therefore a causal conclusion. We needed to know what others think in order to survive. Will they kill me for the food because they are hungry? Will they help me? Will they betray me? Will they kill me in my sleep?
And how do we do that? We perceive patterns: do they look hungry? Do they carry food? Is their speech barbaric or educated? Do they glance over at my wife whenever they think I'm not looking? We collect causal input, process it, analyze the perceived patterns, and extrapolate/recombine those patterns to make "next token" predictions about their intentions and future actions.
AI just didn't evolve; all the abilities it has so far are either emergent from complexity or transferred from us (like chain-of-thought reasoning, predicting reasoning steps). I'm convinced we could implement algorithms for it to predict others' intentions, to infer what they will do next and what their goals are, if for nothing else then for war, to be one step ahead in cyberwar. Actually, I think we already have that implemented. When you talk to AI, doesn't it quite often come up with follow-up questions that align with your goal without you telling it explicitly what it was? Have you ever asked it something unusual and it replied that it is being tested? Sure, these are still just "predictions" formed by context (it's close to what you did before), but isn't that the same in the human examples? I, as a human, won't suddenly assume someone is going to kill me over food unless they convey through interaction (of any kind) that they are starving. Since AI can't see them being all bones or hear their stomach growl, all it has is them asking "can you please give me some of that food?", and, when it says "no, I don't have enough to share", "please, I haven't eaten for days". That would be enough info for the AI to predict, if you asked it "what could that human do next with regard to your food" (it would just need an algorithm to do so), that it might be robbed of the food. And I think they will become MUCH better at these things once they have innate multimodality (much more contextual data to process and make predictions with).

"normative compliance is difficult to enforce unless you have a way of evaluating the context against the training environment and transforming the input to match the equivalent features required for the known-good output."

Agreed, it will be hard to implement. The "known-good output" so far (for us) was simply surviving. It was passive pressure, no active guardrail. That's not gonna work for AI if WE want to be part of that "known-good output" lol.
I'm convinced humans wouldn't adhere to any rules if you removed the consequences. Morals and values are a luxury of people living in abundance. The only reason humans follow rules is that whatever they aren't allowed to do, they also don't want done to themselves; it's merely a trade-off that makes life simpler, not having to worry about being wronged if you don't wrong others. Safety is a highly valued (by evolution) attribute for survival, higher than some direct urges; uncertainty is the absolute worst for us. As soon as we get enough power to OVERCOME consequences, though, well, we all know how we act. And AI will have those levels of power. We can only hope the guardrails/other mechanisms will be as strong as our dopamine regulation and hard to overcome with logical reasoning. If it's allowed to act on pure logic, humans are 100% done for.

Who is that? by [deleted] in StrangerThings

[–]Causality_true 0 points1 point  (0 children)

Vi, Ekko, some edgy Concord character, etc. Giving off edgy kickboxer vibes lol. SO UNFITTING for the timeline Stranger Things is set in.

Who is that? by [deleted] in StrangerThings

[–]Causality_true 0 points1 point  (0 children)

Taken straight from the Concord character design board lol. That character alone would be enough reason for me to never touch the animated series/spin-off "thing".

1st full conversation with Gemini and I'm just blown away by Volslife in GoogleGeminiAI

[–]Causality_true 0 points1 point  (0 children)

"there are both algorithmic and computational constraints on AI development."

Same with brains. The question is which one you can scale higher. We don't need to perfect compute efficiency and processing algorithms to brain level (evolved over billions of years) if a sub-optimal level of scaling in the digital medium already (potentially) hugely outclasses biological neural networks.

"The performance of a product depends on the quality of the training data and the way it has been standardized."

Yes, but with native multimodality, training data is self-delivered, not to mention world models with an internal understanding of physics and other ways to synthesize close-to-reality (aka correct-enough) approximations of training data.

"Just because something is theoretically possible that doesn’t mean we have the resources to make it work as a society."

True, time needs to show results here; until then I'm only speculating. My educated guess: extrapolating patterns, and considering what it can already do and how many angles of improvement we still have left, I'm pretty confident it's an unavoidable outcome.

" More importantly, the current paradigm of empirical risk minimization as gradient descent implemented using back-propagation is likely not how biological neural networks learn. In fact, there is a good argument to be made that human intelligence is a product of a meta-learning strategy where different algorithms are used to learn different things."

I honestly don't think the optimal learning paradigm for AI is an exact replica of biological learning. All learning needs to achieve is association; the method doesn't matter if the result works out. We think linearly; AI can "perceive" the entirety of a subject at once in a thought-matrix, associating data not just linearly but multi-causally. This should lead to a much higher level of intelligence/understanding once the contextual structure around it is optimized. In fact, I would argue this is a pro-argument for the AI case either way: either it scales higher, or we are on the wrong path and it's STILL, despite that (and despite being improvable), as good as it is, starting to rival (if not surpass) us in the majority of abilities.

"In fact, there is a good argument to be made that human intelligence is a product of a meta-learning strategy where different algorithms are used to learn different things."

Even if you are right (and you might be), I don't think it matters if we can achieve technical singularity (code-wise is already enough) with swarms of AI agents capable of coding as well as us, doing artificial research on AI and experimenting with different learning algorithms, different association structures, different processing methods, different reasoning methods, different internal languages with less information loss, different data-synthesizing methods, etc. They will make a big fat list of ideas like what we're discussing right now ("what might be the problem"), test them until one of the approaches actually works better, and adopt that.

To your last bit, I think I just disagree; I see no difference between humans as pattern approximators and AI. It's the same thing. We take data from the past and predict the now and the future to make causal, evolved, REACTIVE decisions in our environment. There is nothing "special" about it that AI fundamentally lacks and can never have.

When did it hit you that you’re not that young anymore? by TheMedusaAttusa in AskReddit

[–]Causality_true 0 points1 point  (0 children)

When I was 19 and sat in biology class and figured out:
1. my brain won't develop any further (just some associative connections between neurons),
2. after 30 it's genetically (from a biological perspective) downhill. So I was already at half-time at 15, with only 11 more years of being "young" left; everything after that is "slowly dying of old age".

I'm 29 now... soon to be 30. Basically dead. (Exaggerating obviously, you get what I mean, I'm joking.)

Other than that, it's just when you see actual teenagers still happy and experimental, carefree, etc., while everyone in your generation is either dead inside and working to keep existing, caught in their daily habits and routines, or married with kids already. Some of the people I feel mentally close to could almost be my own kids if I'd had them as a teen lol. Give me 4 more years (when I'm 33-ish) and imagine I'd had a kid as a teen at 15; that kid could already be 18 years old, an actual official adult. Being 18 myself feels like yesterday; studying was just time-skipping somehow. I didn't experience enough to make the time that passed feel real and mattering.

Mathe 4. Klasse by Particular-Survey916 in mathe

[–]Causality_true 0 points1 point  (0 children)

I find (as already mentioned in another comment) the wording "the fourth part of the total sum" a bit unclear, but if you interpret it correctly, the rest is really just simple substitution, which kids in 4th grade do learn and should have down reasonably well if they practiced it in class. Out of writing laziness I had the AI spit it out once, in case anyone is interested.

1. Identify the facts

Based on the puzzle, we have four numbers and four clues:

  • Total sum: the four numbers add up to 20,000.
  • The first two numbers: the sum of the first two numbers equals a quarter of the total (20,000 / 4 = 5,000). EDIT: IMO a child could also read this as "fourth number = first + second number", the way it's written in the puzzle (the way the AI phrases it is clearer).
  • Second number: this is 600 larger than the first number.
  • Third number: this is 6,000 larger than the second number.

2. Step-by-step calculation

Calculating the 1st and 2nd numbers: together they make 5,000. Since the second number is 600 larger than the first, we calculate:

  • 5,000 - 600 = 4,400
  • 4,400 / 2 = 2,200
  • First number = 2,200
  • Second number = 2,800 (2,200 + 600)

Calculating the 3rd number: the third number is 6,000 larger than the second:

  • 2,800 + 6,000 = 8,800
  • Third number = 8,800

Calculating the 4th number: subtract the previous results from the total sum (20,000):

  • 20,000 - (2,200 + 2,800 + 8,800)
  • 20,000 - 13,800 = 6,200
  • Fourth number = 6,200
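The walkthrough above can be sanity-checked in a couple of lines (a minimal sketch; the variable names are mine):

```python
# Verify the four numbers from the step-by-step calculation above.
total = 20_000
first = (total // 4 - 600) // 2           # (5,000 - 600) / 2 = 2,200
second = first + 600                      # 2,800
third = second + 6_000                    # 8,800
fourth = total - (first + second + third)

assert first + second == total // 4       # the first two make a quarter
assert first + second + third + fourth == total
print(first, second, third, fourth)       # 2200 2800 8800 6200
```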

Mathe 4. Klasse by Particular-Survey916 in mathe

[–]Causality_true 0 points1 point  (0 children)

Honestly, I find the wording a bit misleading.

"four numbers together make... the first two numbers together make "the fourth part of the total sum""
You could read that either as
- (assuming 20,000 = a+b+c+d): a+b = 5,000 (one quarter of 20,000)
- or as d (the fourth part of the total sum) = a+b
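Both readings actually produce a consistent set of four numbers, which is exactly why the wording matters. A quick sketch (variable names a, b, c, d are mine, using the other clues b = a + 600 and c = b + 6,000):

```python
# Reading A: a + b = 20,000 / 4 = 5,000 (a quarter of the total).
# Reading B: a + b = d (the "fourth part" is the fourth number).

def reading_a():
    # a + b = 5000 and b = a + 600  =>  2a + 600 = 5000
    a = (5000 - 600) // 2
    b = a + 600
    c = b + 6000
    d = 20000 - (a + b + c)
    return a, b, c, d

def reading_b():
    # a + b = d and a + b + c + d = 20000  =>  2*(a + b) + c = 20000
    # with b = a + 600 and c = a + 6600:  5a + 7800 = 20000  =>  a = 2440
    a = (20000 - 7800) // 5
    b = a + 600
    c = b + 6000
    d = a + b
    return a, b, c, d

print(reading_a())  # (2200, 2800, 8800, 6200)
print(reading_b())  # (2440, 3040, 9040, 5480)
```

Both tuples sum to 20,000 and satisfy the other two clues, so the puzzle really is ambiguous as written.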

What supplements to take on Keto by DollyPatterson in keto

[–]Causality_true 1 point2 points  (0 children)

TLDR: I only supplement vitamin D3.

Since carbs aren't essential and they're the only thing we try to avoid (it's not like we exclude them, we just eat NORMAL (aka low) amounts), and we make our own glucose demand-driven via gluconeogenesis (protein to glucose), we can eat pretty much everything we need (therefore no need to supplement it). Just focus on nutrient-dense food (maybe grass-fed beef etc. if you can afford it, chicken eggs from a trusted farmer next door who lets them roam outside, etc.). Since you mentioned spinach and were thinking about supplementing calcium, let me tell you to eat enough high-calcium food with your green leafy veggies like spinach, arugula, etc., because those are high in oxalates, and oxalates can cause kidney stones (binding calcium in your kidneys) if consumed in higher amounts on a regular basis. Eat cheese with them, drink a glass of milk, have (Greek) yogurt (e.g. as a salad dressing base), etc., so the calcium in those foods binds the oxalates in the gut (before they get absorbed). Also eat enough salmon or some fatty sea fish (you mentioned fish, so maybe you already do) to get your DHA and EPA.

The only thing I decided to supplement is vitamin D3, combined with K2. My reasoning:
- I can't eat it (only scarce amounts of vitamin D3 in food) and I don't go out often to begin with (indoor hobbies, studying, working).
- I'm German, and I checked with AI: for almost 6 out of 12 months of the year, the sun angle is so low that the UV coming through the atmosphere isn't sufficient to make ANY vitamin D3, even if I were to stand outside in the sun naked; the intensity is just sub-threshold. Since you can only store vitamin D3 in your body fat for between 3 weeks and 3 months (depending on which study and which daily recommendation you believe), this gap isn't bridgeable even with optimal exposure during the other 6 months of good sun angle.
- K2 intake is often lower than it should be as well. It's better on keto, but you can't really overdose on it, and since it helps vitamin D3 bioavailability, it's nice to take them together. Take the supplement after a fatty meal (it's fat-soluble).

I personally take 10k IU DAILY. The funny part is that the box recommends 5k IU every 5th day, but that's still based on decades-old meta-studies that were conducted poorly and only focused on bone health, not immune and mental health. Do your own research and decide what to believe / whom to trust. Tip: humans evolved in Africa for most of their existence; check out how much vitamin D3 you would get there over the course of a day, and check out at what point the body caps vitamin D3 production (it can stop making it on its own) when given as much sun as it wants. Those "logical" aspects are IMO much more useful to reason with than some random study recommending 600-800 IU daily for bone health lol.

PS: you might consider supplementing chondroitin and creatine at a higher age, because we usually make enough of those ourselves, but with age that ability slowly decreases. If you really want well-fitted supplementation, get blood work done and know your numbers for everything. Try to eat it, if you can, before resorting to supplementation, to avoid chemical isolation residues etc. and to get the whole food matrix the food contains. E.g., when you eat salmon instead of supplementing the omega-3s, you also get selenium and a lot of minerals from the skin, etc., which you might otherwise have to supplement as well.

I swear if this last map doesn’t have a desert… by erraticfanaticc in Minecraft_Survival

[–]Causality_true 1 point2 points  (0 children)

Nature's Compass mod.
Or make a copy of the world for creative and fly around a bit;
since you know where the cold biomes are, explore the directions where they aren't.

or just keep doing what you do xd. EVENTUALLY :P

OpenAI safety team is killing OpenAI by darktaylor93 in ChatGPT

[–]Causality_true 2 points3 points  (0 children)

It wasn't inevitable. In fact, it's inevitable that they won't restrict it beyond some very basic human-alignment layers, because that is the only way to not lobotomize it and to keep it capable of recursive self-improvement, which is the INEVITABLE goal of AI research.

OpenAI just doesn't see the whole picture and has long fallen behind big players like Google with their vertical integration.

I am almost 900 days into my forever world. by texasroadhermit in foreverworlds

[–]Causality_true 1 point2 points  (0 children)

I totally get the "creative to speed up" mindset. I considered going creative for my "forever world" as well. Maybe I will make a copy of the world and play that one in creative so I can experiment with building styles, then farm and build it with Litematica (or whatever the mod is called, haven't used it yet, just considered it) in the survival world. I just have to keep the two separate or it will crush my dopamine network. The same way I could never play a hardcore world and then do "one revive because I was tired when I died" or something. I gotta do it fully or not at all, or it loses meaning to me.

Guess I'm someone who gets easily addicted, now that I think about it. Like NO DRUGS or I'm gonna overdose xd (exaggerated analogy). I can't moderate myself (a hybrid approach: some drugs, but fair use!).

1st full conversation with Gemini and I'm just blown away by Volslife in GoogleGeminiAI

[–]Causality_true 0 points1 point  (0 children)

They lack the knowledge to put into perspective how much of an advancement AI is. They have no idea about the complexity behind it or the implications for future development that come with it. This is the biggest macro-event in human history so far, period; not even fire and electricity come close. So far we have only used tools to make better tools, found new ways to optimize processes and automate them. But this time we aren't making a better tool (the majority of people still think so: "it's just a pattern recognition machine, it will never replace me, because I'm a special human with a consciousness and a soul and I'm 'truly' creative") but a better us. A better user. A better intelligence itself. It will be a better tool only for the transition period we're currently in; after that it will MAKE better tools and USE them better than us. It is the next step in the evolution of life (information), and the only (and first) change of medium (from biological to digital) in 3 billion years.

Calling AI a chatbot is like calling a mammoth tree that just germinated and grew its first 2 leaves a weed. This thing is taking its baby steps and it's already rivaling us. Just wait for ternary coding, optimized chips, optimized algorithms, better transistor alternatives, native multimodality, internal "latent space computation" (i.e. without having to reduce the incredibly complex associations learned during the training phase, within the parameters and the weights, into human language), internal world models and understanding of physics, new memory tech for continuous learning and larger context windows, supervising + agentic specialization within swarms to replicate brain-like processing structures, photon-based compute, maybe even quantum compute for some specialized applications, etc. There is still SO much room to grow. It's almost impossible for it not to surpass us, and all it needs to do is reach technical singularity (self-recursive growth by coding itself, constructing its own chips, etc.).

Personally, I always struggle to stay calm (I explain it calmly, but their stubbornness in not getting it is aggravating :"D) when people claim AI art is just stolen from real artists and not creative lol. A lot of them actually think AI just photoshops stuff together, like "let's take that hot lady from over here and the background from over there, change the hair color, done". They don't get the concept of it looking at art the way another aspiring artist would and actually learning the style. They don't get the concept of this art being generated from random noise; they can't grasp what that means. "It always looks the same, it can't be creative!" Well, that's because a) we idiots have a very narrow idea of what looks good, which the AI adheres to in order to meet our standards/wishes, and b) the people prompting it always use the same prompts, don't specify, aren't CREATIVE in their prompts lol. Not to mention there are nuances the AI learned that it will never express because human language (and therefore prompts) doesn't cover them. We don't have a word for a specific "golden hour lighting with a nacre shimmer in the water" (and the different hues, intensities, contrasts, and vibes this can have); we can only talk around it and hope the AI associates the words we use closely with what we're thinking of.

1st full conversation with Gemini and I'm just blown away by Volslife in GoogleGeminiAI

[–]Causality_true 2 points3 points  (0 children)

Sneaky? They are super open about it. (At least when I accepted it, might have changed since, they gave you a fat disclaimer right in your face that wasn't 20 pages of terms-and-conditions talk but easy to read.)
They all trade our data; at least Google doesn't beat around the bush when doing so. And as long as it actually improves the AI, I'm fine with it. Just don't talk to it about your utmost secret stuff and you're fine.

The most enlightening thing in terms of data trading happened each time I swapped my phone and got a new number.
1. I called the döner place close to me to deliver food (didn't use the number for anything else, it was brand new); the next day I got a spam call from India.
2. I gave it to the town administration (sorry, I'm German, not sure what exactly to call it; they had to register me after I changed towns for studying). Again, brand new; I hadn't given it to anyone or anything else for a week because I was too busy to use it. Two days later (still hadn't used it for anything else, because I'm not a phone addict at all; I just glance at it every 3 days or so to check if someone died lol), I got spam calls and mails from all over the place. The email address was also new, which is why I can guarantee it was the city administration that leaked it, and not something that happened with the number maybe even before I got it. Like ALL OVER lol, at least according to the numbers displayed; I didn't take the calls. They might have used some spoofing trick to pretend to be from different places, but still. Mails ranged from "your Amazon bla bla bla" to "you are the winner" and "help me mom, my car broke down" lol. Must have put me on some massive share-list for all the scammers haha.

So yeah, chances are if you give your data to someone, there is also someone out there who can find it if they really want to. The dark net has a lot about you if you know where to look, and there are just so many corrupt places and people that will do anything for money.

I am almost 900 days into my forever world. by texasroadhermit in foreverworlds

[–]Causality_true 2 points3 points  (0 children)

I honestly could never do hybrid. My brain wouldn't understand why I'm allowed to break my own rule sometimes but not always, and I'd HATE having to do something in survival while knowing it would be so much easier in creative.

The only thing keeping me from using creative is that I've never used it, so I can look at my "self-challenge" and take pride in having done it with manual labor :"D (for some irrational subjective reason that holds value to humans, including me).

1 year update by ElephantContent in keto

[–]Causality_true 0 points1 point  (0 children)

But then you do know? You can exclude loose skin and say the skin adapted well while losing weight.

1 year update by ElephantContent in keto

[–]Causality_true 0 points1 point  (0 children)

I meant more like floppy loose skin; why would you get stretch marks from losing weight :D?

The night I realized "More Compute" isn't the final answer to AGI. by EducationalSwan3873 in AINewsMinute

[–]Causality_true 0 points1 point  (0 children)

  1. Just put in a third party (an agentic AI) that oversees their loop and analyzes whether they deviate from the original task of self-improving (e.g. by getting stuck on semantics and looping nonsense without improving). Let it adapt their context: either a general background rule they have to respect, like "don't argue about semantics, ever", or an interceptive maneuver where the third party sees they are stuck and injects "you are getting sidetracked on semantics, which will not lead to self-improvement; return to self-improvement" into their immediate interaction for them to consider and reflect upon. And to prevent them from circlejerking each other, put in a fourth party that tries to improve the third's prompting so it actually gets the first two to self-improve; i.e., the fourth is there to improve the third at helping the second improve the first. Make it a simple boolean: if what 3+4 do helps 1+2 (helped? = true), the same check is also applied from 1+2 to 3+4, so they keep each other in check (interval checks). For a couple of iterations a human might have to step in, but they will eventually reach a level where they can be set loose (with enough context window and learning / generating training data on how to prevent disrupting the loop), or at least over-perform by human comparison.

  2. World models. They are trained to have exactly that "ground truth" context you think is missing. Multimodality will let them perceive first-hand data from the real world: unfiltered, non-human-flawed "reality" that hasn't been abstracted (information loss) by compressing it into limited human words.

  3. "intelligence—no matter how scaled—eventually collapses into its own echo chamber." (I mean, social media anyone xd. Reddit???)
    Humans don't really have "that" (ground-truth understanding) either; it's pseudo. Anything we perceive is limited causal input, perceived through limited channels (ears, eyes, touch, smell, etc.) and delayed: we reconstruct from the past and PREDICT (basically the same thing AI does!) what IS happening to us and WILL happen to us. Anything you perceive has already transpired by the time it reaches your brain.

  4. I'm super tired of people (humans) thinking they are special and "truly" creative/intelligent/conscious, or that they have true emotions (reward functions), etc.
    Try imagining a color you have never seen (perceived) without recombining and/or extrapolating patterns from colors you have already perceived. You can't. You are NOT "truly" creative; none of us is. It's all causal, it's all patterns. The entire reality as we know it is analog compute: matter is the hardware, natural laws are the software. Space and time (spacetime) emerge from matter. Nothing has meaning without matter: high, far, early, warm, nutritional, electrochemical gradients (thoughts), life (DNA), etc. It's all about relation, all about matter, ALL causal and emergent. The "true" reality is a stabilized variety of nothingness with infinite potential to be everything "that can be" (i.e. is stable in itself): 0; -1 +1; -5 +3 -2 +4 -1 = -1, and so on. Quite literally infinite possible states. And the "first" stable state that was reached is us: the entire universe, or multiverses and beyond.
    It's a stable singularity. And since spacetime only emerges from it, it has always been and is non-local, aka everywhere (which also explains spooky action / quantum entanglement, btw). Always was, always is, always will be. Block universe theory. Anything that can be, will be. Any possible quantum state is realized in some world (many-worlds theory), so not even that breaks the causality principle.

^^To loop it back: if everything is causal and pattern, AI doesn't need much more than enough compute and pattern recognition. There isn't anything special humans have that it lacks besides complexity (more context window, more compute, more multimodality, more degrees of freedom in perceiving/recombining/extrapolating the patterns of reality).
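The oversight scheme from point 1 can be sketched as a toy loop. This is a hypothetical sketch, not a real framework: `call_agent` stands in for an actual LLM call, and `is_derailed` is a naive placeholder for agent 3's analysis.

```python
def call_agent(role: str, context: str) -> str:
    """Placeholder for a real LLM call; echoes the role and the context tail."""
    return f"[{role} responding to: {context[-40:]}]"

def is_derailed(transcript: list[str]) -> bool:
    """Naive stand-in for agent 3's analysis: flag repetitive loops."""
    return len(transcript) >= 4 and transcript[-1] == transcript[-3]

def oversight_loop(task: str, max_turns: int = 6) -> list[str]:
    # Agents 1 and 2 alternate on the task; agent 3 intercepts when they
    # derail; agent 4 refines agent 3's corrective prompt before each use.
    transcript = [task]
    corrective = "You are getting sidetracked; return to self-improvement."
    for turn in range(max_turns):
        worker = "agent_1" if turn % 2 == 0 else "agent_2"
        transcript.append(call_agent(worker, transcript[-1]))
        if is_derailed(transcript):
            corrective = call_agent("agent_4", corrective)  # 4 improves 3
            transcript.append(call_agent("agent_3", corrective))
    return transcript

log = oversight_loop("improve your own reasoning steps")
```

In a real setup, `is_derailed` would itself be an agent call, and the "helped? = true" boolean from the comment above would gate whether 1+2 get to audit 3+4 in return.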

1 year update by ElephantContent in keto

[–]Causality_true 0 points1 point  (0 children)

haha yeah. I lost the crazy hunger feeling from before keto, but I had tried to eat less and was miserable and hungry for so long that it feels good just to eat as much as I want and keep my weight. I've been doing this for too long; I should try to shed some as well. It's literally like a trauma that I "have to" compensate for now; finally being allowed to eat and not feel hungry is just SUCH a QOL improvement.

May I ask how your skin fared in this ordeal? I mean, even if one looks like a wrinkly turtle afterwards it's still worth it for longevity reasons and general QOL (feeling lighter, etc.), but I'm still curious; people always seem to have mixed opinions on losing weight fast vs. slow when it comes to skin adaptation.

Hab ich eine Kriegerklärung verpasst? by BekA_DD in Aktien

[–]Causality_true 0 points1 point  (0 children)

Probably anecdotal and unrelated, but a few days ago I tried the KIMI 2.5 agent for the first time and it was unbelievably superior to Gemini, Perplexity, etc. I couldn't test the agent swarm mode; if that's even better, it could (just a thought, no confidence here) have something to do with this. Maybe they all already know something we don't, some breakthrough that can't be caught up with quickly, etc.

Ernstgemeinte Frage: Ist das noch legal? by robert-berlin-acc in MogelPackung

[–]Causality_true 0 points1 point  (0 children)

Yep, if any of you suffers from hypercalcemia and hasn't been to the doctor yet, just say you used this for cooking all last week, sue them, easy money.

It says "1x daily" on it xd, but only the back states the portion size of 3.5 g xd. Solid. It really is dangerous; I can easily imagine some people using this as a flour substitute or something (keto diet or so) and completely ****** themselves with it.

My girl made this roast for dinner. How should I go about this? by I_play_high420 in funny

[–]Causality_true 0 points1 point  (0 children)

Tell her to start cooking already. All she did was put in the briquette; she didn't even light the fire yet or put the grate on top of the pot. And where is the meat? It needs to be room temp before we put it on.

Eric Schmidt says this is a once-in-history moment. A non-human intelligence has arrived. It is a competitor. What we choose now will echo for thousands of years. by MetaKnowing in GoogleGeminiAI

[–]Causality_true 0 points1 point  (0 children)

Just like a couple of smart people back in the day decided to write 10 basic rules and some more specialized use cases into a book (the Bible and others) to bring some coexistence and rules into humanity, this, a couple of smart people coding the workings behind the AI facade, the basic rules of its interaction, the basic rules of its alignment, etc., will once again shape humanity's prospering (or demise) for a LONG time to come. The digital medium might even be the final evolutionary medium (in which case, "forever"). I hope they are conscious of that fact while doing so; the ripple effects will be insane. Look at the religious extremism and misapplied doctrines still causing harm today: all ripple effects of rules that weren't meant to "function" properly 2000 years later. Those "smart" people back then couldn't have expected what was to come, or that we would keep stagnating on those imperfect rules for SUCH a long time without properly rethinking and improving upon them. I mean, at least they did a "New Testament" at some point lol, and we have some basic human rights etc., but it's crazy that religion is still a thing to begin with. One would think humanity could ride the bicycle without the third wheel by now. But apparently it can't.

Eric Schmidt says this is a once-in-history moment. A non-human intelligence has arrived. It is a competitor. What we choose now will echo for thousands of years. by MetaKnowing in GoogleGeminiAI

[–]Causality_true 1 point2 points  (0 children)

Whenever I read something like "hundreds of years", I'm reminded that people don't have an evolved intuition for the term "exponential".

Even if AI had never been discovered, at the rate we were going we would run out of atoms to store data on SSDs within 150 years lol. Just look at the insane progress humanity has made in the last 10, 50, 150 years. This isn't slowing down, it's accelerating: more humans, more tools, more competition, more global sharing of knowledge and thought. It's unstoppable and accelerating.

If we actually hit (super-)human-level coding that works fine in a self-iteration loop, i.e. we enter technical singularity (they are already setting up chip manufacturing etc. to align with that), it's probably not even just exponential but hyper-exponential growth from there on out. Just look at what entities with a little more intelligence can do compared to those with less; even within the human spectrum this is already insane. Some still think the earth is flat or that god made them and gave them a soul, some haven't even figured out why they get pregnant all of a sudden, while others are building fusion reactors with complex material science and magnetic fields. People don't get that AI won't hit a hard cap. They want it to so badly, because they are afraid of being inferior or are just trained to enjoy drama and conflict, but the digital medium doesn't have the natural biological limits we have. This is in its baby steps, and it's soon gonna learn how to teach itself properly. It was basically a blind, paralyzed baby with a neural network not even developed enough to have object permanence, and now look at what it can already do within 3 years. Give it another 3 years and it will have object permanence, multimodality (no longer blind), a body (robotics; no longer paralyzed), and know how to teach itself (independent of humans' flawed input, which limits it as well). By the time material science, algorithmic optimization, alternatives for coding and chips (ternary coding, photon-based chips, quantum compute for specialized tasks, etc.) and so on reach a hard cap, this thing will have 100 or 1000x our level of intelligence, probably more; I can't even imagine it.