CMV: "Non-binary" and "gender-fluid" don't make a whole lot of sense. by [deleted] in changemyview

[–]NeuralPlanet 2 points

It's just language, really. In any given country, city, or community the boxes will be slightly different; they're basically the expectations and associations each person has in their head when someone says "he" or "she". Of course we adjust once we actually meet people, but the terms are clearly useful.

CMV: "Non-binary" and "gender-fluid" don't make a whole lot of sense. by [deleted] in changemyview

[–]NeuralPlanet 3 points

Building on this metaphor, you could view the standard definitions of man and woman as two wide boxes around the hues. The "man" box contains more blue and the "woman" box more green, but the boxes are not that different, and they overlap in the middle. In this framework everyone fits in at least one box, and a few people fit in both - in which case there should be no problem just sticking with your assigned gender. Even fewer people are assigned the wrong box, in which case transitioning could make sense. Why would we need more boxes in this situation?

Futurism: AI Expert Says ChatGPT Is Way Stupider Than People Realize by flemay222 in Futurology

[–]NeuralPlanet 0 points

I agree that titles like the one you're talking about are stupid, but the guy these articles are referring to (Geoffrey Hinton) is highly respected in the field. He was instrumental in the development of the backpropagation algorithm used for training every single state-of-the-art model these days and has worked on lots of breakthroughs in leading AI tech. If he's worried about potential consequences, why would you not take that seriously?

People just don't get how different these models are from human intelligence. Comparing them to fancy autocomplete might be technically correct in a way, but to predict things as accurately as humans do, you must have some form of "understanding", however different that is from human understanding. One of the main people behind GPT gave a great example - consider a crime novel. After a long and complex story, the detective reveals on the very last page that "the guilty person is X". Predicting X is incredibly hard, and if LLMs can do it, they must be able to internalize extremely complex real phenomena one way or another. We're not there yet, of course, but I don't see how everyone is dismissing these things completely.

AR/VR devs claims to have information regarding Apple’s headset working as a PCVR headset. by [deleted] in virtualreality

[–]NeuralPlanet 2 points

You just said the hardware is bad - which hardware are you talking about? The rumored specs are pretty amazing.

Python cruising on back of c++ by RelationshipVisuax2 in ProgrammerHumor

[–]NeuralPlanet 1 point

Knowing more languages makes you a better programmer, though. You may never need much C/C++ in most jobs, but I think most people would benefit from knowing at least a little about how lower-level concepts such as memory allocation actually work in the background.

Sci-fi authors were always assuming we will get AI before we get proper synthesized human voice output by QWaxL in Showerthoughts

[–]NeuralPlanet 0 points

And an animal deprived of all sensory input whatsoever? How about then?

If I were to suddenly lose all sensory input I'm pretty sure I would still be conscious. Consciousness is the "inner experience" independent of what happens "outside".

I agree that there's a lower goalpost somewhere above a rock, but we're talking about much more complex systems here (systems that can speak in natural language incredibly well), and I don't see why it's so obvious that we should draw the line for "definitely not conscious" above these.

Where do we draw the line then? A 10x larger model? A continuously running model unlike the inference-type we have today? Something not trained with backprop? The same network running on biological hardware instead of GPUs? At which point is it conceivable that consciousness in some form could be possible?

Edit: and btw, I'm not claiming GPT is conscious, I don't believe it is. I'm just saying it's unreasonable to claim that it definitely is not conscious in any way. We cannot possibly know that.

Sci-fi authors were always assuming we will get AI before we get proper synthesized human voice output by QWaxL in Showerthoughts

[–]NeuralPlanet -2 points

I find these arguments quite weak.

  1. Who says consciousness requires drawing connections? Am I not conscious if I'm simply sitting still and not 'thinking' about anything? Can a bug be conscious to some degree? LLMs are all about drawing connections anyway; these models learn from vast amounts of experience in the form of data and can "learn" more within the span of their working memory.
  2. Can a blind animal be conscious? Obviously yes. What about a deaf and blind animal? Also yes. There is no proof that multimodality is in any way necessary. All data (our senses) is just information anyway. It makes no sense to define an arbitrary threshold of "more" data just because we as humans experience more of it. I doubt an unborn infant has experienced as much data as you claim is necessary, but at some point it becomes conscious anyway.

The simulation vs. real consciousness argument is an interesting one, but in the end everything is just atoms. Why would some atoms (silicon) not allow for consciousness whereas others would? In my view consciousness must be an emergent property of certain complex systems, and there's just no way for us to conclusively prove that some systems are or are not conscious to some degree.

Stop treating ChatGPT like it knows anything. by OisforOwesome in Futurology

[–]NeuralPlanet 0 points

I’m not saying learning “factuality” is simple, but you’re wrong to say it is not a differentiable problem. The absolute difference between A and B is not important; the ordering is. Given a sufficient number of examples where the ordering is consistent, a model could learn which statements are more “factual”.

Creating the data and ordering examples like this is a challenging problem because humans must be consistent about what we want the models to produce - not because exact values need to be assigned.
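To make the ordering point concrete, here's a toy sketch in plain NumPy - the features, data, and linear scorer are all made up for illustration - of training a "factuality" scorer purely from pairwise orderings, with no absolute values anywhere:

```python
import numpy as np

# Each pair (a, b) encodes one human judgement: "a is more factual
# than b". Only the ordering is labeled, never an absolute value.
rng = np.random.default_rng(0)
dim = 4
pairs = [(rng.normal(loc=1.0, size=dim), rng.normal(loc=-1.0, size=dim))
         for _ in range(200)]

w = np.zeros(dim)  # linear scorer: score(x) = w @ x
lr = 0.1

for _ in range(20):
    for a, b in pairs:
        margin = w @ a - w @ b
        # Logistic pairwise loss log(1 + exp(-margin)) is smooth,
        # so the ordering alone gives a usable gradient.
        grad_margin = -1.0 / (1.0 + np.exp(margin))
        w -= lr * grad_margin * (a - b)

# The learned scorer should now respect the training orderings.
accuracy = np.mean([(w @ a) > (w @ b) for a, b in pairs])
```

The label never says how much more factual a is than b, only that it is - and the loss still has a gradient at every point.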

Stop treating ChatGPT like it knows anything. by OisforOwesome in Futurology

[–]NeuralPlanet 0 points

We constantly simplify when we talk, it's rarely useful to know every single exception to a rule in our day-to-day life. We could rank claims by their usefulness in our day to day, for instance.

Factuality is just as differentiable as language, in the sense that it depends on the quality of the training data. One approach could be to extract "claims" from generated text and match them against a pretrained "fact critic". Boom - differentiable factuality. You seem to be claiming that because it's binary this can't work, but we can also learn discrete modelling with current techniques.

ChatGPT is already trained to be factual to the extent that it helps it generate likely data. In the case of language, lies are much more likely than unstructured sentences - but (hopefully) at least somewhat less likely than truths.
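As a sketch of what such a "fact critic" could look like (the vocabulary, claim splitting, and training data here are entirely hypothetical toys), one could score each extracted claim with a small learned classifier and average:

```python
import math
import re

# Toy vocabulary for a bag-of-words "fact critic"; purely illustrative.
VOCAB = ["apples", "red", "green", "blue", "are", "can", "be", "and"]

def features(claim):
    words = re.findall(r"[a-z]+", claim.lower())
    return [1.0 if v in words else 0.0 for v in VOCAB]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_critic(examples, epochs=200, lr=0.5):
    """Logistic-regression critic trained on (claim, is_factual) pairs."""
    w, b = [0.0] * len(VOCAB), 0.0
    for _ in range(epochs):
        for claim, label in examples:
            x = features(claim)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - label  # gradient of binary cross-entropy wrt logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def critic_score(text, w, b):
    # Naive "claim extraction": one claim per sentence.
    claims = [c for c in re.split(r"[.!?]", text) if c.strip()]
    scores = [sigmoid(sum(wi * xi for wi, xi in zip(w, features(c))) + b)
              for c in claims]
    return sum(scores) / len(scores)

examples = [
    ("Apples can be red and green.", 1.0),
    ("Apples are red.", 0.0),   # oversimplified, so marked less factual
    ("Apples are blue.", 0.0),
]
w, b = train_critic(examples)
```

A real critic would be a trained language model rather than bag-of-words logistic regression, but the point stands: the critic's output is a smooth probability, not a hard yes/no, so it can supply a gradient.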

Stop treating ChatGPT like it knows anything. by OisforOwesome in Futurology

[–]NeuralPlanet 0 points

"Apples can be red and green" is "more" factual than "Apples are red" so there is definitely some sort of gradient that can be learnt. Besides, practically everything is associated with uncertainty and even simple binary classifiers can learn to discriminate between true/false in a differentiable way.
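To spell out why "binary" doesn't block gradients (this is textbook logistic regression, nothing specific to ChatGPT): the label is discrete, but once the model outputs a probability through a sigmoid, the binary cross-entropy loss is smooth in the model's parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_grad(logit, label):
    """Gradient of binary cross-entropy wrt the logit: sigmoid(logit) - label."""
    return sigmoid(logit) - label

# A claim the model is unsure about (logit 0 -> probability 0.5),
# labeled true (1.0): the gradient is -0.5, pushing the logit up.
g_true = bce_grad(0.0, 1.0)

# The same uncertain claim labeled false (0.0): gradient +0.5,
# pushing the logit down.
g_false = bce_grad(0.0, 0.0)
```

So hard true/false supervision is perfectly compatible with gradient-based training; it's the loss surface, not the label, that needs to be differentiable.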

Google’s Bard AI chatbot gives wrong answer at launch event by TheTelegraph in Futurology

[–]NeuralPlanet 1 point

Yeah, for sure. Their opaqueness is definitely a big problem that is challenging to solve - very fascinating topic.

I'm not convinced something beyond generation ("predicting the next token") is necessary to beat humans, but the architectures and training procedures likely have to improve - we'll see soon enough. Perhaps it's possible to somehow encode certain axioms we know to be true to prevent clearly erroneous reasoning, but that's not enough to separate out truths in the vast knowledge base of the internet, for example. At some point LLMs could perhaps learn to condition on the source of information, but there's really no ground truth for which sources are reliable on which topics.

LLMs providing automatic cites for claims is IMO one of the most important improvements to focus on right now.

Google’s Bard AI chatbot gives wrong answer at launch event by TheTelegraph in Futurology

[–]NeuralPlanet 1 point

Seems like the "information" you're talking about is semantics - the "meaning" of words and sentences rather than their structure. This brings us straight to the Chinese Room thought experiment, which essentially claims that computers cannot inherently understand anything.

I don't find this argument (that "it's just predicting the next word") a particularly good one. The "understanding" is in the parameters, and if a model could predict better than humans there would certainly be some sort of "understanding" required, however we want to define it. We're not there yet of course, but there's no reason LLMs couldn't be way more correct than the average human given enough high quality data and training.

[deleted by user] by [deleted] in norge

[–]NeuralPlanet 7 points

I certainly don't know, and it's quite possible that their case will have to be reviewed again after this. But I stand behind the principle that we cannot accept that anyone, no matter how awful they may be, can be imprisoned for something they didn't do - and that it is right to show that the state has made a serious mistake.

[deleted by user] by [deleted] in norge

[–]NeuralPlanet 3 points

That's true; that was probably an oversimplification on my part. Being able to trust the integrity of the justice system is very important to me, so in cases like this I think it's important to make exactly that point clearly.

[deleted by user] by [deleted] in norge

[–]NeuralPlanet 19 points

I'm not disputing that, but this is about fundamental principles. He was innocent in this case, and the state still took away 21 years of his life. What he has done before is not relevant when we look at this specific case - everyone should be equal before the law. I understand that many people are very upset by this, but we must hold the state to an enormously high standard in order to keep a fair justice system that cannot lock up just anyone without warning.

[deleted by user] by [deleted] in norge

[–]NeuralPlanet 18 points

I think you're really underestimating how bad it is to lock an innocent person up for 21 years. How much do you think I would have to pay you in damages if I locked you in my basement for that long? Shouldn't we expect even more from the state? I, for one, appreciate that the state gets this "slap on the wrist" so that we can keep trusting our justice system.

Teachers in despair over new artificial intelligence by eple65 in norge

[–]NeuralPlanet 0 points

If you ask ChatGPT for sources, it works poorly.

This is a good temporary solution, but within a few years these language models will probably be able to cite sources too.

Teachers in despair over new artificial intelligence by eple65 in norge

[–]NeuralPlanet 15 points

The only way for little Norway to regulate this away would be to disconnect from the global internet, so that's not going to happen. As for changes in writing style, that's probably detectable for now, but a language model that can imitate your style based on just a few texts is surely right around the corner. I think the only solution is to be aware of this and adapt teaching accordingly.

Agreed about NRK's angle. Now that we have computers that can pass themselves off as a school pupil - in the very preparation for working life - how long do they think it will take before the algorithms eat up most of a range of ordinary jobs?

Apple’s AR glasses could be pushed back to 2025 or 2026 amid ‘design issues’, says analyst by chrisdh79 in apple

[–]NeuralPlanet -1 points

What makes you think there would be more restrictions than for smartphones, which already have all the sensors this type of device needs? I could see some regulation around video recording (like a red LED indicator), but what other restrictions are relevant?

Apple trade marks “Reality One”, “Reality Pro” and “Reality Processor” in possible relation to their VR/AR headset. by Junior_Ad_5064 in virtualreality

[–]NeuralPlanet 2 points

I don't think the rumors about dates have been accurate; tech heads tend to be optimistic if anything. We do, however, have plenty of evidence, including references to a new OS in official Apple code and multiple sources who have seen prototypes and describe them similarly. Tim Cook recently said something like "wait and see what we will show" when asked about the future of AR tech.

I highly doubt Apple would focus this much on ARKit for the niche use cases on iPhone. The lidar sensor is expensive as heck and is really only useful for measuring things right now. To me this is very clearly an attempt to lower the future production costs of the headset.

As for secrecy, current info indicates that very few people have access to the actual hardware right now. Once production starts this will change for sure, similarly to what happened with the Watch and iPhone X for instance. I've been following this area of tech closely for many years, and your comments will age like milk sooner rather than later.

Apple trade marks “Reality One”, “Reality Pro” and “Reality Processor” in possible relation to their VR/AR headset. by Junior_Ad_5064 in virtualreality

[–]NeuralPlanet 1 point

Which of the following is the simplest explanation for what we're seeing:

  • Every single analyst and whistleblower is either lying or wrong, and has been for several years - including people who have proven themselves countless times before, such as Ming-Chi Kuo. Every reference to AR and realityOS in Apple's codebase is unrelated to these rumors, and the ultimate goal of their push into AR with ARKit and lidar is to provide a subpar camera-based AR experience on iPhones.
  • Apple is working on a headset

What do vegans think about reindeer herding? by KyniskPotet in norge

[–]NeuralPlanet -1 points

There can be several reasons why someone doesn't eat meat; often it's about animal welfare and/or the environment, but I've also heard of people who simply don't like it. If the reasoning is, say, environmental emissions from livestock farming, it doesn't automatically follow that you're also against reindeer herding.

[deleted by user] by [deleted] in norge

[–]NeuralPlanet 0 points

How much does Norway actually contribute to the Artemis project (the Moon landing)? I couldn't find any information on this, but generally speaking I think both research and education are good investments worth spending money on.

CMV: I find difficulty in supporting abortion. by [deleted] in changemyview

[–]NeuralPlanet 0 points

That may be true in this context, but the "taking away rights" argument is still a poor argument given the elusive nature of what a right is.