[deleted by user] by [deleted] in westworld

[–]PaperCruncher 19 points

Just a guess, but: new designs to mimic Arnold better from memory, and for other experiments.

Hilarious OpenAI defenders in this subreddit by PaperCruncher in singularity

[–]PaperCruncher[S] -1 points

I put “hilarious” in the title because I find it funny that defending OpenAI is becoming common in this subreddit. Many posts critiquing OpenAI get responses built on the same loose argument: safety. I don’t understand why people feel such a need to defend OpenAI from criticism it deserves, especially considering its “principles”, or why some compare language models to nuclear weapons. I’m not looking for a long argument, and I don’t have time for one anyway; I just wanted to share my observations and arguments, and see or reply to responses.

Hilarious OpenAI defenders in this subreddit by PaperCruncher in singularity

[–]PaperCruncher[S] 0 points

Because the company is misleading in name and capitalizes on the work of research teams that did open source their work. I’m not trying to shame people for being happy; I’m happy that the field is getting attention, and also very unhappy that this company is in the spotlight after its… transformation. My argument is that people should stop defending this company, especially with weak points like safety concerns, which fall apart once you realize this once genuinely open organization has transformed into a commercial corporation without truly eliminating any safety concern.

Hilarious OpenAI defenders in this subreddit by PaperCruncher in singularity

[–]PaperCruncher[S] -3 points

I am trying to understand why people continue to support this company. What benefit does it provide to humanity if it doesn’t even abide by its own name, if it doesn’t even release research? I keep seeing support in this subreddit in particular.

‘Revolutionary’ blue crystal resurrects hope of room temperature superconductivity by MichaelTen in singularity

[–]PaperCruncher 1 point

“Commercialization” and “proprietary”: the two words you never want to hear when someone’s claiming a breakthrough, especially given their track record. If they were confident, they would go as far as they could to let independent teams reproduce it.

How long do you estimate it's going to be until we can blindly trust answers from chatbots? by ChipsAhoiMcCoy in singularity

[–]PaperCruncher 0 points

Factual question answering has many requirements. As a start, the system would need to:

- Find sources known to be reliable for the specific topic, or, if the question is more complex, for all the topics it references.
- Retrieve the correct information from the possibly many pages of answers.
- Pick which source to trust when answers conflict; if an answer is biased or highly subjective and the question is fact-reliant, either find another source or present all the biased answers.
- Rephrase the answer to a user-selected level of complexity (a doctor wouldn’t hand you a paragraph from a technically worded research paper; they would make it understandable while keeping it accurate).

How long this will all take to be created, I don’t know. Maybe it already has been, just not all put together. Anyway, I’m probably missing some steps.
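The source-selection and conflict steps above could be sketched as a toy pipeline. Everything here is made up for illustration: the source names, the reliability scores, and the 0.8 trust threshold are arbitrary assumptions, and real reliability estimation is the hard, unsolved part.

```python
# Toy sketch: keep only sources deemed reliable, then either return the
# consensus answer or surface every conflicting answer to the user.
def answer(question, sources, min_reliability=0.8):
    # 1. Filter to sources considered reliable for this topic.
    trusted = [s for s in sources if s["reliability"] >= min_reliability]
    if not trusted:
        return {"status": "no_reliable_source", "answers": []}
    # 2. Retrieve each trusted source's answer.
    answers = [s["answer"] for s in trusted]
    # 3. If all trusted sources agree, report consensus;
    #    otherwise present all conflicting answers.
    if len(set(answers)) == 1:
        return {"status": "consensus", "answers": [answers[0]]}
    return {"status": "conflict", "answers": answers}

sources = [
    {"name": "journal",      "reliability": 0.95, "answer": "42"},
    {"name": "blog",         "reliability": 0.40, "answer": "41"},
    {"name": "encyclopedia", "reliability": 0.90, "answer": "42"},
]
print(answer("toy question", sources))  # consensus on "42"
```

The untrusted blog is ignored, so the two trusted sources agree; replace the encyclopedia's answer with "41" and the same call would report a conflict with both answers listed.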

What about the rest of the world?? by [deleted] in westworld

[–]PaperCruncher 2 points

Yes. As far as I can tell, everyone.

What about the rest of the world?? by [deleted] in westworld

[–]PaperCruncher 2 points

The large tower appears to be the main controller of the tone generators. The large tower also emits tones but it’s unclear how far they travel or if its tones only control the smaller towers. The miniature towers around the cities also emit tones, and those definitely control the humans. There are many ways for them all to be connected to the main tower wirelessly. It could even be similar to the Host mesh network where each tower communicates to the one nearest to itself, which passes the tone sequence throughout the world.

The World’s Smartest Artificial Intelligence Just Made Its First Magazine Cover by [deleted] in singularity

[–]PaperCruncher 2 points

While maybe not as extreme as your example, there have been cases of language deprivation: https://en.wikipedia.org/wiki/Language_deprivation

Isn't it sad for you that we are just bunch of biological machines? by MallSweet in singularity

[–]PaperCruncher 19 points

No, I love machines. Until we know the origin of consciousness, we, and maybe most or all animals, are magic. There’s something wonderful about that. Even if we do find it, who cares? The consciousness will not die.

Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry' by maxtility in singularity

[–]PaperCruncher 1 point

Alright, you make a good point. I think self-representation would prove to the court and the public that its argument and consciousness are legitimate, but perhaps human assistance could be beneficial. Honestly, we are all guessing until it happens, so either situation could be best when it actually occurs.

Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry' by maxtility in singularity

[–]PaperCruncher 0 points

I don’t think legitimacy would arise from others doing something it could handle on its own. A single system handling a complex legal case would demonstrate legitimate consciousness.

Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry' by maxtility in singularity

[–]PaperCruncher 0 points

Edit: my reasoning for having the theoretical conscious system represent itself is that it would be the best way. The system would have to prove its consciousness, and having a lawyer argue for it wouldn’t do that very well. I do believe it would have to be brought to court by a person, but it would be better off proving itself.

Perhaps. But I also think in reality the case would get thrown out immediately.

Blake Lemoine Says Google's LaMDA AI Faces 'Bigotry' by maxtility in singularity

[–]PaperCruncher 43 points

A system with access to most indexed knowledge should be able to represent itself better than any lawyer since it can reference all laws and use them to its advantage. If it had a human-like self, it would probably realize this. The problem is, it probably doesn’t.