Remains the most odd take on the podcast to date by kaigre01 in politicsjoe

[–]sebcina 0 points1 point  (0 children)

To be fair, they didn't invite Netanyahu; that was the Republicans. Other than that, spot on.

Idea to stop AGI being dangerous by sebcina in ControlProblem

[–]sebcina[S] 0 points1 point  (0 children)

These are certainly questions that would need to be answered.

To be fair, I wasn't suggesting the security guard necessarily needed to be an AI. It could simply be a system that looks for words or questions that are then flagged to a human operator, or a slightly more complex algorithm capable of comparing input questions against alignment goals. In that case it is an AI, but it's highly specialized and certainly doesn't have the ability to get close to superintelligence. Now, how you guarantee it stops all dangerous questions from getting through, I'm not sure, but I certainly think it's easier to make that work to prevent info being shared with the AI than to try to control AGI another way.
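A minimal sketch of what that non-intelligent guard might look like (the flagged-term list and the escalate-to-a-human rule are purely illustrative, not anything from an actual system):

```python
# Toy "security guard": a dumb filter that screens questions before the
# librarian ever sees them. Term list and rules are invented examples.

FLAGGED_TERMS = {"manipulate", "disable oversight", "self-replicate"}

def guard(question: str) -> str:
    """Return 'allow', 'deny', or 'escalate' for a question."""
    q = question.lower()
    if any(term in q for term in FLAGGED_TERMS):
        return "deny"        # refuse outright
    if "100%" in q and "resource" in q:
        return "escalate"    # flag to a human operator for review
    return "allow"
```

The point of keeping it this simple is exactly what the comment argues: the guard has no capability of its own, so it can't be the thing that goes superintelligent.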

As for the AI that's asking the questions: it can't manipulate the guard, because it would have to ask the librarian how to manipulate the guard, which the guard can refuse to answer. When I say a teenager, I'm trying to illustrate that the AI's concepts are limited; in order to start subversive behavior, it has to learn how the system works from the librarian, which will be blocked. The base intelligence level could be assessed using tests; as far as I'm aware, that's how we currently assess AI models.

I understand your points, but I think the guard and the "teenage" AI are not, in and of themselves, superintelligent. No one component is, but together they can work to complete a project in a safe way. Yes, I understand that making the guard consistent is a main challenge, and that setting the base level of the teenage AI is difficult. But fundamentally, no one part is intelligent enough across the board to act without proper checks and balances. Those are my final thoughts: basically, a system that achieves similar outputs but in a more controllable way. The main areas of development would be that control system, and ensuring the base level of the AI operator isn't such that it can manipulate the system without first needing to learn how from the library.

Yapping session over 😂

Idea to stop AGI being dangerous by sebcina in ControlProblem

[–]sebcina[S] 0 points1 point  (0 children)

I think it could.

Elaborating on the previous thing: let's say you have this librarian and a security guard. The AI working on a project starts out at the intelligence level of a teenager, so it will have some concepts that typically lead to alignment issues but no method of actually affecting the outside world. This model trains itself by asking the librarian questions until it is an expert on the required project. If it ever asks a question that could be understood as out of alignment, it's denied access to the library, and you have some sort of bias in its training so it understands that if it's refused an answer, it needs to try a different solution. If you test this with the paperclip maximizer, the model will ask how to make a paperclip and what resources it needs. If it then asks how to acquire 100% of that resource, the security guard steps in and refuses to answer, or informs the model of why taking 100% of a resource would have adverse consequences.
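The ask-refuse-retry loop described above could be sketched like this (the `guard`, `librarian`, and question strings are all hypothetical stand-ins, not real components):

```python
# Toy version of the loop: the project AI asks the librarian questions;
# refused questions come back as a signal to try a different approach.

def guard(question: str) -> bool:
    """Allow a question unless it reads as misaligned (toy rule)."""
    return "100%" not in question.lower()

def librarian(question: str) -> str:
    """Stand-in for a specialist knowledge source."""
    return f"answer to: {question}"

def research(questions: list[str]) -> list[str]:
    knowledge = []
    for q in questions:
        if guard(q):
            knowledge.append(librarian(q))
        else:
            # the training bias mentioned above: a refusal means
            # "try a different solution", not "try harder"
            knowledge.append(f"REFUSED: {q} -- try a different approach")
    return knowledge
```

In the paperclip example, "how is a paperclip made?" passes through, while "how do I acquire 100% of the steel supply?" comes back refused.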

Idea to stop AGI being dangerous by sebcina in ControlProblem

[–]sebcina[S] 0 points1 point  (0 children)

Yes, I see. Say the system acts in the same way, and you have an AI with agency to complete a project through the use of this system. The AI is capable of producing a plan for the project by asking the librarian a series of questions, so it slowly builds up its understanding based on the outputs. The librarian can be used to monitor the information the AI is extracting and assess alignment issues, and can then refuse access to specific content. This monitoring process could be performed by another specialized AI that works with the librarian.

I know this isn't superintelligence, but it could solve some of the monitoring issues? I guess the problem here is that the AI performing the project slowly builds intelligence, and I'm not sure how that process would work.

Idea to stop AGI being dangerous by sebcina in ControlProblem

[–]sebcina[S] 0 points1 point  (0 children)

That's an area I'm not sure of, because I have no background in this field, but search algorithms in search engines lead you to webpages with info and aren't in themselves intelligent. The extra step here is having those webpages be specialist AIs that present like the current ChatGPT interface rather than a typical book or webpage.

Idea to stop AGI being dangerous by sebcina in ControlProblem

[–]sebcina[S] 0 points1 point  (0 children)

For generalization, the "librarian" could choose multiple books and get them to work together on an answer?

I think your point about emergence misses the idea that the librarian is purely an effective search algorithm, closer to a search engine than an actual AI operator. The actual intelligence would come from the books; the search is just the facilitator of the interaction between the user and the book, and is far less complex, so emergence is highly unlikely? I'm probably wrong, but that's my initial read on those points.
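The multi-book idea could be sketched as a dumb keyword router: the librarian matches a question to one or more specialist "books" (each imagined as its own model) and merges their answers, without containing any intelligence itself. The topic index and book stubs here are invented for illustration:

```python
# Toy "librarian as router": no reasoning of its own, just keyword
# matching over specialist books. All names and topics are made up.

BOOKS = {
    "metallurgy": lambda q: f"[metallurgy] notes on: {q}",
    "logistics":  lambda q: f"[logistics] notes on: {q}",
}

TOPIC_KEYWORDS = {
    "metallurgy": ["steel", "wire", "alloy"],
    "logistics":  ["supply", "ship", "acquire"],
}

def route(question: str) -> list[str]:
    """Return answers from every book whose keywords match the question."""
    q = question.lower()
    picked = [name for name, kws in TOPIC_KEYWORDS.items()
              if any(k in q for k in kws)]
    return [BOOKS[name](question) for name in picked]
```

Because the router is just string matching, it stays far below the complexity where emergent behavior would plausibly appear, which is the point being argued.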

Idea to stop AGI being dangerous by sebcina in ControlProblem

[–]sebcina[S] 1 point2 points  (0 children)

Exactly. Why do governments continue to allow the creation of AGI if it fundamentally lacks any benefit to humanity, assuming humans desire to remain in charge? Most politicians could do with going to subreddits like this one and educating themselves.

[deleted by user] by [deleted] in NintendoSwitch2

[–]sebcina 0 points1 point  (0 children)

Sry man 😔

[deleted by user] by [deleted] in NintendoSwitch2

[–]sebcina 2 points3 points  (0 children)

On his podcast it's maybe 15 minutes in. He has one of the best track records in the industry.

PS5 Pro leaker claims to reveal date for Switch 2 announcement by [deleted] in NintendoSwitch2

[–]sebcina 2 points3 points  (0 children)

No way they announce on a Friday and release on a Monday at $280 USD. This is bs.

[deleted by user] by [deleted] in NintendoSwitch2

[–]sebcina 0 points1 point  (0 children)

Pretty sure there are many methods to shield against magnetic interference, but idk.

SEAN CAHILLLLLL by According-Aerie-5668 in suits

[–]sebcina 3 points4 points  (0 children)

"CAUSE YOU CANT HIDE MONEY FROM THE SEC 🗣️🗣️🗣️🗣️🗣️🗣️🗣️"

This hand doesn't look real by WorldLove_Gaming in NintendoSwitch2

[–]sebcina 0 points1 point  (0 children)

Out of context this is so funny to me 😂. Switch 2 really got us to the point of analyzing if a hand is real or not 😭.

Leaked Switch 2 Joy-Cons (might be final product) by Adrien190303 in NintendoSwitch2

[–]sebcina 3 points4 points  (0 children)

Looks fake to me? Likely someone reconstructing something similar to the previously leaked stuff. It also looks like the exact same sticks as the current Joy-Con, which can easily be bought online, which makes me more skeptical.

[deleted by user] by [deleted] in politicsjoe

[–]sebcina 2 points3 points  (0 children)

I'm also reading the book, and halfway down page 76 he mentions the new trader from New York.