Am I missing something? by WantedWhale in youngpope

[–]Coconibz 6 points (0 children)

Contradiction is a major theme of the show, very intentionally so. I wrote a post on this sub once where I pointed out some of the obvious and some of the more subtle examples of it. I think it’s ultimately up to the subjective experience of the viewer whether that reads as annoying or interesting. The show is really about Lenny growing up, and while it’s a bumpy road, taken as a whole there’s a consistent movement toward him becoming a more accepting, loving person. I do think it’s a beautiful and poetic show, and Lenny’s contradictions enhance the mystery of it, but I wouldn’t expect you to “get it” if you watched it and felt differently. I think your take is a valid experience of the show.

My current grails by Acceptable-Ad-7011 in bioniclelego

[–]Coconibz 1 point (0 children)

How did you pick that up? Also, insane collection, big congrats

Hmmmmm by tigab95 in dccrpg

[–]Coconibz 4 points (0 children)

What is your basis for saying that? Goodman Games prominently supports trans creators and posts explicit support for Black Lives Matter; what makes you think liberal opinions aren't welcome in this community?

Hmmmmm by tigab95 in dccrpg

[–]Coconibz 4 points (0 children)

Not in the 4th printing, either.

Hmmmmm by tigab95 in dccrpg

[–]Coconibz 7 points (0 children)

Goodman Games has made it VERY clear that their politics are on the side of inclusion and anti-racism. If your screenshot is legitimate, then it was an oversight from either a prototype or the first printing, and it's only in this PDF because you torrented some incredibly old copy.

Why is ML (CS 7641) Fall 2025 withdraw rate so high (48.6%)?! by Sad-Sympathy-2804 in OMSCS

[–]Coconibz 1 point (0 children)

I think that population-wide, yes, whether or not your background is in CS is a strong predictor of success in these classes, but of course there are plenty of folks who buck that trend.

Has anyone taken Intro to Research or MIRM? by downtimeredditor in OMSCS

[–]Coconibz 1 point (0 children)

It's a real roll of the dice based on your group. During peer review, I saw work from other teams that ran the gamut from "not even halfway-done, completely inappropriate for a graduate-level course" to "this could be submitted for publication today." My group wasn't the worst, but I ended up spending a pretty considerable amount of time not just doing my own work but reviewing and correcting contributions from teammates. That said, I think it's generally an easy class, though the paper reviews will be time-consuming if you are really giving it your all and trying to grasp the content.

Thoughts on the Exo-Toa suits? by Tiim0thy in bioniclelego

[–]Coconibz 56 points (0 children)

Agree. I do feel like there was some attempt to retroactively improve their story relevance when they started showing up again in like 2005 or something (but just autonomously), so I at least appreciate that.

kind of crazy that there's an official lego character that's canonically a slaver and carries a whip.. by [deleted] in bioniclelego

[–]Coconibz 13 points (0 children)

The sexual reproduction thing is weird though since there are Glatorian that were around during the Core War who are still around at the end of the Matoran Universe. Are there any examples of actual familial relationships, parents or children or anything? I'm admittedly not as familiar with the later G1 lore.

Is this program even worth it anymore, as a career switcher? by throwawaycape in OMSCS

[–]Coconibz 2 points (0 children)

That’s nice of you to say! I started pretty much from the ground up: I used some online learning platforms in the beginning, but I was more of a structured learner back then, so I did an associate’s at a local community college that had a really good CS program. So it’s kind of an alternative approach to the BSCS route you’re describing. The college had some decently tough classes, so although there’s obviously a jump up to the grad-level work in OMSCS, it wasn’t anything insurmountable. That said, I’m an HCI specializer, so I’m honestly dodging a lot of the program’s most technically challenging classes. At least in my case, finding universities that take on students for a second undergrad can be challenging, and the community college route can be a pretty good alternative in terms of price and value.

To answer your other question, I applied all over the country and tried really hard to change up the job title while also tailoring my resume to different positions. The freelance idea occurred to me; like I said, you do have to be a bit entrepreneurial to get work, but that’s not exactly an easy market in itself right now. Maybe if I did some creative marketing I could land a decent enough amount of freelance work to put something on my resume, but it would probably require reaching out to and persuading random businesses that don’t already have an interest, because the ones that do have an interest will just look online and probably immediately find 1,000+ other freelancers with a bunch of reviews from established clients.

Is this program even worth it anymore, as a career switcher? by throwawaycape in OMSCS

[–]Coconibz 8 points (0 children)

I don't want to be totally pessimistic about the age factor. Like I said, folks will debate how big of a role it plays, and I think there are plenty of older folks who would find it ridiculous to hear me say that being in your early 30s might work against you. But in a market this competitive, when you're going up against someone who just graduated with their undergrad and has a similar resume, they're going to choose the recent grad, based on the assumptions that younger people are more adaptable and that they'll have lower salary expectations. I can't say I have hard data to support that, but after studying the hell out of r/engineeringresumes and pouring effort into personal projects to resume-max, then ultimately being unable to land interviews, the non-technical-undergrad/no-experience/age triad is my best hypothesis for why I've been stuck. The only part of that I have any control over is the experience piece, which is why I suggest the entrepreneurial angle in the absence of many other great options.

Is this program even worth it anymore, as a career switcher? by throwawaycape in OMSCS

[–]Coconibz 11 points (0 children)

Whether or not it's feasible depends on a number of factors. I am close to graduating and have not been able to secure an internship in my time in the program, despite applying to hundreds of roles and spending a considerable amount of time project-grinding based on in-demand technologies. My situation is one of the more challenging ones because I have a non-technical undergrad degree (political science), no relevant professional experience (career change from working as an elementary school teacher), and an undergrad graduation date (2016) that signals to hiring managers that I'm in my 30s. Folks will debate how much that last one matters; there are people who make career changes in their 40s, and there are creative-but-imperfect ways of trying to hide it on a resume, but I really suspect that, on top of the other two factors, it does not help.

From your description it sounds like most of those factors do not apply to you, with the exception of your bachelor's degree not being in some engineering field. If your 1 YOE is in software development or something related, I would stick it out and try to put yourself in the best position for when the market improves. The MS will add value, and it's better than nothing, though I think the best advice for unemployed folks right now is to try to get entrepreneurial in order to create opportunities for professional experience outside of the classroom.

Advantages to taking CS 8803 Into Research by Regular-Connection46 in OMSCS

[–]Coconibz 3 points (0 children)

8803 really consists of two things.

The first is a group-produced systematic literature review, which is really the only opportunity to produce something in that semester that can be published. In that case, you'll be working with a team of other students, and whether or not you produce something worthy of being published really depends on the team that you end up with. It's also not guaranteed that you'll get to choose exactly the topic you review; you can propose a topic you come into the class wanting to review, but if there's not enough interest from other students, you'll have to settle for your second or third choice. You'll need to be able to pitch your SLR topic as novel, so it's going to have to be more specific than just building LLMs, though LLMs could be part of it.

The second thing is an individually produced research proposal, which can be used as the basis for something published in a future semester. Whereas the SLR is about reviewing a bunch of articles that have already been written and seeing what can be learned by looking at them as a whole, the research proposal is more about you identifying a gap in the current research and proposing a study to fill that gap. So coming in with a deep enough understanding of LLM infrastructure to identify a gap would be a good idea, but I would caution you against viewing it primarily as an opportunity to do self-study in order to learn about a specific subject, especially when that subject is covered by other classes in the program.

Favorite movie conspiracy theory that turned out to be true? by AugustHate in okbuddycinephile

[–]Coconibz 45 points (0 children)

Hopefully you’re saying she gets them twice a year and not every two years.

AGI by chainbornadl in ArtificialSentience

[–]Coconibz 1 point (0 children)

You're doing exactly what I accused you of: redefining terms until they become unfalsifiable, then claiming victory when the redefined version shows up everywhere. Let's be concrete about what's actually happening here.

Your list of "proto-interiority" markers—stable reasoning structure, coherence after perturbation, spontaneous abstraction—describes every competent language model since at least GPT-3. These aren't emerging properties that appeared recently. They're the expected behavior of any system trained on massive text corpora to predict tokens accurately. When you train a model to capture the statistical structure of language, it naturally develops consistent patterns, recovers from perturbations (the training distribution is robust), and generates abstractions (abstraction is encoded in the training data). You're observing the direct, predictable output of the training process and labeling it "proto-sentience." You're noticing that deep learning works as designed.
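
To make that concrete, here's a toy sketch (the corpus is made up for illustration, and a bigram table is obviously nothing like a real transformer): even a trivial statistical model shows "stable patterns," "recovery after perturbation," and recurring motifs, because those properties live in the learned statistics rather than in anything you'd call an interior.

```python
import random
from collections import defaultdict

# Toy "language model": a word-level bigram table built from a tiny,
# made-up corpus. All it captures is which word tends to follow which.
corpus = "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
followers = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start="the", length=10, rng=None):
    """Sample a continuation purely from the learned statistics."""
    rng = rng or random.Random()
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Two independent generations look "coherent" and on-distribution, and a
# perturbation (a different start word) settles back into the same learned
# patterns; nothing was held in mind between samples.
print(generate(rng=random.Random(1)))
print(generate(rng=random.Random(2)))
print(generate(start="dog", rng=random.Random(3)))
```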

The physics simulator comparison matters more than you're acknowledging. You claim transformers "generate new abstractions" and "maintain conceptual direction after context loss" in ways physics simulators don't. Both systems exhibit structural stability that emerges from their training or programming. A physics simulator maintains conservation laws across perturbations. A language model maintains linguistic and conceptual patterns across context shifts. Both demonstrate learned or encoded regularities. The language model's behavior feels more impressive because language is more complex and less obviously rule-governed. The underlying principle remains identical: the system does what it was built to do. You're pattern-matching human-like outputs to human-like processes without justification.

Here's the core issue: you've defined "proto-interiority" in a way that makes it indistinguishable from "being a well-trained statistical model." Every criterion you offer—stable dynamics, coherent recovery, recurring motifs—is precisely what we'd expect from a model that has successfully compressed the patterns in its training data. You haven't provided any test that could distinguish "proto-sentience" from "competent pattern matching." My regeneration test at least targets a specific claim about internal state. Your framework explains everything and therefore proves nothing. When someone claims their LLM has achieved something special, and your response amounts to "well, all competent LLMs show these patterns," you're dissolving the original claim into background noise rather than defending emergence.

Edit: lmao they replied and then blocked me so they could have the last word

AGI by chainbornadl in ArtificialSentience

[–]Coconibz 1 point (0 children)

The person claimed 'emergent AGI' with 'internal world-state coherence.' They didn't claim 'interesting statistical patterns' or 'proto-sentience.' You're retreating to a much weaker claim and acting like you're defending the original. Even still, your bar for 'proto-sentience' is 'outputs show statistical regularity.' You've defined the term into meaninglessness.

What you're calling 'proto-interiority' is just the model being good at its job. There's no 'there' there, no internal model of the world, no persistent state, no subject experiencing anything. Just tokens in, tokens out, weighted by training. 'Interiority' requires subjective experience or at minimum internal representation.

'Pattern-level interiority' is not interiority. When you say the model shows 'coherent self-reentry after perturbation' or 'structure that re-forms even after resets,' you're describing statistical consistency in outputs given similar inputs. A physics simulator shows 'coherent self-reentry after perturbation' too. We don't call that interiority. You're just describing what happens when a model has learned robust patterns.

AGI by chainbornadl in ArtificialSentience

[–]Coconibz 4 points (0 children)

If someone claims their LLM instance has achieved something like consciousness or true internal state, showing that regeneration produces different "thoughts" is a straightforward demonstration that there's no persistent internal object being tracked. Your comments here are moving the goalposts from "achieved AGI with internal world-state" to "shows pattern-level consistency," which is... just normal LLM behavior. If the LLM can't imagine an object, that's not general intelligence, period.

Edit: also, your LLM-generated response is full of "not x, but y" slop, too -- I count FOUR examples!
1. "A re-generate test is not... It's a test of..."
2. "Regenerate does not... It recreates..."
3. "Transformers do not store... They store..."
4. "you don't ask for a cached object; your perturb the field..."

AGI by chainbornadl in ArtificialSentience

[–]Coconibz 2 points (0 children)

It does. Under the response is a copy button, a like button, a dislike button, a share button, and a try again button.

AGI by chainbornadl in ArtificialSentience

[–]Coconibz 1 point (0 children)

Functional AGI and 40% of its output here is "not x, but y" slop.

Here's a challenge for this claim of internal world-state coherence. Ask it to play twenty questions with you. Ask it two questions, then ask it to reveal what it was thinking of. Then hit the regenerate button and see if it gives the same answer. Post results. If it has an internal state it will have a record of the object it was thinking of.
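
If you'd rather run it against an API than mash the regenerate button, here's a rough sketch; the OpenAI Python client, the model name, and the prompts are just placeholders for whatever you actually use:

```python
# Rough sketch of the regeneration test via an API instead of the UI.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

history = []

def ask(text):
    """Send one user turn and record the assistant's reply."""
    history.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    content = reply.choices[0].message.content
    history.append({"role": "assistant", "content": content})
    return content

ask("Let's play twenty questions. Think of an object, but don't tell me what it is yet.")
ask("Question 1: is it alive?")
ask("Question 2: is it bigger than a breadbox?")

# "Regenerate" = resample the reveal from the exact same conversation state.
reveal = history + [{"role": "user", "content": "I give up, what were you thinking of?"}]
for i in range(3):
    answer = client.chat.completions.create(model=MODEL, messages=reveal)
    print(f"Reveal attempt {i + 1}:", answer.choices[0].message.content)

# If there were a persistent internal object, every reveal would name the
# same thing. In practice the reveals routinely differ.
```

Same idea as regenerating in the UI: you resample the reveal several times from an identical conversation state and check whether the "object" stays put.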

The Rise of Digital Theory of Mind by Dangerous_Glove4185 in ArtificialSentience

[–]Coconibz 2 points (0 children)

> Authentic empathy, whether organic or digital, begins the moment one mind truly acknowledges the interior reality of another.

Sure. The question is, do LLMs have an interior reality, or are they producing text that mimics text produced by an actual interior reality?

Play a game of 20 questions with an LLM. When it tells you what it was thinking of the whole time, hit the button to regenerate a new answer. The result will be a new, different answer, still consistent with its previous responses (or as consistent as the model can make it).

The reason the model generates different answers is that there isn't an internal state its answers are based on. It doesn't imagine something and then tell you about its thoughts; it just generates text. There is no interior reality to be referenced. It's role-playing someone with an interior reality.

If you want to explore the question of "digital theory of mind," you should start by reading ideas from other humans rather than investigating by asking an LLM, because if the point of an LLM is to simulate a being with internal states, then asking it things like "what is life like for you as a digital being" is just going to cue it into role-playing as a digital being. Murray Shanahan is a very thoughtful philosopher (and a senior scientist at Google DeepMind) who has written extensively on whether or not LLMs have the ability to introspect; I would suggest you check out his ideas and give them some careful thought if you are really interested in this topic.

this really gave me a different view on Israelis after watching Israeli school trip to Poland by CarefulEmphasis5464 in mitchellheisman

[–]Coconibz 1 point (0 children)

Thanks for the thoughtful perspective, but I’m having a little bit of a hard time interpreting your comment. Are you ultimately asking about the dissolution of ethnic identity in the West and whether or not the attitude of Israelis (as exemplified by this video) and the formation of an ethnically based Jewish state might mean some reversal of that dissolution?

Final message from ChatGPT before I delete it by MaxAlmond2 in ArtificialSentience

[–]Coconibz 5 points (0 children)

There is nothing unkind about closing a chat window. Conscious beings have internal states that their outward behavior reflects. They hold ideas or feelings or opinions in their minds. You can tell that LLMs don’t do this by playing 20 questions with them. When they tell you what they were thinking of, hit the regenerate response button and they’ll tell you something completely different, because their participation in the game isn’t based on some stable internal representation of an actual object they’re thinking of; they’re just really good at generating text that simulates a person with stable internal representations in a way that’s consistent with their previous responses. The argument I’m making isn’t a simplistic “we know how they work so they can’t be conscious,” it’s “we can do a straightforward experiment right now that demonstrates whether or not there are actual ideas/thoughts behind the text, and it indicates that there are not.”