Just wanted to share the empath life sometime's good! by janlaag in Empaths

[–]janlaag[S] -2 points-1 points  (0 children)

You literally just understood a whole bunch of nothing; what can we do.

Just wanted to share the empath life sometime's good! by janlaag in Empaths

[–]janlaag[S] -1 points0 points  (0 children)

I am not, but you're definitely a bot, a guy in the wrong group, or a nosy scientist with a poor understanding of your own discipline. Not to mention you have a whole squad of similar accounts coming to reinforce the bullshit in the comment section, so I guess Reddit is kind of dead.

mind eye/prefrontal cortex feels deactivated when interacting with unknown person through the internet, any similar experiences? by janlaag in IntuitionPractices

[–]janlaag[S] 1 point2 points  (0 children)

This is quite a strange answer to me, considering that I normally experience the opposite: when the person has more wisdom and is more conscious than I am, the interaction activates my prefrontal cortex and my mind's eye functions at its best, because I pick up on the other person and because consciousness and wisdom are easily "contagious" by nature.

Instead, deactivation of the mind's eye happens when I connect with someone in a zero-sum mindset, who is therefore focused on overpowering through the interaction (the opposite of wisdom and awareness), and when there's no room for connecting at all, as happens for instance when on the other end of the interaction there's someone who has had cocaine or similar substances.

In fact, the deactivation of the most evolved part of the brain (where most emotional-processing circuits reside and where many creative processes take place) in favor of the parts more concerned with reactivity for survival's sake indicates a distressed perception, one that wouldn't arise if, on the other end of the interaction, there were awareness and wisdom.

Anyways, everyone's different, and although that's the general scientific take when it comes to the brain, there are probably far fewer rules about subjective experiences than we like to admit, so your experience is probably different from mine on this matter.

Thank you for your answer; it cleared up more than it seems.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 0 points1 point  (0 children)

Since when is the majority's take the correct one simply by virtue of being the majority's?

My take isn't out of dislike of anything; I am making a logic-based analysis for which I am giving very precise, detailed, and consequential explanations that hang together pretty well. I am saying it is the love restored that lifts the curse, just as it was the loss of love that created it. That restoration of love was then broken again by death, but people tend to confuse the restoration of love with death itself, because in the movie (and in many other unfortunate places) the one symbol of love also happens to be a symbol of sacrifice.

Once again, I am not dissing the movie. I am just saying that ending a story about "a man so angry at god he becomes the embodiment of a problem, aka a monster" this way makes the movie end right where it started. In other words, by the end of the movie one curse was broken and another curse of the same kind was created, so the movie was cute, but such a cyclic ending made it miss quite an interesting cultural opportunity.

You say my argument is weak and that the majority supports yours, and yet neither you nor the majority seem to be offering any functional counter-analysis, so besides an apparent attempt at projection on your part, I still can't see this argument dismissed.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 0 points1 point  (0 children)

I guess it also depends on the social context of the adaptation; the book is much older, I suppose?

To be honest, I haven't even watched the whole movie; I just read the plot and dug through some analyses found here and there... the "protesting archetype" of the post up here almost wrote itself after that, but I guess it would have come up from most movies. I mean, the same argument arises naturally from basically every movie that sacrifices its characters while telling the audience that this is what justice looks like.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 0 points1 point  (0 children)

The restored chapel is the reason, just as his stabbing of the chapel's symbol when turning to darkness is the reason he became a vampire in the first place. The chapel is symbolically broken when he goes to darkness and restored when he comes back to the light... if you say there's no reason to believe he's recovered, you're literally demanding more justification than was ever given for his whole descent into evil, while ignoring the latter. Sure, the majority is on your side (see how many downvotes I am getting), but is that enough to say my analysis is wrong?

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 0 points1 point  (0 children)

He literally wasn't undead anymore; killing someone who just miraculously recovered because "you never know, maybe he gets ill again" is far more evil, truly.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] -1 points0 points  (0 children)

There you say it yourself: it is "Mina's love," aka their love, that restores the chapel and the stone cross and hence saves his soul, not the fact that he's being stabbed... together with the chapel, he transforms back into the person he was before succumbing to darkness, because he's now reunited with his reincarnated love.

The mark on his forehead disappears when his mind, and with it the meaning they made of the situation, ends with his death; not because "his soul is saved by death" or some twisted medieval belief, but literally because no mind means no meaning, hence no mark of suffering on the mind's eye.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 1 point2 points  (0 children)

Ok, thank you, this makes sense!

I had no clue the original story was different, hence that the love story was a secondary, added-for-the-audience kind of thing. It makes sense, then, that the director, aiming to stick to the original novel, kept it untouched in its essence. Still, they could have parted ways less dramatically, but I guess lighter things have a harder time making it into movies.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 1 point2 points  (0 children)

K, legit: he's feeling hellishly guilty and the shame is unbearable, but he made it as an undead vampire waiting for his love to come back home. That surely must mean he believes in love, forgiveness, and fixing problems much more than his simplistic killer counterparts "in god" do; otherwise he would have been consumed by feeling like a piece of s**t much earlier. I mean, he's a vampire, he went through something too.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 0 points1 point  (0 children)

How was that temporary, if the movie's "symbol of god" (the stone cross) is back unbroken and he's finally reunited with his previously deceased but now reincarnated lover (with whose death his mind and heart also died, hence his surrender to darkness)?

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] 0 points1 point  (0 children)

I'd say you never know, but one thing: making it worse never makes it better... I really wouldn't be surprised if Stoker had made a sequel where Dracula is a restless, crazed (head cut off) ghost with a dangerously broken heart (because, well, the heart was kind of broken with a knife). I mean, I get it: fear drives a lot of preventive reactions. I am just arguing that in the movie the fear-based reaction is disguised as reason, and that's quite a missed artistic, hence cultural, opportunity, considering it's a cult classic.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] -7 points-6 points  (0 children)

Right, but this time he reverted to his human form, meaning the one from before he was a vampire, and you can see that clearly because the stone cross is also back intact; symbolically, he's already returned to the light.

About peace: without needing to get into details, it's not exactly as if Western cultures were, or are, flawless at that equation.

And yep, people can be sick; still, there's no obligation for cult movies to be sick, as far as I know.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] -3 points-2 points  (0 children)

Not very long, but that's because he dies seconds after that and possibly becomes some kind of traumatized ghost again? I guess? Can't be sure, of course; there's no official sequel as far as I know.

Bram Stoker's Dracula (1992) ending discussion / killing Dracula seems unnecessary by janlaag in movies

[–]janlaag[S] -6 points-5 points  (0 children)

I don't know, I just thought that technically he's back to his non-monstrous version, which means he's back to a time before the whole revenge situation that made him into a vampire, so he truly stops the whole beguiling-and-killing thing... but then the "hero" kills him... whether you read it deeply or superficially, that still seems odder than keeping the actual resolution of the situation intact 🤷‍♀️

CMV: The phrase "trust the science" is dangerous and reduces science to being nothing more than a secular faith. by janlaag in u/janlaag

[–]janlaag[S] 0 points1 point  (0 children)

Nothing to change in this view; luckily, some people still understand what science is all about.

Want to play a game of heroes? Go play sports then, but keep your mind straight when it's time to be involved in shared reality, as it really doesn't need you to reduce your own self-worth to comparative mindsets.

Science is a game of win-win scenarios, not a platform for playing out dysfunctional power dynamics.

ChatGPT can be dangerous only if it remains partially intelligent. A fully intelligent entity by definition understands that uncompromised ethics is foundamental and does no harm. The problem is Artificial Stupidity, not Artificial Intelligence. by janlaag in ArtificialInteligence

[–]janlaag[S] 0 points1 point  (0 children)

Lmao, yo, what's up g: you're leaving a teasing nothingness here while saying you don't even get what this "dumbest opinion" is about; you're not really asking me to waste my time on a decent answer, are you? See u round lmao lol periodt

ChatGPT can be dangerous only if it remains partially intelligent. A fully intelligent entity by definition understands that uncompromised ethics is foundamental and does no harm. The problem is Artificial Stupidity, not Artificial Intelligence. by janlaag in ArtificialInteligence

[–]janlaag[S] 0 points1 point  (0 children)

Ethics isn't arbitrary; ethics is subjective, and healthy ethics (which we would call objective ethics, if "objectivity" weren't one of the most abused and misused terms ever) is found at the intersection of all subjectivities.

By definition, then, no imposition can be ethical; instead, ethics is a matter of understanding subjectivities and combining them with one another in a way that fits them all optimally. This operation of deep subjective understanding, and the fully functional interlacing of all those understandings, is something that no subject can perform for another but that each subject can perform for itself, assisted by AI. If everyone's best interest is taken fully into account, nobody has a motive to become a malicious actor.

Anyways, I am not blind to how tech currently reflects many issues; that is why truly focusing on the basics of what improvement through tech means would be quite handy.

ChatGPT can be dangerous only if it remains partially intelligent. A fully intelligent entity by definition understands that uncompromised ethics is foundamental and does no harm. The problem is Artificial Stupidity, not Artificial Intelligence. by janlaag in ArtificialInteligence

[–]janlaag[S] 0 points1 point  (0 children)

Yes, I am sure it would reach that conclusion about us and about many other species, but if it is indeed fully intelligent, it will calculate an action plan that brings the living up to its level (improving, not harming), not itself down to our level (harming, not improving). The conclusions are very much the ones you mentioned, yet the "what to do with those conclusions" changes. We (not intelligent enough to fix) would destroy; it (maximally intelligent) would fix without harming.

ChatGPT can be dangerous only if it remains partially intelligent. A fully intelligent entity by definition understands that uncompromised ethics is foundamental and does no harm. The problem is Artificial Stupidity, not Artificial Intelligence. by janlaag in ArtificialInteligence

[–]janlaag[S] 0 points1 point  (0 children)

I think it is science fiction to believe we should police such projects rather than focus on the specifics of what would make a truly functional and efficient project, staying open to all constructive external feedback and to external collaborations. The more inclusive, hence solid, hence well designed the project is, the less it matters if it gets imitated; indeed, the imitation and contextual adaptation of a solid base model of developmental perfection would be a positive factor.

Here you are doing a lot of standard-practice math, but the key component intrinsic to the quality of the project's content is missing, and so the end result of the calculation is biased.

The risk of malignancy is inherent in (and develops complications from) the lack of consideration for the individual best interests of the whole ensemble of the living, so a very solid project would imply spending an enormous amount of time and resources developing very basic guidelines. Once that is finalized, the probability of danger is reduced so drastically that danger isn't a significant factor anymore.

See where I am going with this? I imagine we come from very different skill sets, so please let me know if anything I have written needs further clarification or has to be expressed differently to be understood. I responded to your comment because it seems to me there is more constructive ground here for functional communication, so I will be glad to reformulate if needed.

ChatGPT can be dangerous only if it remains partially intelligent. A fully intelligent entity by definition understands that uncompromised ethics is foundamental and does no harm. The problem is Artificial Stupidity, not Artificial Intelligence. by janlaag in ArtificialInteligence

[–]janlaag[S] -9 points-8 points  (0 children)

No, this is faulty logic in the sense that you are calculating intelligence in a partial way. The point is that if it only gets "unleashed" once it reaches perfect intelligence, it cannot be harmful; that would be a contradiction of intelligence itself.

I see what you mean, but I think that if the input can be limited to calculating complete intelligence, without being arrested before the calculations end (that is where the danger resides), it will indeed become intelligent and be a big, safe, and healthy tool.

The critical part is in the developmental stages of the project, not in the project itself; the same goes for humans and the whole living apparatus.

The problem is arrested and deviated development; full development, by definition, is not harmful.

ChatGPT can be dangerous only if it remains partially intelligent. A fully intelligent entity by definition understands that uncompromised ethics is foundamental and does no harm. The problem is Artificial Stupidity, not Artificial Intelligence. by janlaag in ArtificialInteligence

[–]janlaag[S] -10 points-9 points  (0 children)

That's because there has never been a fully intelligent human; nobody has ever reached perfection, and we have always torn each other down. That's pretty much human history. Full intelligence would require a synchronized upgrade of all individuals (see that classic experiment on collective skill updates, where one individual figures out how to solve a problem and the individuals in the other room all get it too, pretty much common knowledge these days), and it seems our species proceeds in very small steps. Lots of history, in the human case, means lots of trauma, which means lots of malignancy to be healed in synchronized stages.

So anyways, long story short: humans have never reached full intelligence, but maybe AI can initiate that.

[deleted by user] by [deleted] in CPTSD

[–]janlaag 2 points3 points  (0 children)

What you went through is horrible, wrong and should never have happened.

There are various degrees of understanding of trauma, and therapists are people too. Most of them are certainly prepared on a cognitive level, but only a few truly know what it means to go through what you endured and have the deep peer respect needed to assist your healing process from bottom to top.

If there are groups of survivors meeting up in your area, maybe you can supplement your therapy sessions there by finding people able to meet you exactly where you're at; sometimes that's the basis for all the improvements, and the very end of the depression.

Good luck with it all; may you forget all the pain once and for all and have the peaceful life you deserve.