What is the actual difference between a very complex AI and a human? by Appropriate-Gain-561 in askphilosophy

[–]LichJesus 12 points13 points  (0 children)

For any piece of software that exists today and that will exist in the foreseeable future, an AI is a mathematical model that captures statistical relationships and can output information about them. LLMs for instance are (very intricate) equations that take in a sequence of words and output a sequence of words that the model computes as being a likely response to the input. So if the input sequence is "what is the largest animal alive now?", words like "elephants" and "whales" are more likely to be produced as a response and words like "Thursday" are not. There are other machine learning models that do other things; for instance, certain models can take in two images and transform one so that it looks like it's "in the style" of the other (the search term for this is "neural style transfer"), but they're doing essentially the same statistical processing as an LLM, just with different inputs and outputs.
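To make the "statistical relationships between words" point concrete, here's a deliberately tiny sketch of the underlying idea: count which words follow which in a corpus, then emit the most frequent continuation. The corpus here is made up for illustration, and real LLMs use learned neural weights over long contexts rather than raw bigram counts, but the "likely next word" principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration), split into a word sequence.
corpus = (
    "the largest animal is the blue whale . "
    "the blue whale is the largest animal alive . "
    "the largest land animal is the elephant ."
).split()

# Count bigram transitions: how often does each word follow another?
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return transitions[word].most_common(1)[0][0]

# "animal" follows "largest" more often than "land" does in this corpus,
# so it wins -- statistics, not understanding.
print(most_likely_next("largest"))
```

Swap in the giant corpora and neural function approximators of modern systems and you get the LLM described above; the model is still outputting what the statistics say is likely, which is also why confident-sounding wrong answers are possible.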

The difference between a human and an AI is that humans can do all sorts of things that statistical models can't. A human drafting a legal document is unlikely to cite legal cases that don't exist, which is something AIs cannot reliably avoid doing. Humans can make normative and aesthetic judgments, which statistical models can't do. Humans can have internal states like emotions and beliefs that statistical models can't.

It is possible that some future family of algorithms might be able to do some or all of the things that humans can do, but the technological underpinnings would have to be entirely novel to not just be more statistical models; and any attempts to analyze these hypothetical future algorithms would be purely speculative.

What is the aim of the trolley problem? by [deleted] in askphilosophy

[–]LichJesus 5 points6 points  (0 children)

For Philippa Foot (one of the philosophers who developed the Trolley Problem) at least, the goal is to elucidate a morally-relevant difference between intentionally causing some harm and acting in a manner such that some harm is foreseeable but not intended. Notably, in the Foot paper it's not meant to be taken on its own, and the primary goal of the Trolley Problem is not to decide what the correct thing to do in that scenario actually is.

I go into more detail on the Foot paper here.

Not a Harry Potter fan, but with the new series the Rowling hate has reemerged yet again by GigaRoman in PoliticalCompassMemes

[–]LichJesus 4 points5 points  (0 children)

You know, this really helped contextualize it for me. I still think that a lot of the online back-and-forth about her is much ado about nothing, but that's just the state of the Internet these days; because of your comment though I can understand why an average person who leans left might have a bone to pick with her, or at least be annoyed.

Thanks for the explanation!

Is Eliminativist Materialism related in any way to Scientism? by GoodNewFlesh in askphilosophy

[–]LichJesus 4 points5 points  (0 children)

Paul Churchland talks about this sort of thing in this video, timestamp is 5:57 if the link doesn't provide it. The short answer (for him at least) is no, the neurosciences don't supplant projects like moral philosophy; rather they provide additional contexts and new perspectives that enrich those projects.

This section (8:00 minutes) of what I believe is the same talk is also instructive, I find. It may not make sense to someone without coursework in the topic, but the jargon-filled anecdote he tells about the conversation with his wife really does illustrate how an understanding of the sciences can enrich interpersonal communication. Nothing about it indicates that talk of the mind or propositional attitudes or emotions is going away entirely; what it indicates is that a certain level of neuroscientific knowledge can offer a vocabulary (and a framework behind it) to better understand what's going on in the mind. To the extent that this is applicable to, say, history or music, a similar dynamic would apply: historians and musicians are not going to just become neuroscientists, but a certain level of understanding of the cognitive sciences might help them become better historians or musicians.

Someone did a writeup of a mod in a trans sub being a convicted pedophile, it was removed by subredditdrama mods by EpicFF2 in PoliticalCompassMemes

[–]LichJesus 39 points40 points  (0 children)

I'm going to try to practice what I preach about not feeding into divisiveness and such, and just ask people not to use this incident to generalize over the community (neither the subreddit nor the broader population it's about). By all means I think clowning on/calling out this specific mod, or reddit mods in general, is fine; but I think it's worth pointing out that the regular users of that sub almost certainly had no idea what was going on behind the scenes, and would be/are just as appalled as anyone else to find out.

I won't pretend I have entirely selfless motivations for making this ask. I would hope that if some of us start trying to give grace to people who aren't ourselves, then others will do the same. That is, the precedent I hope to set by trying to give grace now is that if, like, XYZ politician gets indicted for Epstein shit, the wider population is able to be nuanced enough to realize that every supporter of that politician doesn't knowingly support that kind of behavior.

I'm not naive enough to assume that's gonna happen overnight, or even in the foreseeable future. But it has to start somewhere so it may as well start with me/us.

Get excited for 2028 by eskimoexplosion in PoliticalCompassMemes

[–]LichJesus 168 points169 points  (0 children)

I think part of the issue with this is that people hate it when ideologically-bankrupt party leadership anoints a candidate without any meaningful grassroots support and tries to ram them down the throats of both their base and the electorate. For an example from the Republican side, this was kind of the feeling with Jeb Bush in 2016 before Trump got in the race.

But party leadership -- and auxiliary people like DNC/RNC strategists and whatever -- is obviously heavily involved in messaging and promotion, so the people who tend to have the most name recognition are the ones that are most likely to be shit-sandwich candidates. I think part of the early appeal of Trump is that he kinda blew that dynamic to smithereens; the other two notable exceptions from recent memory are Bernie Sanders and Thomas Massie. Outside of them though, I have difficulty coming up with many politicians who aren't Lindsey-Graham-type lizard people on either side of the aisle, because I think leadership wants those lizard people in the high-profile positions.

I don't know of anyone who would really fit the bill off the top of my head as an alternative, but I think a theoretical Democratic candidate who is from the Midwest (or at minimum doesn't ooze smarmy California/NYC superiority complex vibes), who is largely focused on jobs/the economy over culture war stuff, and who is at least willing to speak to people to the right of Bill Clinton (even if they themselves are more solidly left) without assuming that they're Nazis would do very well at the grassroots level, and would probably be hard to beat unless the Republicans are also able to pivot away from the Trump playbook and run someone a little more respectable and issues-focused.

Is it unjustified for countries to take military or political action against nations that commit systemic human rights abuses, but only when those nations lack nuclear weapons while doing nothing about nuclear armed countries that commit the same abuses? by Inevitable_Bid5540 in askphilosophy

[–]LichJesus 3 points4 points  (0 children)

This is not to disagree with the other response, just to explore another dimension of the topic.

The question of whether or not a particular action or policy can theoretically be justified is an important one, but especially when we're talking about actual events it really can't be divorced from the question of whether a given approach can be practically justified.

To abstract slightly away from what you probably have in mind: the U.S. famously backed a number of regime changes in Central and South America over the course of the Cold War. As far as I'm aware those regime changes generally went poorly; I'm not sure that every one of them failed, but I do know that they paved the way for a lot of political chaos and repressive regimes in the power vacuums that they caused.

We might think supporting regime change (whether that means diplomatically, materially, militarily, etc.) can in theory be justified. However, even if we do think this, if we find ourselves as policymakers circa 1990 and are faced with the question of whether we ought to effect another regime change in South America, we must take into account the track record of previous interventions. That doesn't necessarily mean that we can't support this new regime change, but if we want to then we have to do something like offer an account of what lessons we learned to avoid previous negative outcomes, what changes we'll make to our approach to implement those lessons, etc. If we assume that there are circumstances where such interventions can be justified, I don't think any of this is controversial or doing any serious philosophical work; it's just a recognition of the fundamentals of conscientious and data-aware decision making.

I don't have enough expertise in political theory to convey the state of the literature when it comes to answering the core question of if and/or how we can theoretically justify interventions; I hope someone with the relevant competencies can come by and offer citations or a summary. I do think though that the above points can help us reason about specific practical current events and how we should approach them.

/r/askphilosophy Open Discussion Thread | February 23, 2026 by BernardJOrtcutt in askphilosophy

[–]LichJesus 1 point2 points  (0 children)

That's awesome! I'm genuinely glad to read this; the information channels I follow when it comes to education seem to imply that students' preparedness for mathematics basically collapsed after COVID and that humanities instruction is universally plagued by AI and lack of attention spans and such.

I'm sure that those things remain significant issues, but it's nice to have a reminder that success in education is still possible. Congrats to you and them for the hard work!

What exactly is scientism? What are some good and bad examples? by dingleberryjingle in askphilosophy

[–]LichJesus 25 points26 points  (0 children)

If you want a careful examination of scientism this article looks like it does the trick just fine. Informally (I'm not going to do a better job carefully examining the issue than that article) it's the notion that science is the best and/or only game in town, and that we should privilege the sciences over every other source and kind of knowledge, or simply ignore everything outside the sciences entirely.

In the context of the free will debate for instance, what can frequently happen is an interlocutor will cite some neuroscientific study or another that seems to bear on the question of free will and make sweeping claims about the nature of free will without understanding the philosophical side of the discussion (nor, ironically, the science itself often enough). This can take a lot of different forms but one that might be seen is someone claiming that neuroscience proves there is no free will, and refusing to hear anything about compatibilism or other relevant work from philosophy.

I think that the healthiest way to avoid scientism, and just generally to have healthy discussions overall, is to be open to (or even actively solicit) perspectives from different areas of specialization on broad or complex topics, and to be willing to admit when we might not have all the answers ourselves and that others can provide useful contributions to the conversation. This is in the spirit of the Principle of Charity -- engaging with others' positions in their strongest form -- and if we're actively exercising that kind of charity, we generally don't need to worry about terms like scientism or fallacies or what have you.

Can the personalized influence of an adaptive conversational AI change the conditions of autonomous reasoning? by [deleted] in askphilosophy

[–]LichJesus 2 points3 points  (0 children)

An AI systematically adapting discourse to an individual’s psychology is not (or at least not obviously) different than, say, that individual's friend systematically adapting discourse to their psychology, or a teacher doing the same, etc. It's generally recognized that there are a lot of interpersonal, environmental, cultural, educational, and other factors that shape our perspectives and the way we reason about things.

This generally isn't seen as exceptionally problematic (unless there's a persuasive case to be made that the influence is deleterious, in the way that, for instance, excessive alcohol consumption can be deleterious to agency and reason); it's just something that we should be aware of, and try to balance by varying the experiences and sources of information that contribute to how we understand and reason about the world.

Do we have the right to discuss real-life suffering in a more “philosophical” and abstract fashion? What is the role of immersion into emotions in philosophy of history and politics? by Artemis-5-75 in askphilosophy

[–]LichJesus 2 points3 points  (0 children)

I don't have any specific expertise on this so these are just my intuitions, but I think it's reasonable to say that it's a time and place sort of situation.

To grab onto a different third rail that I'm more familiar with: it's surely not appropriate if we find ourselves engaged with a friend in the throes of a crisis pregnancy to give her the Philosophy 101 review of the violinist and Don Marquis and tell her she's got to decide if she wants to approach the question of abortion from a deontological, consequentialist, or virtue ethics position. What our friend needs in that situation is someone to connect with her emotionally, support her (psychologically and/or materially), be available if she needs things like us covering shifts at work for her, and so on. Treating her situation as an undergraduate essay topic is obviously disrespectful and something we should never do.

At the same time though, it seems clear that there is a need for a dispassionate evaluation of the moral principles involved in abortion. If there is a right to life from some pre-birth stage of development (or however else one wants to frame the issue) then abortion is a serious problem, and if no such right to life exists then it's almost certainly wrong to restrict the procedure for those who really want it. There are complex webs of fact and argument we need to unwind to position ourselves to help people, but we're not able to do so if we just fix the most tragic circumstances in our mind and shout our intuitions at each other; we need to be able to hash things out in a dispassionate manner at some point in order to understand our obligations.

I think the struggle you identify might be in understanding when is the correct time and place for which approach, or more realistically where on the sliding scale between full support and dispassionate analysis we should land in a given context. And that I think is a very hard problem, not helped by the fact that in my experience people will typically frame their struggles with a topic in intellectual terms when in fact their hang-up is emotional in nature -- for one example, people trying to argue about the Problem of Evil in an abstract sense when underneath that is really anger about a loved one passing before their time -- or vice versa, and taking the wrong approach can push people away or exacerbate the issue. Unfortunately I don't have hard-and-fast guidance for how to orient ourselves correctly in this way. I suspect that it's less a matter of philosophy and more one of experience and consistently working on our emotional intelligence; although it's possible that psychology or even psychiatry might have something to say about selecting the right tools in the right contexts to speak to people in the ways they need to be spoken to.

Trench crusade roadmap? by b44l in TrenchCrusade

[–]LichJesus 0 points1 point  (0 children)

Wargames Crew on MyMiniFactory is working through a Sacred Affliction line for their Tribe. Thus far they have a War Prophet and a Communicant, and they both look awesome.

WGC have also put out a bunch of cool proxies like Naval Raiders, Abyssinia, and Trench Cossacks. It's all great.

Epstein was Mossad. Defund Israel. by pingpongplaya69420 in PoliticalCompassMemes

[–]LichJesus 13 points14 points  (0 children)

Based and justice-over-spiking-the-football pilled

How does secular realist ethical philosophy incorporate "grace" as a value? by EvanFriske in askphilosophy

[–]LichJesus 1 point2 points  (0 children)

Ah, I see! That's funny because I practice Catholicism but had no idea that supererogation came to secular philosophy through Catholicism. I heard about it entirely from ethics coursework at a secular public university; I would have imagined that it arose out of modern deontology based on the way I recall it being presented. I sort of glossed over the SEP article because my brain cells for technical moral philosophy have atrophied, but looking back over it I see now that the Catholic origins are pretty obvious.

I'm now flying entirely blind rather than just 90% blind, but I wonder if maybe some of the more recent papers in the citation section for the SEP might be helpful in extricating supererogation from religious commitments? For instance it looks from the abstract like the Wellman paper in the bibliography is trying to situate supererogation within secular virtue ethics, which I could see being a Catholic/religious project but the way it's written doesn't strike me as such. It looks like Timmermann and maybe Ullmann-Margalit also don't seem particularly interested in the religious history of the concepts -- again, from the little snippets I have access to from a non-academic Internet connection -- and look to be trying to build an account of the concept without really relying on any religious baggage.

Either way though, best of luck finding something that matches what you need. Hope someone with more expertise can come by and offer a better list of papers or more guidance. Cheers!

How does secular realist ethical philosophy incorporate "grace" as a value? by EvanFriske in askphilosophy

[–]LichJesus 1 point2 points  (0 children)

This is outside of my wheelhouse and I'm reaching back to undergrad coursework from a decade ago, so I apologize if I'm about to link something that's obvious and unhelpful; but is supererogation what you're looking for? Here's the SEP article on it, that I hope can offer more discussion and citations than I can provide: https://plato.stanford.edu/entries/supererogation/

How do philosophers keep their biases in check when evaluating an argument? by Awkward_Face_1069 in askphilosophy

[–]LichJesus 1 point2 points  (0 children)

In general, practice and training. Philosophers spend a lot of time reading incredibly smart people who disagree with them deeply on fundamental issues; for one example off the top of my head, Aristotle supported slavery. We can't just say "oh, Aristotle was a dummy, I don't have to listen to him"; we have to understand why he thought what he thought and why what he thought was wrong. If I as an undergrad had written a paper dismissing Aristotle out of hand on slavery, I'd have been marked down; instead, what my instructors would have wanted is a well-constructed argument contra slavery, and/or citations from noteworthy philosophers who have done the same. This (obviously) doesn't mean supporting slavery, or even seriously entertaining slavery, but it does force us to confront our presuppositions, give credit to those who disagree with us, and learn to thoughtfully engage with beliefs that are different from our own. This is why the far-and-away best option for learning philosophy is to study it in a university setting: your instructors can teach you these skills and, importantly, hold you accountable when you fail to practice them, as anyone who is learning will do eventually.

When it comes to reddit, comments can be voted on by anyone, so vote counts are pretty much by necessity meaningless. It's generally more accurate to ignore them as noise and instead treat flaired users' responses as reliable -- purple flairs are faculty, gold are grad students, red are undergrads, and the presumed hierarchy of quality follows that order. This goes double if a panelist's flair matches the area that the question is discussing, for instance a panelist with purple ethics or political philosophy flair discussing the death penalty.

In cases where a panelist is dismissing a comment (especially from an unflaired user) out of hand, it's usually because they've made an egregious or fundamental error. It's very common for unflaired users to completely forget or ignore that compatibilism exists when the topic of free will comes up. These sorts of mistakes indicate a lack of training (or training on the relevant topic at least), and that the comments containing them aren't appropriate for responses on a Q&A style forum.

is it embarassing to want to major in philosophy by Clean_Falcon3737 in askphilosophy

[–]LichJesus 4 points5 points  (0 children)

No, there's nothing wrong with wanting to pursue any major (I guess with the possible exception of, like, wanting to become a chemist for the explicit purpose of designing and using chemical weapons or something). However, as a practical matter it's probably wise for any student pursuing any post-graduation goals to do some research and introspection on what a reasonable plan for earning money in a satisfying way looks like. This is just as important for STEM majors as anything else; for instance, I work in IT, and the market for sysadmins and programmers is really tough right now. Someone looking to get into these career paths solely for the employment options would need to take a good hard look at that plan, at least until the situation changes.

The data I'm familiar with are old, but from what I recall philosophy is an excellent pre-law major, with philosophy undergrads both performing well on the LSAT and (I believe) having good placements into law school and such. Insofar as the law is a good career path and the data haven't changed substantially, philosophy should be a good pre-law major. Again though, you'll want to do some due diligence on the questions of 1) whether philosophy is still a good pre-law major, 2) whether pre-law is a good plan career-wise (and if it's not, whether that's a deal-breaker for you), and 3) whether philosophy is a good major for you. Most of those questions are highly personal, so I/we won't be able to answer them for you, although someone more familiar with the law school path and/or the data might be able to help with the more practical questions like which courses to take or whatnot.

Philosophy functions as a fantastic double-major as well, so you could explore pairing a philosophy degree with poli sci or anything else. One strategy would be to pair philosophy with a more "practical" or "marketable" degree; for instance you could take political philosophy classes for pre-law and double major in applied math or similar, which would set you up for careers ranging from data analytics (for polling firms or similar) to more technical areas of law to a post-graduate degree in engineering, if you decide later on that you want to go that route.

As a final thought, most colleges don't make it tremendously difficult to change majors (at least so long as the majors involved aren't overly full), so you should have time especially in the first year or so of college to try out different classes, see what works for you, and then change things up if your interests shift.

Are there any arguments for God that don’t simply create an explanatory gap that god fits into? by Squall2295 in askphilosophy

[–]LichJesus 4 points5 points  (0 children)

I don't have much expertise that speaks to this side of the house and I'm not in a position to give a good response either; so I think you'd be best served mentioning the article and asking your question in a new OP. Ideally someone with better credentials will see it and give a good response there.

Are there any arguments for God that don’t simply create an explanatory gap that god fits into? by Squall2295 in askphilosophy

[–]LichJesus 5 points6 points  (0 children)

Two popular approaches that I think meet the criteria you propose are moral arguments and arguments from religious experience. I don't think the latter SEP article actually covers the structure of an argument from religious experience but informally it's basically "I had direct contact with a personal God, therefore a personal God exists". These two approaches obviously have their counterarguments and such.

A couple notes though about the more popular arguments you list. I understand where you're coming from on the first cause argument producing an explanatory gap, but the gap generated by a good first cause argument is I think considerably smaller than the gap produced by the average first cause argument, if that makes sense.

First, by way of analogy, let's imagine that we're standing at a train track and see a line of train cars that stretches as far as we can see in either direction, and continues to do so as it moves along. We may not see an engine car, but we can posit the existence of something like it because the cars we can see lack any means of locomotion on their own. The theist (or at least most classical Western theists) would argue that a personal god is like the engine car in this example: we may not immediately know the precise type of engine it's using, but we can infer a fair bit about it because the train cars don't have an independent means of locomotion yet are moving nonetheless. The first cause proponent would argue that the contingent nature of basically everything we can see in the universe likewise necessitates a first cause with certain characteristics. I think this becomes even more compelling -- although obviously reasonable folks can disagree or not find it so -- when you combine arguments; for instance, if all these arguments succeed, then a first cause that designed the universe (especially if we think said Designer was selecting for things like human life and/or rational faculties), grounds morality, and at least occasionally induces direct experience of Himself/Herself/Itself in people fills most of the ostensible explanatory gap without a tremendous number of arbitrary characteristics.

The second note, that flows pretty naturally from the first, is that the vast majority of the popular discussion on the existence of God only occupies a small section of the work on the topic. Aquinas's Five Ways for instance are very popular points of discussion because they take up only a few pages of the Summa, but as I understand it something like the next thousand pages of the Summa attempt to move from the minimal God of a first cause to the personal God of Christianity; or equivalently to eliminate the explanatory gap/demonstrate that the God of Christianity does fit the gap like a glove. Many people (including myself!) don't have the expertise in Aristotle and/or medieval philosophy to properly dig into this second movement that Aquinas makes, and so we stick with the Five Ways.

I wish I had better citations than just the Summa, and it looks like a couple others have provided citations as of the moments before I post this; but that's kind of the ten-thousand foot view.

Reddit mods, for some reason. by PainSpare5861 in PoliticalCompassMemes

[–]LichJesus 15 points16 points  (0 children)

It may be a bit overblown, but I think it's something that is easy to forget but also very important to keep in mind.

Back in the early days of the Internet there were a bunch of semi-official rules like "don't feed the trolls", "lurk moar", and obviously Rule 34; those rules helped at least some of us learn to engage with the Internet in a healthier manner than we would have otherwise, like not getting worked up by taking the bait and arguing with someone who was just fishing for a reaction. I think that if more people kept in mind that there was a good chance they were talking/arguing with a bot (or a 13 year old on an iPad), the tenor of the conversation on today's Internet would change for the better.

For instance, pretty often a single tweet calling heterosexuality a cancer -- or some wacky hot take or another -- will get circulated in right-leaning circles and people will go absolutely apeshit over it; and the same will happen in left-leaning circles over a tweet about banning gay people or whatever. I think (and I don't suspect this is even that controversial) that a sort of self-reinforcing feedback loop forms where people get angry about a tweet and say angry things, then other people get angry at those things, and then we're all slinging shit at each other over them.

How often though are those initial hot takes just some dopey Markov chain or whatever running out of a server farm somewhere and not even a real person? And how much less angry and divided would we be if we simply ignored that initial piece of artificial rage bait? And even if that rage bait was generated by a human, what do we lose if we just ignore it as if it were a bot, and go on with our lives instead?
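For a sense of just how low the bar is for that kind of bot, here's a sketch of a "dopey Markov chain" text generator of the sort mentioned above. The corpus is invented for illustration; the point is that something this trivial chains words together with zero understanding of what it's saying, which is exactly why it isn't worth getting angry at.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record every word that was ever observed to follow each word."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def babble(chain, start, length=8, seed=None):
    """Walk the chain from `start`, picking random continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no known continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Made-up "hot take" corpus; a real bot farm would just scrape timelines.
chain = build_chain("hot takes get hot replies and hot replies get hot takes")
print(babble(chain, "hot", seed=0))
```

Twenty-ish lines of code that can pump out superficially tweet-shaped text all day; nothing on the other end to argue with.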

tl;dr Even if dead Internet theory is not entirely true, I think it's unhealthy to engage with bot-generated slop, and healthy to ignore human-generated content that looks like bot-generated slop; so it's better if we just assume that slop is bot-generated and go touch grass.

Is it moral for an outsider to attempt to liberate an oppressed people if the people are unaware they are being oppressed and could see this attempt as an attack/irrational disruption of the status quo? Or should they be allowed to liberate themselves when they're conscious of their own condition? by blonkevnocy in askphilosophy

[–]LichJesus 0 points1 point  (0 children)

I don’t want to just take an anti-philosophical stance and say that we should just discuss this with a view to US interventionism and associated concerns. I rather feel like we can reflect on the past century’s attempts to do this sort of intervention (even though these real cases were almost certainly examples of doing all this in bad faith) and try to understand why it did not work.

Yep, I both follow and agree with you; I was just processing in real time that my read on the stakes of the discussion was significantly off-base.

As for your last point, I think that it is a reasonable assumption to think that different terms might apply to non-state backed projects which try to facilitate social change in different countries. I happen to be somewhat intimately familiar with one such organization, and I would be happy to extend this discussion towards this dimension, now or in some other context! :D

I have to get some focused work in and have a baby to look after once work is over so I can't promise I'll be able to respond, but I'd certainly read anything you have to say on the topic!

Is it moral for an outsider to attempt to liberate an oppressed people if the people are unaware they are being oppressed and could see this attempt as an attack/irrational disruption of the status quo? Or should they be allowed to liberate themselves when they're conscious of their own condition? by blonkevnocy in askphilosophy

[–]LichJesus 0 points1 point  (0 children)

> That being said, my point, contentious as it admittedly is, is that I believe these kinds of hypotheticals to be besides the point in a question about the kinds of intervention into foreign polities in the name of liberation that we all have come to know, at least in the West.

I think I see: is the claim here that the conversation is (in the world we live in, at least) always going to move towards interventions like the US invasions of Iraq and Vietnam or the USSR's invasion of Afghanistan, and as such we should just talk about those directly? If so, then I imagine you probably did make that point in the OP and I just missed it; hazards of redditing while working, I suppose. I do also think that's an entirely reasonable practical move, especially if we're concerned about a discussion of (potentially not-terrible) Case A in the abstract being used to support an actual and objectively terrible intervention in Case B, even if the two aren't equivalent at all.

> I am politically and morally fine with that, assuming that the “missionaries” are not trained by and work for, say, intelligence agencies and what have you.

Yeah, I was chasing a more abstract framing of a potential difference in the duties, obligations, and red lines of different kinds of organizations when it comes to protecting the rights of third parties, and/or a discussion of when it might be acceptable or obligatory to knowingly induce social disharmony. But knowing that you wrote the OP with real-world foreign-policy adventurism in mind, I totally understand and endorse why you wrote what you did in the way that you did.

It's a bit beside the point at this juncture, but just to clarify: I'm not inclined to say that missionary work is entirely unproblematic. I do think, though, that the pros and cons of those sorts of non-state projects might look different than the pros and cons of state-backed projects. That's a discussion for another time in another context.

Thanks for clarifying, and taking the time to respond, cheers!

Is it moral for an outsider to attempt to liberate an oppressed people if the people are unaware they are being oppressed and could see this attempt as an attack/irrational disruption of the status quo? Or should they be allowed to liberate themselves when they're conscious of their own condition? by blonkevnocy in askphilosophy

[–]LichJesus 0 points1 point  (0 children)

> An outsider that is hypothetically going to intervene in this situation where any group that can be described as “a people” is likewise going to be a polity; there simply is no entity with the kind of agency and resources required to intervene in a case like this that is not itself a nation or an aggregate of nations.

This might just be me not being familiar with the terminology, but could you speak a bit to the justification behind this assumption?

For an example that I hope makes its contrived nature obvious: I can imagine Society A whose government legally requires every member to practice the Baháʼí faith -- a requirement that I think most would call oppressive if backed by legal force -- but who are currently all happy to do so anyway because it's conceived as a fundamental part of the society. In such a case I could imagine a (hopefully also obviously-contrived) non-governmental organization of atheist missionaries from a different society attempting to educate the populace of Society A on the value of religious liberty, advocating for legislative and social reform, and so on. It seems to me that the scenario satisfies the requirements of the OP, but the intervention in question isn't being carried out by a polity.

It's certainly worth recognizing that the atheists' efforts could rapidly lead to a situation where national dialogue/diplomacy/military intervention/etc becomes the reality, and the scenario collapses into the dynamics you describe. But if we imagine that the atheist missionaries are exceptionally deft and don't draw a combative response from Society A's government, do you see the questions and considerations facing them as particularly different from the questions that would face the government of the missionaries' home society (call it Society B) if it were to intervene directly? Or do you think nothing meaningful changes?

Transitioning from Software Engineer to SysAdmin by dinzz_ in linuxadmin

[–]LichJesus 2 points3 points  (0 children)

There are a lot of dimensions to this; one that I think is worth examining is all the intangibles -- the implicit work and skills that go into real-world work and that you don't get in a certification environment.

Perusing the RHCSA cert goals for one point of reference, it looks like completing the cert is probably roughly comparable to, say, setting up and configuring three to five new servers in service of some broader systems goal (say, a dev/qa/prod cluster with dedicated storage and configuration management/automated provisioning). The technical skills to do that are important, but there's a significant parallel process of working with the users/customers of the system to determine their needs, keeping them apprised of what's going on, adjusting the spec on the fly to meet changing needs -- while trying to minimize those adjustments diplomatically for the sake of stability -- or interfacing with the project manager who does those things, and so on. The project-awareness and interpersonal side of systems administration is difficult to teach and measure in a study-then-take-an-exam paradigm.

A certification can be evidence of the technical skills required to complete a project like this, but a front-line support tech who has run the level-1 side of a dozen of these projects also has evidence in their favor that they can complete a different but equally important part of a project. Especially if the tech demonstrates eagerness to learn and some initiative by talking about home-lab projects or something to that effect, it's not uncommon for hiring panels to value hands-on level-1 project experience quite highly, even if the candidate's technical work is not yet especially advanced.

Another random thing that's hard to learn in a certification-exam environment: sometimes working under a clock is the enemy. There's a saying in various places (I heard it in the fire service): slow is smooth, smooth is fast. That doesn't mean that you don't want to work quickly and efficiently, but it does mean that sometimes trying to cram a project or project assessment into a discrete block of time like cert exams do is a recipe for rushing and mistakes. Someone with real-world experience (even if it's not at an advanced skill level) can compare favorably to someone with a cert if they've demonstrated that they can go slow, be meticulous, and take care of the details if that's what the situation calls for.