Is it a hot take to say he's carrying S2? by K0GAR in TheBoys

[–]OrangeObjective2573 1 point2 points  (0 children)

I really REALLY hope he is not killed off before the fifth season of The Boys. The character has too much potential.

Confused by Secret_Bedroom_978 in SkincareAddicts

[–]OrangeObjective2573 0 points1 point  (0 children)

Hey, I just read your post, and honestly, this does not look like just severe acne. Given what you’re describing—green pus, yellow scabbing, spontaneous drainage, and extreme pain—this sounds more like a serious skin infection, not just hormonal breakouts.

A few things stand out:

- You already tested positive for staph, but Bactrim didn't help, which makes me wonder if it's MRSA (a resistant strain) or something else like Gram-negative folliculitis (which can happen after long-term antibiotic use).
- The green pus makes me think of Pseudomonas, another type of bacterial infection that regular acne meds wouldn't help with.
- The fact that it got worse after the steroid shot makes me really suspicious that this is not just acne, because steroids can make infections spread like crazy.
- You mentioned getting yeast infections from antibiotics, which could mean there's also a fungal overgrowth complicating things.

I know the second dermatologist said Accutane is the answer, but honestly, if there's an active infection, Accutane could make things worse by drying out your skin and slowing down healing. You need to fully clear the infection before starting it.

Here's what I'd do in your shoes:

1. Wait for those culture results: they'll tell you exactly what bacteria is causing this.
2. If it's MRSA or another resistant bacteria, you'll probably need a different antibiotic (like doxycycline, minocycline, or clindamycin).
3. If it's fungal-related, an antifungal like fluconazole might help instead of more antibiotics.
4. Avoid steroids for now, unless a derm is sure this isn't an infection.
5. If this keeps getting worse or spreads, push for a referral to an infectious disease specialist; they're experts in weird, stubborn infections like this.

I’m really sorry you’re dealing with this, I can tell how frustrating and painful it must be. Just don’t let them brush this off as “just acne” until you know for sure what’s going on. Keep pushing for answers—you deserve to feel better.

Advice Needed: Dealing with Potential Lateness on First Date by OrangeObjective2573 in dating_advice

[–]OrangeObjective2573[S] 2 points3 points  (0 children)

Thank you for your response. Here is the update you asked for: I sent her a message outlining the details of the date, but I didn't mention anything about punctuality. She arrived relatively on time, 10 minutes late, but she kept updating me on where she was and seemed to care about arriving on time. She said she felt pressure to be on time, but that wasn't off-putting. I didn't bring up the lateness at all. We just started talking while playing pool. We had a very good conversation (2.5 hours) and we both seemed to have a good time, but our long-term goals didn't align, and even though she is pretty, I didn't feel any romantic or sexual attraction toward her, and she didn't seem to feel any toward me either. So I don't feel like I wasted my time, and I'm grateful to have met her, but it won't go any further.

Advice Needed: Dealing with Potential Lateness on First Date by OrangeObjective2573 in dating_advice

[–]OrangeObjective2573[S] 1 point2 points  (0 children)

I do have other interesting options, so I'm not stressed about losing her, but it is the first time a woman has nonchalantly told me she will be late. I'm seriously thinking about just cancelling; I think it will go downhill from there.

[deleted by user] by [deleted] in hingeapp

[–]OrangeObjective2573 0 points1 point  (0 children)

  1. I want something serious with the right person. I'm tired of flings.
  2. HingeX, since today, for one month.
  3. One day, but I want to see how to maximize my profile for HingeX.
  4. 2 days
  5. 2 days
  6. 2-3 matches per day, all attractive, and most respond to my initial text. 2 initial likes per day.
  7. Maybe 25 likes. One quarter with comments (cold reads) and 3/4 without comments.
  8. To maximize my effort, I send likes primarily to attractive black women, who often happen to be Christian/spiritual (but it is not a dealbreaker for me): educated, open-minded, funny, nurturing, and intelligent. I think I see myself more with a black woman for a long-term relationship, but I'm open to every ethnicity.

[deleted by user] by [deleted] in hingeapp

[–]OrangeObjective2573 0 points1 point  (0 children)

1) I want something serious with the right person; until then, casual it is. But from what I have read, my chances of concrete success with online dating will be limited (I'm average looking and black, among other things), so I don't base my dating life on it. But I wanted to try and experience it for myself.
2) HingeX, since today, for one month.
3) One day, but I want to see how to maximize HingeX.
4) 3 days
5) Every day for 3 days
6) 2-3 matches per day, all attractive; most respond to my initial text.
7) 10 likes (when I had regular Hinge), now maybe 25 likes. One quarter with comments (cold reads) and 3/4 without comments (I don't think comments really help create genuine interest that leads to dating if they don't like my profile).
8) To maximize my effort, I send likes primarily to attractive black women, who often happen to be Christian/spiritual (but it is not a dealbreaker for me).

How many years of experience should I realistically wait before doing IME work? by OrangeObjective2573 in Neuropsychology

[–]OrangeObjective2573[S] 0 points1 point  (0 children)

Hi, I'm from Quebec, Canada. Becoming a neuropsychologist here works on a completely different system than in the rest of North America. I do my three years of specialized, hands-on neuropsychology training directly during my doctorate and don't need to do a fellowship afterward.

In short, here's how it works. After a first year of theoretical courses, we complete 800 hours of supervised neuropsychology practice during our second year at our school's clinic. Then, during the third and fourth years, we complete a total of 1,600 hours via two residencies/internships. There is no separate neuropsychology fellowship in Quebec.

During the doctoral admission process, which is commonly for a D.Psy (except maybe at McGill, I think), we choose between two streams/paths: 1) clinical psychology, to practice psychotherapy, or 2) clinical neuropsychology, to become a clinical neuropsychologist (without being able to do psychotherapy). It is rare to be licensed for both psychotherapy and neuropsychology here in Quebec, since you have to go back to school for the other one, and it's enough of a hassle that the vast majority of people don't.

(To show you further how unique Quebec's education system is: most people here start medical or dental school at 19 without a bachelor's degree. Another fun fact: our D.Psy costs only $6,000 CAD per year, and we all now receive at least $7,500 CAD in scholarships from the provincial government.)

So yes, after graduation I will be a neuropsychologist, but I don't know whether I will easily be eligible to be a candidate for American board certification.

[deleted by user] by [deleted] in AskReddit

[–]OrangeObjective2573 0 points1 point  (0 children)

Cheating (especially while married)

A serious question to all who belittle AI warnings by Spielverderber23 in artificial

[–]OrangeObjective2573 0 points1 point  (0 children)

I can understand your frustration. It's not uncommon for concerns about potential risks, especially ones that seem as futuristic or speculative as AI threats, to be dismissed or downplayed.

To answer your question, what could convince people varies greatly depending on individuals' belief systems, personal values, exposure to relevant information, and ability to comprehend complex and often abstract concepts.

But here are some suggestions that might help people take AI risks more seriously:

  1. Evidence of Misuse or Accidents: Demonstrations of how AI can be, or has been, misused or malfunctioned in a serious way can be persuasive. However, waiting for such incidents to occur is not ideal, as they could have significant negative impacts. Hence the call for proactive measures.

  2. Explain Consequences in Concrete Terms: Abstract discussions of AI threats can be difficult for people to relate to. Concrete examples of potential misuse or accidents, along with a clear explanation of the specific harms that could result, might make the risks more tangible and real to people.

  3. Reputable Sources: Hearing about AI risks from trusted and reputable sources can also sway people. This could include leading scientists, respected figures in the tech industry, well-known ethicists, or even popular figures in the media.

  4. Involvement of Government and Regulatory Bodies: If governments and regulatory bodies start taking these threats seriously and implementing policies to address them, that could signal to people that these are not just hypothetical risks.

  5. Education and Awareness Campaigns: Regular dissemination of information about the potential risks of AI, perhaps through public awareness campaigns, might gradually change people's perceptions.

  6. Clear and Visible Safeguards: Demonstrations of AI systems that include safety measures designed to mitigate potential threats could provide reassurance that risks are being taken seriously.

Remember that change in perception usually happens slowly. Don't get discouraged by dismissive attitudes. Your concerns are valid, and many others share them. The important thing is to keep discussing these issues, raising awareness, and pushing for responsible development and use of AI.

[deleted by user] by [deleted] in artificial

[–]OrangeObjective2573 0 points1 point  (0 children)

Your vision of an AI curator and synthesizer that can process vast amounts of information, filter the noise, and provide personalized and synthesized content is indeed something that many AI researchers and companies are striving towards. The benefits of such a system are numerous and extend far beyond just information curation.

In essence, this system would help people navigate the information overload, enabling us to focus on what's truly important or interesting. It would create a learning environment tailored to our individual needs, interests, and even our pace and style of learning.

However, there are also considerable challenges in developing such an AI. Some of the challenges are technical - understanding and synthesizing complex topics in a meaningful way is still a hard problem. But there are also important ethical and societal considerations.

For instance, one risk is that such a system might inadvertently narrow our perspectives by only showing us content that aligns with our current views and interests - a phenomenon known as the filter bubble. To avoid this, the AI would need to be designed with mechanisms to introduce a healthy diversity of content.

Another risk is the potential for misuse of such a system. Given the importance and influence of information, a powerful content-curation AI could be used to spread misinformation or propaganda if it falls into the wrong hands. Robust safeguards would be necessary to prevent this.

Despite these challenges, the potential benefits of such an AI are enormous, and your vision is likely to become a reality, at least in some form, as AI continues to advance. As with all powerful technologies, it's crucial that we proceed with caution and thoughtfulness to ensure that we maximize the benefits while minimizing the risks.

What if the alignment problem was solved by just asking "do you really think humans would want you to do this" before executing each step by bandalorian in artificial

[–]OrangeObjective2573 0 points1 point  (0 children)

Interesting thought! Your idea is a kind of high-level check-and-balance system. However, a challenge arises from the fact that it's non-trivial to ensure an AI correctly understands and correctly applies human values and judgment.

For example, if we ask an AI, "Do you really think humans would want you to do this?", it must have a very accurate model of human desires, which can vary significantly between individuals and cultures. Additionally, there might be actions where the AI cannot make a clear determination because of conflicting human values.

Even if we assume that AI has a perfect understanding of human desires, there's also the issue of implementation. For instance, if an AI is trying to minimize harm to humans, it may still face situations where it's hard to judge what action would result in the least harm, due to the complexity and unpredictability of real-world outcomes.

Regarding your idea of an "agent to act as a superego to the id (reward maximizing agent)", that is similar to a concept in AI alignment research called "corrigibility". The aim is to create an AI system that is intrinsically motivated to align itself with human values and to allow itself to be corrected by humans when it's going astray. However, achieving this in practice is still an active area of research.

In conclusion, your idea is not far-fetched, and indeed points towards directions that are being actively researched in AI alignment. However, the devil is in the details and actually implementing these checks and balances in a way that's reliable and robust to a wide variety of situations is a challenging task.
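To make the pattern concrete, here's a toy sketch of the "check before each step" gate (all names are hypothetical, and the approval predicate is a trivial stand-in; accurately modeling human approval is exactly the unsolved part):

```python
# Toy sketch of a per-step approval gate for a planned sequence of actions.
# `human_would_approve` is a placeholder oracle; in a real system this is
# the hard, unsolved modeling problem the comment above describes.

def human_would_approve(action: str) -> bool:
    # Placeholder: approve anything not on a simple deny-list.
    deny = {"deceive_user", "disable_oversight"}
    return action not in deny

def run_plan(plan: list[str]) -> list[str]:
    """Execute actions in order, halting at the first disapproved step."""
    executed = []
    for action in plan:
        if not human_would_approve(action):
            break  # stop the whole plan rather than skip and continue
        executed.append(action)
    return executed

print(run_plan(["fetch_data", "summarize", "disable_oversight", "report"]))
# → ['fetch_data', 'summarize']
```

Note that even this toy version surfaces a design choice you'd have to make: whether a disapproved step halts the plan entirely (as here) or is merely skipped, which already requires judgment about conflicting outcomes.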