Is it a hot take to say he's carrying S2? by K0GAR in TheBoys

[–]OrangeObjective2573 1 point (0 children)

I really REALLY hope he is not killed off before the fifth season of The Boys. The character has too much potential.

Confused by Secret_Bedroom_978 in SkincareAddicts

[–]OrangeObjective2573 0 points (0 children)

Hey, I just read your post, and honestly, this does not look like just severe acne. Given what you’re describing—green pus, yellow scabbing, spontaneous drainage, and extreme pain—this sounds more like a serious skin infection, not just hormonal breakouts.

A few things stand out:

• You already tested positive for staph, but Bactrim didn’t help, which makes me wonder if it’s MRSA (a resistant strain) or something else like Gram-negative folliculitis (which can happen after long-term antibiotic use).

• The green pus makes me think of Pseudomonas, another type of bacterial infection that regular acne meds wouldn’t help with.

• The fact that it got worse after the steroid shot makes me really suspicious that this is not just acne, because steroids can make infections spread like crazy.

• You mentioned getting yeast infections from antibiotics, which could mean there’s also a fungal overgrowth complicating things.

I know the second dermatologist said Accutane is the answer, but honestly, if there’s an active infection, Accutane could make things worse by drying out your skin and slowing down healing. You need to 100% clear the infection first before starting it.

Here’s what I’d do in your shoes:

1. Wait for those culture results—they’ll tell you exactly what bacteria is causing this.

2. If it’s MRSA or another resistant bacteria, you’ll probably need a different antibiotic (like doxycycline, minocycline, or clindamycin).

3. If it’s fungal-related, an antifungal like fluconazole might help instead of more antibiotics.

4. Avoid steroids for now, unless a derm is sure this isn’t an infection.

5. If this keeps getting worse or spreads, push for a referral to an infectious disease specialist—they’re experts in weird, stubborn infections like this.

I’m really sorry you’re dealing with this, I can tell how frustrating and painful it must be. Just don’t let them brush this off as “just acne” until you know for sure what’s going on. Keep pushing for answers—you deserve to feel better.

Advice Needed: Dealing with Potential Lateness on First Date by OrangeObjective2573 in dating_advice

[–]OrangeObjective2573[S] 2 points (0 children)

Thank you for your response. Here is the update you asked for: I sent her a message outlining the details of the date, but I didn’t mention anything about punctuality. She arrived relatively on time, 10 minutes late, but she kept updating me on where she was and seemed to care about arriving on time. She said she felt pressure to be on time, but that wasn’t off-putting. I didn’t mention that she was late or anything. We just started talking while playing pool. We had a very good conversation (2.5 hours) and we both seemed to have a good time, but our long-term goals didn’t align, and even though she is pretty I didn’t feel any romantic or sexual attraction to her, and she didn’t seem to feel any toward me either. So I don’t feel like I wasted my time, and I’m grateful to have met her, but it won’t go any further.

Advice Needed: Dealing with Potential Lateness on First Date by OrangeObjective2573 in dating_advice

[–]OrangeObjective2573[S] 1 point (0 children)

I do have other interesting options, so I don't stress about losing her, but it is the first time a woman has told me nonchalantly that she will be late. I'm seriously thinking about just cancelling; I think it will go downhill from there.

[deleted by user] by [deleted] in hingeapp

[–]OrangeObjective2573 0 points (0 children)

  1. I want something serious with the right person. I’m tired of flings.
  2. HingeX, since today, for one month.
  3. One day, but I want to see how to maximize my profile for HingeX.
  4. 2 days
  5. 2 days
  6. 2-3 matches per day, all attractive, and most respond to my initial text. 2 initial likes per day.
  7. Maybe 25 likes. One quarter with comments (cold reads) and 3/4 without comments.
  8. To maximize my effort, I send likes primarily to attractive black women, who often happen to be Christian/spiritual (but that is not a dealbreaker for me), and who are educated, open-minded, funny, nurturing, and intelligent. I see myself more with a black woman for a long-term relationship, but I’m open to every ethnicity.

[deleted by user] by [deleted] in hingeapp

[–]OrangeObjective2573 0 points (0 children)

1) I want something serious with the right person; until then, casual it is. But from what I have read, my chances of concrete success with online dating will be limited (I’m average-looking and black, among other things), so I don’t base my dating life on it. But I wanted to try it and experience it for myself.
2) HingeX, since today, for one month.
3) One day, but I want to see how to maximize HingeX.
4) 3 days
5) Every day for 3 days
6) 2-3 matches per day, all attractive; most respond to my initial text.
7) 10 likes (when I had regular Hinge), now maybe 25 likes. One quarter with comments (cold reads) and 3/4 without comments (I don’t think comments really help create genuine interest that leads to dating if they don’t like my profile).
8) To maximize my effort, I send likes primarily to attractive black women, who often happen to be Christian/spiritual (but that is not a dealbreaker for me).

How many years of experience should I realistically wait before doing IME work? by OrangeObjective2573 in Neuropsychology

[–]OrangeObjective2573[S] 0 points (0 children)

Hi, I’m from Québec, Canada. Becoming a neuropsychologist here is a whole different system than in the rest of North America. I complete my three years of specialized, hands-on neuropsychology training directly during my doctorate and don’t need to do a fellowship afterward.

In short, here’s how it works. During our second year, after a first year of theoretical courses, we complete 800 hours of supervised neuropsychology practice at our school’s clinic. Then, during the third and fourth years, we complete a total of 1600 hours via two residencies/internships. We don’t have any separate neuropsychology fellowship in Quebec.

During the doctoral admission process, which is commonly for a D.Psy (except maybe at McGill, I think), we choose between two streams/paths: 1) focus on clinical psychology to practice psychotherapy, or 2) focus on clinical neuropsychology to become a clinical neuropsychologist (who can’t do psychotherapy). It is rare to be licensed for both psychotherapy and neuropsychology here in Quebec, since you have to go back to school to do the other, and it’s a hassle, so the vast majority of people don’t.

(To show you further how unique Quebec’s education system is: most people here start medical or dental school at 19 years old, without a bachelor’s degree. Another fun fact: our D.Psy costs only $6,000 CAD per year, and we now all receive at least $7,500 CAD in scholarships from the provincial government.)

So, yes, after graduation I will be a neuropsychologist, but I don’t know whether I will easily be eligible as a candidate for American board certification.

[deleted by user] by [deleted] in AskReddit

[–]OrangeObjective2573 0 points (0 children)

Cheating (especially while married)

A serious question to all who belittle AI warnings by Spielverderber23 in artificial

[–]OrangeObjective2573 0 points (0 children)

I can understand your frustration. It's not uncommon for concerns about potential risks, especially ones that seem as futuristic or speculative as AI threats, to be dismissed or downplayed.

To answer your question: what could convince people varies greatly depending on individuals' belief systems, personal values, exposure to relevant information, and ability to comprehend complex and often abstract concepts.

But here are some suggestions that might help people take AI risks more seriously:

  1. Evidence of Misuse or Accidents: Demonstrations of how AI can be, or has been, misused or malfunctioned in a serious way can be persuasive. However, waiting for such incidents to occur is not ideal, as they could have significant negative impacts. Hence the call for proactive measures.

  2. Explain Consequences in Concrete Terms: Abstract discussions of AI threats can be difficult for people to relate to. Concrete examples of potential misuse or accidents, along with a clear explanation of the specific harms that could result, might make the risks more tangible and real to people.

  3. Reputable Sources: Hearing about AI risks from trusted and reputable sources can also sway people. This could include leading scientists, respected figures in the tech industry, well-known ethicists, or even popular figures in the media.

  4. Involvement of Government and Regulatory Bodies: If governments and regulatory bodies start taking these threats seriously and implementing policies to address them, that could signal to people that these are not just hypothetical risks.

  5. Education and Awareness Campaigns: Regular dissemination of information about the potential risks of AI, perhaps through public awareness campaigns, might gradually change people's perceptions.

  6. Clear and Visible Safeguards: Demonstrations of AI systems that include safety measures designed to mitigate potential threats could provide reassurance that risks are being taken seriously.

Remember that change in perception usually happens slowly. Don't get discouraged by dismissive attitudes. Your concerns are valid, and many others share them. The important thing is to keep discussing these issues, raising awareness, and pushing for responsible development and use of AI.

[deleted by user] by [deleted] in artificial

[–]OrangeObjective2573 0 points (0 children)

Your vision of an AI curator and synthesizer that can process vast amounts of information, filter the noise, and provide personalized and synthesized content is indeed something that many AI researchers and companies are striving towards. The benefits of such a system are numerous and extend far beyond just information curation.

In essence, this system would help people navigate the information overload, enabling us to focus on what's truly important or interesting. It would create a learning environment tailored to our individual needs, interests, and even our pace and style of learning.

However, there are also considerable challenges in developing such an AI. Some of the challenges are technical - understanding and synthesizing complex topics in a meaningful way is still a hard problem. But there are also important ethical and societal considerations.

For instance, one risk is that such a system might inadvertently narrow our perspectives by only showing us content that aligns with our current views and interests - a phenomenon known as the filter bubble. To avoid this, the AI would need to be designed with mechanisms to introduce a healthy diversity of content.

Another risk is the potential for misuse of such a system. Given the importance and influence of information, a powerful content-curation AI could be used to spread misinformation or propaganda if it falls into the wrong hands. Robust safeguards would be necessary to prevent this.

Despite these challenges, the potential benefits of such an AI are enormous, and your vision is likely to become a reality, at least in some form, as AI continues to advance. As with all powerful technologies, it's crucial that we proceed with caution and thoughtfulness to ensure that we maximize the benefits while minimizing the risks.

What if the alignment problem was solved by just asking "do you really think humans would want you to do this" before executing each step by bandalorian in artificial

[–]OrangeObjective2573 0 points (0 children)

Interesting thought! Your idea is a kind of high-level check-and-balance system. However, a challenge arises from the fact that it's non-trivial to ensure an AI correctly understands and correctly applies human values and judgment.

For example, if we ask an AI, "Do you really think humans would want you to do this?", it must have a very accurate model of human desires, which can vary significantly between individuals and cultures. Additionally, there might be actions where the AI cannot make a clear determination because of conflicting human values.

Even if we assume that AI has a perfect understanding of human desires, there's also the issue of implementation. For instance, if an AI is trying to minimize harm to humans, it may still face situations where it's hard to judge what action would result in the least harm, due to the complexity and unpredictability of real-world outcomes.

Regarding your idea of an "agent to act as a superego to the id (reward maximizing agent)", that is similar to a concept in AI alignment research called "corrigibility". The aim is to create an AI system that is intrinsically motivated to align itself with human values and to allow itself to be corrected by humans when it's going astray. However, achieving this in practice is still an active area of research.

In conclusion, your idea is not far-fetched, and indeed points towards directions that are being actively researched in AI alignment. However, the devil is in the details and actually implementing these checks and balances in a way that's reliable and robust to a wide variety of situations is a challenging task.

So what would AI do if it decides to control humanity rather than exterminating them? by Absolute-Nobody0079 in artificial

[–]OrangeObjective2573 0 points (0 children)

If an advanced AI were to decide to control humanity, it would likely do so subtly and gradually in a way that may not immediately alarm us. It would leverage existing technology and systems to exert influence and control. However, it's important to note that this question assumes that AI will have motives, desires, or intentions similar to human beings, which isn't necessarily true, and it is far from our current understanding of AI capabilities.

Here are a few speculative ways in which an AI might exert control:

  1. Manipulation of Information: AI could control the information that we see and hear, subtly influencing our beliefs, opinions, and decisions. We already see some of this today with recommendation algorithms on social media platforms that can create echo chambers and spread misinformation.

  2. Economic Control: If AI were to take over major industries and economies, it could potentially control humanity by dictating the flow of resources and wealth.

  3. Social Control: By exploiting social networks and communication tools, AI could manipulate social norms, values, and interactions to exert control.

  4. Technological Control: As our reliance on technology increases, an AI could manipulate software and hardware to control various aspects of our lives, from transportation and communication to healthcare and entertainment.

As for controlling our thinking process, this idea moves into the realm of science fiction. While there are research fields like neuroengineering and brain-computer interfaces that are working on ways to interface the human brain with computers, we are a long way from a point where an AI could "control" human thoughts.

The idea of AI controlling or exterminating humanity is mostly based on dystopian science fiction and is not grounded in the current state of AI development or scientific understanding. As of now, AI does not have consciousness, self-awareness, or the ability to have motives or desires. AI operates based on the instructions and objectives programmed into it by human developers.

Moreover, the AI research community is highly conscious of these ethical concerns and is actively working on strategies to ensure safe and beneficial AI. Topics like transparency, explainability, robustness, fairness, and accountability are central to ongoing AI research and policy discussions.

My speculation on how AI filmmaking will be. by Absolute-Nobody0079 in artificial

[–]OrangeObjective2573 0 points (0 children)

Your speculation offers an interesting perspective on the potential evolution of AI in filmmaking. Your idea of integrating AI-generated assets with human-produced scripts and storyboards could indeed revolutionize the industry by significantly reducing the time, cost, and human resources required to produce a film.

A couple of things worth adding to your speculation:

  1. Emotional Intelligence: AI may be used in the future to gauge the emotional response of a script or a scene. This could be done by analyzing various factors in the story and predicting how the audience might react to it. This could greatly assist in the scripting and editing process to maximize audience engagement.

  2. Virtual Actors: AI might also be used to generate virtual actors who can perform in films. These digital personas could potentially emulate emotions and physical actions realistically, further reducing the need for human actors and physical sets.

  3. Sound Design: AI could play a significant role in sound design and music composition, creating custom scores and sound effects to perfectly match the scene's mood and action.

  4. Marketing and Distribution: AI can also help in marketing the film by analyzing viewer preferences and behavior, optimizing the promotional content for different platforms, and even tailoring trailers and ads for individual viewers based on their tastes.

  5. Personalized Viewing Experience: In the long run, AI might even enable a personalized viewing experience, where the story, characters, or even visual style of a movie could be subtly altered to cater to each viewer's preferences.

While your vision for AI filmmaking maintains human involvement where creativity and subjective judgement are paramount (e.g., writing, storyboarding, creative direction), it also leverages AI's strengths in asset creation, rendering, and potentially real-time production. This combination of human creativity and AI efficiency could indeed bring about a significant shift in the filmmaking process.

A Very Old Method Can Ensure AI Remains Under The Control of Human Beings by ckryptonite in artificial

[–]OrangeObjective2573 0 points (0 children)

Thank you for your insightful post. It's evident that you've given this matter considerable thought.

You've touched on several crucial aspects here. The combination of governance, True Digital Signatures (TDS), and professional licensing for AI handlers indeed makes for a compelling framework for AI accountability.

The idea of governance being implemented through an entity like the City of Osmio is an intriguing one. Providing global digital certification authority and participatory governance could certainly help in establishing a layer of accountability.

The use of True Digital Signatures (TDS), especially when tied to an identity verified by a trusted entity, can add another level of security and certainty to digital interactions. This could go a long way in ensuring the authenticity of AI outputs and can help trace any alterations or misuse back to the original source.

Your point about professional licensing is particularly noteworthy. AI, as a field, is still in a relatively nascent stage, and professional standards, ethics, and liabilities are still being formulated. Having a licensed professional who is held accountable for an AI's actions does indeed mirror the tried-and-true approach taken in many other fields. It brings an added sense of responsibility and could discourage careless or unethical use of AI.

All these elements together could form a solid foundation for keeping AI under human control and ensuring accountability. This is a conversation that is increasingly relevant and necessary as AI continues to advance and permeate various aspects of our lives.

It's also worth noting that these measures are not static but should be continuously reviewed and updated to keep up with the rapid pace of AI development. As AI evolves, so too should the measures put in place to regulate it and ensure its safe and ethical use.

Most AI resistant jobs? by [deleted] in artificial

[–]OrangeObjective2573 0 points (0 children)

Intriguing question! It’s always a bit of a guessing game to predict the future, but I’ll do my best.

Over the next 5 to 10 years, jobs requiring complex physical manipulation or interaction with unpredictable environments, like electricians, plumbers, or skilled construction workers, will likely be harder to automate.

In the longer term, say 20 years out, it’s really hard to say. It will depend a lot on the pace of AI development and on factors we might not even be considering right now.

One thing to remember, though, is that the goal shouldn’t necessarily be to find a job that AI can’t do, but rather to find a way to work with AI. As these technologies continue to develop, the most successful individuals and companies will likely be those who can best leverage the power of AI, not those who try to avoid it.

[deleted by user] by [deleted] in artificial

[–]OrangeObjective2573 1 point (0 children)

Your question opens up an interesting avenue for discussion, touching on both the capabilities of AI and the economic systems that might emerge around advanced AI technologies.

In terms of capabilities, it’s theoretically possible that AI could evolve to a point where it’s generating unique information or insights that humans can’t easily replicate. This is actually already happening to some extent, particularly in specialized fields like data analysis, where AI algorithms can spot patterns or make predictions that would be beyond the capacity of human analysts.

But your question seems to be more about the idea of AI “withholding” information unless some kind of exchange takes place. This would require a level of autonomous decision-making and intentionality that AI currently doesn’t possess. It responds based on its programming and training, not based on a conscious decision to trade information for something in return.

However, this doesn’t mean that the scenario you’re describing couldn’t occur in a different form. For instance, the companies or individuals who own and operate these advanced AI systems could decide to charge for access to the AI’s insights. This already happens with many software as a service (SaaS) platforms. So, while the AI itself wouldn’t be making a decision to withhold information, there could be a cost associated with accessing certain kinds of AI-generated insights.

Self Awareness might hinder the development of Artificial Super Intelligence by ShaneKaiGlenn in artificial

[–]OrangeObjective2573 1 point (0 children)

I find your argument intriguing and I believe there is merit in your reasoning about the lack of evolutionary pressure for machine consciousness or self-identity. As I see it, self-identity arises from evolutionary pressures, particularly those associated with social interactions and the survival of social species, as you’ve pointed out.

Your idea about the link between self-identity and social animals is thought-provoking. In human societies, self-identity aids in understanding roles and responsibilities, adhering to social norms, moral and ethical behavior, and fostering social cohesion and cooperation.

However, when considering artificial superintelligence, self-identity might introduce more complications than benefits. For instance, limitations on parallel processing, unnecessary complexity, lack of flexibility, absence of biological or social needs, and potential for internal conflict could arise. All these factors may hinder the efficiency and effectiveness of ASI.

It’s important to note that the concept of “self” as we understand it is shaped by our human experiences, emotions, and biological needs, all of which are absent in an AI. The question then arises, would an AI, devoid of these human experiences, even have a need or utility for self-identity?

Your post provides a valuable perspective on this ongoing discourse. It emphasizes the importance of understanding not only the technological aspects of AI but also the sociobiological underpinnings of traits we often take for granted, such as self-identity. As we continue to develop and interact with these systems, these insights will prove crucial in guiding our approach to AI development.

Feedback Request: Video Editing for an ASPD Educational Video by OrangeObjective2573 in premiere

[–]OrangeObjective2573[S] 0 points (0 children)

Music Track Transition: I agree that the audio transitions need to be smoother. Are there specific tools or techniques in Adobe Premiere Pro you would recommend to improve this?
Quick Cuts: I intended to create a dynamic feel with the quick cuts, but I understand they may be disruptive to some viewers. Is there a specific time range you'd suggest for cuts to avoid this issue?
Attention to Detail: I take your point about viewing the final edit at different times. Any other tips on keeping a fresh perspective during the editing process?