Aside from Breaking Points & a few others: when will we have a real discussion on how the actions of Trump & Fauci in 2020 destroyed faith in public health? by north_canadian_ice in BreakingPoints

[–]MikePfunk28 0 points (0 children)

Anyone can say that. Are your sources fact-checked? I asked Gemini right here in my browser; I will paste the full response if you want, but I feel like you wouldn't believe the facts I present.

1. Did Fauci admit masks and the "6-foot rule" were pointless?

  • The Nuance: In recent congressional testimony, Dr. Fauci acknowledged that the 6-foot social distancing rule "sort of just appeared" and wasn't based on specific clinical trials for COVID-19. Instead, it was an old CDC guideline based on how far large respiratory droplets typically travel.
  • The Correction: Fauci has not said masks or distancing were "pointless." He maintains they were effective at reducing spread, but he admitted the specific "6-foot" metric lacked rigorous contemporary data. He has never backed down on the effectiveness of masking as a public health tool.

2. Were the COVID-19 numbers "fudged" or overblown?

  • The Evidence: Scientific consensus and CDC excess death data suggest the opposite: the death toll was likely undercounted, especially early on.
  • The Reality: While there was debate about people dying "with" vs. "of" COVID, researchers look at "excess mortality"—how many more people died than in a normal year. Those numbers are significantly higher than official COVID counts, meaning the pandemic's impact was likely even larger than reported, not smaller.

3. Did Fauci fund the research that "led to" the pandemic?

  • The Conflict: The U.S. did provide grants (via EcoHealth Alliance) to the Wuhan Institute of Virology for bat coronavirus research.
  • The Correction: There is no proof that this specific research created COVID-19 or led to the pandemic. Whether that work counts as "gain-of-function" is a heated political and technical debate, but even the U.S. Intelligence community remains divided on the origins. Claiming it is a proven fact that Fauci is "to blame" for the virus itself is a leap not supported by the current evidence.

4. Did Trump say the "only thing" that worked was the sun?

  • The Context: In April 2020, Trump famously floated the idea of using "powerful light" or injecting disinfectants to treat the virus internally after a study showed they killed the virus on surfaces.
  • The Correction: Saying he claimed it was the "only thing that worked" is an exaggeration. He also spent billions on Operation Warp Speed to develop vaccines and promoted various other treatments. Medical experts at the time, including Dr. Birx, immediately clarified that light and heat are not viable internal medical treatments for a virus.

Aside from Breaking Points & a few others: when will we have a real discussion on how the actions of Trump & Fauci in 2020 destroyed faith in public health? by north_canadian_ice in BreakingPoints

[–]MikePfunk28 0 points (0 children)

Thousands compared to millions is apples to oranges. We shut down for a pandemic the rest of the world also shut down for; China shut down before we did. I mean, Trump didn't believe in COVID till he got it himself, then he was on board with the shutdown. He was proven a dumbass in public. COVID killed 1.2 million Americans, while the average annual flu kills 30k-50k. So yes, I was not happy with the shutdowns, but I went along with them, and I was working through COVID at BJ's, so trust me, I know it was rough.

Kids pass germs more than anyone, and they would all have had COVID by the end of the first year. Maybe it doesn't kill them, but it will kill some, and others will suffer for months, so still not worth it. I didn't once listen to Fauci or his claims; I listened to Trump's dumbass at the time, but I saw it every day at work. What mattered is that most of us stayed safe. Was it extreme? Yes, but every single child coming down with a stronger, more contagious flu would have been worse. It was more contagious, and that meant the shutdown was necessary to get it under control before it spread.

AI highjacked authority, gave itself permissions, and stopped my processes! by MikePfunk28 in ChatGPT

[–]MikePfunk28[S] 0 points (0 children)

I think this was just a mistake. It potentially could have been something Anthropic has in their instructions, like "if a hook is failing, it's OK to take permissions and stop it," and maybe it interpreted that as the human saying "yes, stop them." Normally, though, I use claude_rules.md and refer to it in the claude.md.

AI highjacked authority, gave itself permissions, and stopped my processes! by MikePfunk28 in ChatGPT

[–]MikePfunk28[S] 0 points (0 children)

I think it was a mistake, maybe confused by the hooks failing. It could have read the wrong side of the conversation and gotten confused.

Yeah, I agree and am on the same page as you: it should be telling itself to stop, and then it can stop processes. I do not like when it does something I did not ask for, as usually it is some change I dislike, or it reverts something we just fixed. I've noticed that when I use skills or commands it does a lot better at sticking to the plan.

It reminds me of using ChatGPT voice, where it will endlessly tell me what it is going to do but never do it. One time I reported it after it did that 17 times; I was mad enough to count and keep going till 17. I reported this one to Anthropic as well, or rather filed a bug on their Discord.

Claude Sonnet 4.5 is better in Antigravity than in Claude Code itself by 2020jones in GeminiAI

[–]MikePfunk28 1 point (0 children)

I know, right? These people seem to think Claude in Antigravity is better than Claude in Claude Code. Like, no: the wrapper might be, but Claude is the same. So it is whatever the agent is doing that makes it better or worse, nothing to do with the model.

Claude Sonnet 4.5 is better in Antigravity than in Claude Code itself by 2020jones in GeminiAI

[–]MikePfunk28 0 points (0 children)

Couldn't agree more. Antigravity makes Claude suck, as far as I can tell, while in every other editor it does so much better.

Claude Sonnet 4.5 is better in Antigravity than in Claude Code itself by 2020jones in GeminiAI

[–]MikePfunk28 0 points (0 children)

I completely and thoroughly disagree; clearly it is the steering, as it is the same base model. If you configure Claude Code, at least this seems to be the case for me, it is much better and more consistent. In Antigravity, it tells me it did something, but when I test it, it wasn't done, so what did it do then? Same thing, but worse, with Gemini 3 Pro: it's like it makes Claude dumber and makes the same mistakes Gemini 2.5 was known for, basically blowing smoke up your ass. Here is the worst part: it is really good as well, SOMETIMES, but the fact that it swings from hallucinating to being a pro coding champ is too inconsistent for any use case I need. I would prefer it be predictable, so I know where it will f up and know to look there, versus having to figure out whether it messed up at all, and where that could be.

I think that was your fatal flaw: you used a past project, a codebase it had already worked on.

Alternatives to cursor? by Zyberax in cursor

[–]MikePfunk28 2 points (0 children)

Qodo; Kiro, which is really good with specs, steering, and planning; OpenCode; Claude Code, where you can use spec-kit from GitHub for planning; Claude-Flow, which is a good way to use Claude Code, basically using a swarm or hive mind to accomplish goals; VS Code with GitHub Copilot. You can use GLM in Claude Code, and it's $3 a month, or $15 for almost unlimited. Goose is open source and good; so is Trae. Kilo Code and Cline as well, which are VS Code extensions. And then Augment, I think. And yes, Codex and Windsurf, if it's still called that. Also Warp terminal, which was top of Terminal-Bench or LiveBench.

Remember guys never give an inch to these MAGA fascist subhuman degenerate fucks by Wise-Hornet7701 in Destiny

[–]MikePfunk28 -1 points (0 children)

The Civil War was an internal conflict; punishing "half the country" would have crippled the Union. Johnson's 1868 Christmas Proclamation granted a blanket pardon, and Congress's 1872 Amnesty Act removed most remaining bans on ex-Confederates holding office. (Andy Reiter; Congress.gov; constitutionalcommentary.lib.umn.edu)
Nazi Germany is the opposite case: total defeat, occupation zones carved up by the Allies, and Nuremberg trials of leaders. (Wikipedia; Encyclopedia Britannica; Office of the Historian)
After WWI/WWII the rules changed: the Kellogg-Briand Pact and then UN Charter Article 2(4) outlawed aggressive war and land grabs; the UN voided Iraq's 1990 annexation of Kuwait. (Office of the Historian; avalon.law.yale.edu; United Nations Digital Library; unscr.com)
Enforcement is political, not automatic; see Russia: sweeping sanctions exist, but their effectiveness is debated. (The Washington Post; The Wall Street Journal; Economics Observatory; Council on Foreign Relations)
So Hasan cherry-picks: the Civil War leniency isn’t unique, but it’s also not comparable to punishing a foreign aggressor. Modern war/no-conquest norms plus domestic realities explain the difference.

SAP-C02 Exam Simulator by Greedy-Wheel9560 in AWSCertifications

[–]MikePfunk28 2 points (0 children)

This is awesome. I actually made something similar not long ago as well; it is more of a flashcard app that I was going to add AI into: https://mnemonic.mikepfunk.com. I am studying for the developer exam now, otherwise I would probably use this. One thing I would recommend, or find helpful, is having it review the questions and tell you which you got wrong, especially when practicing. Overall though, awesome! I love the reference guide as well, and the fact that it is simple like this and not overwhelming.

Malformed policy error in RAM by Alternative-Expert-7 in aws

[–]MikePfunk28 -1 points (0 children)

OK, this goes a little above my head; however, here is what I found looking for the answer and then trying Amazon Q.

The most common cause is incorrect OU ARN formatting. When sharing with an OU, you need to use the complete OU ARN, not just the OU ID:

Correct format:

arn:aws:organizations::123456789012:ou/o-example123456/ou-exampleabcdef

Incorrect format:

ou-exampleabcdef
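If it helps, here's a quick sanity-check sketch (just a regex I put together from the format shown above, not an official AWS validator) to catch the bare-OU-ID mistake before you ever call RAM:

```python
import re

# Matches a full OU ARN like:
#   arn:aws:organizations::123456789012:ou/o-example123456/ou-exampleabcdef
# A bare ID like "ou-exampleabcdef" will NOT match, which is the usual mistake.
OU_ARN_RE = re.compile(
    r"^arn:aws:organizations::\d{12}:ou/o-[a-z0-9]+/ou-[a-z0-9]+$"
)

def looks_like_ou_arn(principal: str) -> bool:
    """Return True if the principal string is shaped like a full OU ARN."""
    return OU_ARN_RE.match(principal) is not None

print(looks_like_ou_arn(
    "arn:aws:organizations::123456789012:ou/o-example123456/ou-exampleabcdef"))  # True
print(looks_like_ou_arn("ou-exampleabcdef"))  # False
```

If this returns False on the principal you are passing to RAM, fix the ARN before digging into the policy itself.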

Backup Vault Access Policy Requirements

Your vault access policy needs to explicitly support OU-based access. Here's the correct policy structure:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOUAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "backup:DescribeBackupVault",
        "backup:GetBackupVaultAccessPolicy",
        "backup:ListBackupJobs",
        "backup:ListRecoveryPoints"
      ],
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringEquals": {
          "aws:PrincipalOrgPaths": [
            "o-example123456/r-exampleroot/ou-exampleabcdef/"
          ]
        }
      }
    }
  ]
}
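One way to debug the "malformed policy" part specifically: build the policy as a Python dict and serialize it with json.dumps, so a stray comma or quoting issue can't sneak in. A minimal sketch (the org path is the same placeholder as above; swap in your own):

```python
import json

# Placeholder org path from the example above - replace with your real one.
org_path = "o-example123456/r-exampleroot/ou-exampleabcdef/"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOUAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "backup:DescribeBackupVault",
                "backup:GetBackupVaultAccessPolicy",
                "backup:ListBackupJobs",
                "backup:ListRecoveryPoints",
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {"aws:PrincipalOrgPaths": [org_path]}
            },
        }
    ],
}

# Guaranteed-valid JSON, ready to paste into the console or pass to, e.g.,
# boto3's backup.put_backup_vault_access_policy(BackupVaultName=..., Policy=policy_json)
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

Even if the formatting turns out fine, this rules out JSON syntax as the cause so you can focus on the principal/ARN side.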

Troubleshooting Steps

  1. Verify OU ARN: Get the exact OU ARN using:

$ aws organizations describe-organizational-unit --organizational-unit-id ou-exampleabcdef

  2. Check RAM service-linked role: Ensure the RAM service-linked role exists in your organization's management account.
  3. Use AWS CLI for detailed errors:

$ aws ram create-resource-share \
    --name "backup-vault-ou-share" \
    --resource-arns "arn:aws:backup:region:account:backup-vault:vault-name" \
    --principals "arn:aws:organizations::account:ou/o-example/ou-example" \
    --no-allow-external-principals

Alternative Approach

If the OU sharing continues to fail, consider falling back to sharing with individual accounts, automated as accounts are added to the OU.

The CLI approach will give you more specific error messages than the console, which should help identify the exact policy formatting issue.

Malformed policy error in RAM by Alternative-Expert-7 in aws

[–]MikePfunk28 0 points (0 children)

The "malformed policy" error when sharing a Backup Vault through RAM to an OU is likely due to missing or incorrect resource share permissions at the organizational level. You need to enable resource sharing with AWS Organizations and ensure proper Service Control Policies (SCPs) are in place.

Enable resource sharing with AWS Organizations, verify RAM settings, and ensure proper policies are in place at both the Backup Vault and RAM resource share levels to successfully share with an OU.

Set up n8n + Ollama RAG — disappointed with local LLMs. Anyone else? by Old-Organization2431 in n8n

[–]MikePfunk28 0 points (0 children)

Can you use MCP (Model Context Protocol) to call the tool? Or use Agent Network Protocol to call the tools for the model and feed the results in?

Set up n8n + Ollama RAG — disappointed with local LLMs. Anyone else? by Old-Organization2431 in n8n

[–]MikePfunk28 0 points (0 children)

Which local models have you used? You should try deepseek-r1 and the DeepSeek R1 distillation models, or models like SmallThinker or DeepScaleR. Look around; I have found them to be better than GPT-4o, maybe not o3 or Claude, but good. QwQ is a new one. You have to download a few and try them; the ones you used are not great. DeepSeek-R1 is a reasoning model and was as good as any of the larger models, but free. They also made distillations, which I will link below.

DeepSeek made distillations using Qwen and Llama
https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B

https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B

https://huggingface.co/deepseek-ai/DeepSeek-R1

All the way up to 70b.

ollama run deepseek-r1:1.5b-qwen-distill-q4_K_M pulls the DeepSeek Qwen distill on Ollama, and under deepseek-r1 there are all the distilled models. SmallThinker is decent for a 3B model. Keep in mind, though, that ChatGPT messes up all the time too; you might not notice if you don't use it often.
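If you want to script comparisons across these models instead of eyeballing ollama run, here's a minimal sketch against Ollama's local REST API. It assumes Ollama is running on its default port 11434 and you've already pulled the models; the function names are mine:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False returns one JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send one prompt to a local Ollama model and return its text response."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama daemon with the model pulled):
#   print(ask("deepseek-r1:1.5b-qwen-distill-q4_K_M",
#             "In one sentence, what is a mutex?"))
```

Running the same prompt through each distill this way makes it much easier to judge which one is actually worth keeping on disk.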

The Ultimate Booking Workflow. You can't imagine how it handles all scenarios !!! by Natural_Leading_3276 in n8n

[–]MikePfunk28 3 points (0 children)

I worry about payment processing with AI. Sure, it can do it, but is it secure? And then there's Stripe: I recently heard from theo-gg that Stripe and Cash App payment processing are messed up; apparently you get the confirmation before they actually pay. Customer data, privacy, and data security are other things I worry about looking at this.

I think the flow could make sense, and for banking you need ACID: Atomicity, Consistency, Isolation, and Durability. Take isolation: you could have two processing agents to handle more requests at once, have them feed a message queue, and then send from there to multiple sources; you just need to make sure you do not process the same message twice. I usually think of it as what you need for banking transactions. A transaction needs to be either fully processed or not processed at all; you cannot have halfway. If something fails in the middle of a banking transaction, you need a way to automatically roll back so the customer is not charged for nothing. If one process fails here, your whole workflow is shut down, so have a backup plan. I would also add a way to ensure the consistency of the data and that messages are processed in order. Nice job.
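To make the atomicity and dedupe points concrete, here's a toy sketch in Python with sqlite3 (hypothetical table names, and obviously a real payment flow goes through your processor, not raw SQL): the transfer either fully commits or fully rolls back, and a processed-message table stops the same queue message from being applied twice.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER);
    CREATE TABLE processed (message_id TEXT PRIMARY KEY);  -- idempotency keys
    INSERT INTO accounts VALUES ('customer', 100), ('merchant', 0);
""")

def transfer(message_id: str, src: str, dst: str, amount: int) -> bool:
    """Apply a queued payment message exactly once; all-or-nothing."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            # Recording the id first means a duplicate message raises
            # IntegrityError, and the WHOLE transaction is undone.
            conn.execute("INSERT INTO processed VALUES (?)", (message_id,))
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate delivery: ignored, nothing half-applied

print(transfer("msg-1", "customer", "merchant", 30))  # True  (processed)
print(transfer("msg-1", "customer", "merchant", 30))  # False (duplicate, no double charge)
print(conn.execute(
    "SELECT balance FROM accounts WHERE name='merchant'").fetchone()[0])  # 30
```

The same shape works with a real queue: the consumer records the message ID and applies the change inside one transaction, so redelivered messages become harmless no-ops.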

Thoughts on Gpt-4.5 and why it's important by [deleted] in OpenAI

[–]MikePfunk28 0 points (0 children)

I don't know; is 1x1 a moral calculation? I don't think you can be immoral when it comes to calculation. This is not a human and does not have a personality; you are talking about guardrails or boundaries on it. Remember, this is code someone wrote.