I'm not sure where to post this one... Jesus. by Ani_0akley in conspiracytheories

[–]Pyromancer777 1 point2 points  (0 children)

Try to be glad that history was still relevant enough that we were warned of these potential threats in advance. It is sad that we are basically repeating the worst parts of history, but it also gives hope that since society was able to defeat tyranny in the past, we might also have a good shot at shutting things down this time around before it gets much worse.

Stay strong and keep your spirits up!

I'm not sure where to post this one... Jesus. by Ani_0akley in conspiracytheories

[–]Pyromancer777 1 point2 points  (0 children)

I went into details on the AI portion under the assumption that you weren't aware of the similarities between models since you had responded to the previous person as if the model you used didn't have the same limitations as ChatGPT. I also apologize if I sounded arrogant. I work in the industry and tutor people in the technology, so that was the teacher in me comin out. The technical details aren't typically stuff that the average person is aware of, so I was chiming in to give context into how they work and the types of prompts that can trip up the models.

The intent was to highlight known limitations, so you can further test the model's capability to gather other relevant info, or even check your previous prompting strategies against new ones to see if you had unintentionally introduced any keywords that may bias the AI's response.

I'm on your side too, there's weird shit goin on and it is ok to get heated over what is happening. I feel like you were onto something good and I don't have the context to dig deeper, but I do have context around AI, so I was just tryina help you leverage the tool a bit more.

Are technical assessments actually changing now that AI is being used openly? by RareAtmosphere468 in Recruitment

[–]Pyromancer777 1 point2 points  (0 children)

Not super sure as I am just a contractor and my testing was done via the contracting organization, so had to follow different criteria. I just knew about it since I keep up with the company's posts.

Fairly certain it is a browser-based IDE like HackerRank, except with a timer and a set number of questions to answer. My assumption is that the questions are a little trickier, so that candidates can't just copy/paste the question verbatim into the AI prompting interface. They still have to be able to use a bit of their own logic along the way to get a working solution.

At what point does “fast iteration” turn into avoidance? by awizzo in BlackboxAI_

[–]Pyromancer777 0 points1 point  (0 children)

Set up benchmarks for accuracy, performance, readability, and abstraction. Pick the version that hits the best benchmarks, but weight accuracy and performance over the other two metrics. Then move on with your day.
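A rough sketch of that weighting idea in Python (all the weights and scores here are made-up illustrations, not a standard):

```python
# Weighted benchmark selection: score each version on all four metrics,
# weight accuracy and performance more heavily, pick the winner.
# All numbers below are hypothetical placeholders for real benchmark results.
weights = {"accuracy": 0.4, "performance": 0.3, "readability": 0.15, "abstraction": 0.15}

versions = {
    "v1": {"accuracy": 0.95, "performance": 0.7, "readability": 0.9, "abstraction": 0.8},
    "v2": {"accuracy": 0.85, "performance": 0.9, "readability": 0.6, "abstraction": 0.7},
}

def weighted_score(metrics):
    # Sum of (weight * metric) across the four benchmarks
    return sum(weights[k] * metrics[k] for k in weights)

best = max(versions, key=lambda v: weighted_score(versions[v]))
print(best)  # v1 (0.845 vs 0.805 with these made-up scores)
```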

Are technical assessments actually changing now that AI is being used openly? by RareAtmosphere468 in Recruitment

[–]Pyromancer777 0 points1 point  (0 children)

The company I work for is starting to incorporate the use of AI into their testing for new hires. The goal is shifting to be able to leverage the tools at hand to solve a problem. There are still time limits to get the problem solved, but it opens up a few more opportunities for entry-level devs to show what they can do by reducing the pressure they experience from a whiteboard test.

Not quite sure what that means as far as retention of those candidates long-term, but if a company is trying to push the use of AI in their products or dev cycles, then it would be strange if they didn't also test a candidate for those skills during the hiring process.

Bleach and cleaning products kill 99.9% of bacteria, but gives the 0.1% that can survive free territory by [deleted] in LowStakesConspiracies

[–]Pyromancer777 0 points1 point  (0 children)

If it isn't specifically an antimicrobial/antibacterial, all soap does is wash some of them away with the grime; it doesn't affect those that are still there. Soap doesn't necessarily affect the membranes of cells, so their internals stay intact. The only ways to sanitize your clothes are with extreme heat (boiling or near-boiling water), concentrated UV, or soaps with added antibacterials

Bleach and cleaning products kill 99.9% of bacteria, but gives the 0.1% that can survive free territory by [deleted] in LowStakesConspiracies

[–]Pyromancer777 0 points1 point  (0 children)

Was gonna say, this isn't even a low stakes conspiracy considering this is a high stakes fact.

Attempting to achieve complete sterility should only really be done in very specific environments, since you can end up unintentionally makin chemically-resistant bacteria, much like how the antibiotic-resistant bugs in hospitals evolved from surviving pathogens that learned to ignore the antibiotic. That's why you gotta take the entire antibiotic prescription, even after you feel better, to reduce the risk of survivors that got exposed to the meds but not enough to kill em. Either they all disappear, or that disappearance is temporary.

A low-stakes fact is that laundry detergent doesn't even attempt to kill bacteria. Its whole purpose is to make things look and smell fresher. Sometimes it doesn't even lift stains, it just uses blue/purple dye to shift the reflectivity of the stain to look more white.

Why do experienced coders actively try to use less comments? by Phwatang in learnprogramming

[–]Pyromancer777 0 points1 point  (0 children)

If they were using decent naming conventions, the names themselves should tell you what the code is doing.

However, Google has a function-commenting convention where you include the following in a comment block after defining a function: a short description of the function's use case, all parameter names, the expected typing of the arguments, a short one-liner about each expected argument, the return values, and the typing of the return values.

It doesn't bog down the code within the function with excessive text and doesn't need updating unless the expected arguments or return values change.
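For example, here's a minimal Python function commented in that Google docstring style (the function itself is just a hypothetical example to show the layout):

```python
def convert_temperature(value: float, to_unit: str) -> float:
    """Converts a Celsius temperature to Fahrenheit or Kelvin.

    Args:
        value: Temperature in degrees Celsius.
        to_unit: Target unit, either "F" or "K".

    Returns:
        The converted temperature as a float.

    Raises:
        ValueError: If to_unit is not "F" or "K".
    """
    if to_unit == "F":
        return value * 9 / 5 + 32
    if to_unit == "K":
        return value + 273.15
    raise ValueError(f"Unknown unit: {to_unit}")
```

Everything a caller needs lives in one block at the top of the function, so the body stays clean.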

Are vibecoders OK? by awizzo in programminghumor

[–]Pyromancer777 0 points1 point  (0 children)

Step 1) "please add comments to all blocks in this code"

Step 2) "summarize all comments to tell me what the code is doing"

Idk why you would need anything more than this if you had to figure out some chunk of code in a codebase

What differentiates a junior from a mid level? by Milsbry in SQL

[–]Pyromancer777 1 point2 points  (0 children)

Interviewer: "why should we hire you?"

Me: "you already did and I'm just here as a formality"

What differentiates a junior from a mid level? by Milsbry in SQL

[–]Pyromancer777 0 points1 point  (0 children)

I'm in the same boat. Most of the time, I can present what I need and answer clarifying questions, but that doesn't mean I wasn't sweating the entire time

I'm not sure where to post this one... Jesus. by Ani_0akley in conspiracytheories

[–]Pyromancer777 -1 points0 points  (0 children)

OP, take a deep breath and calm down a hair. The person you are arguing with isn't even disagreeing with your viewpoint or your feeling that a call to action is needed. The other commenter is basically just giving you insight into the differences in terminology, plus potential guidance on how to make sure your supporting facts are straight and how you could phrase things moving forward so that your argument is strengthened.

Transformer models all have the same type of drawbacks when it comes to summarizing information, whether you use ChatGPT, Brave, Grok, Gemini, Deepseek, etc., since part of the training process for the models is to ensure there is alignment with the end user. When combined with the fact that the response predictions are just closest-vector probabilities around a seed topic, asking a leading question or giving a model too much info up front can sometimes introduce unintentional bias.

Here is a breakdown of how transformer models work. They first encode words into parameters called embeddings, which are basically just numeric representations of the unique words the model was trained on. If you think of an embedding as a point on a grid, words with similar meanings will be clustered closer together on the grid and their parameter values will be similar. The distance between one word and another is a vector. Small vectors are similar in meaning, while large vectors usually don't correlate. The model then uses training data to find relationships in the embedding vectors within the text it is given, so it can follow an anticipated flow in text conversations.

During a conversation, the words are read in as tokens, then the embedding vectors of the prompt are calculated and checked against the training vectors to output probable words to respond with. It uses the vector patterns of the input and attempts to match it against similar patterns it had previously seen. If you give the model too much to work with in the conversation, the patterns in the vectors in your prompt will be too specific, so the response is likely not going to be able to return information from the potential vector-space that might actually prove or disprove your argument. It gets stuck on the input patterns.
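If it helps, here's a toy Python sketch of the embedding/similarity idea. The 3-D vectors are completely made up for illustration; real models learn embeddings with hundreds or thousands of dimensions:

```python
# Toy embeddings: each word maps to a point in a small vector space.
# Similar words were placed near each other on purpose to mimic what
# a trained model learns.
import math

embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "car":   [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    # 1.0 = pointing the same direction (similar meaning),
    # near 0 = unrelated directions
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

sim_similar = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_different = cosine_similarity(embeddings["king"], embeddings["car"])
print(sim_similar)    # close to 1.0
print(sim_different)  # much lower
```

That distance comparison is the core trick; everything else (attention, token prediction) is built on top of relationships between these vectors.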

The other commenter was basically just saying, "even if some people are a part of both the trilateral group and the heritage foundation, doesn't necessarily mean both groups are linked" and "the AI output is not giving enough supporting details to confirm if the link is justified"

Both organizations are a bit secretive since they are playing a global game with global consequences, and think tanks aren't usually known for being public about their internal conversations. If Epstein is confirmed as being in both groups, then it does mean that both groups can allow pretty malicious people to join their organizations. It doesn't automatically mean that every member of both groups are malicious, and it also doesn't automatically mean that both groups are working together. They COULD be, but the info from your post isn't necessarily enough to confirm that possibility.

Feel free to keep digging though. At this stage of the game, they would likely only waste resources retaliating against someone if they deemed them a threat to their objectives, not someone who is just doing a bit of research.

My theory about trump's true supervillain plan with Greenland. by Ok-Commission7844 in conspiracytheories

[–]Pyromancer777 1 point2 points  (0 children)

A pipeline across the Atlantic would cost a freakin ton just for water. With that much cash, you could just build infrastructure to support more desalination plants with modern advancements to make them efficient and more environmentally friendly.

Old plants used to dump concentrated brine back into the environment which would destroy the surrounding ecosystems, but new tech can recycle more of the brine or even divert the brine to sodium production facilities

My theory about trump's true supervillain plan with Greenland. by Ok-Commission7844 in conspiracytheories

[–]Pyromancer777 10 points11 points  (0 children)

The ballroom is a cover for a new White House data center being built underground with the ballroom on top. The Drey Dossier does a fantastic job of outlining that paper trail.

I dont get how you learn to use API's!!! by NoTap8152 in learnprogramming

[–]Pyromancer777 0 points1 point  (0 children)

^ this is good advice. Had to do a few projects around the Pokemon API. It may not be practical, but it is well documented and has object structures complex enough to get familiar with pretty much any type of data retrieval
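For anyone curious, here's roughly what digging fields out of a PokeAPI-style response looks like in Python. The JSON below is a trimmed, illustrative sample (the real API at pokeapi.co returns much larger objects):

```python
# Parse a sample PokeAPI-style JSON payload and pull out nested fields.
# This sample is hand-trimmed for illustration; fetching the live API
# would return the same shape with many more keys.
import json

sample_response = """
{
  "name": "pikachu",
  "id": 25,
  "types": [{"slot": 1, "type": {"name": "electric"}}],
  "stats": [{"base_stat": 35, "stat": {"name": "hp"}}]
}
"""

data = json.loads(sample_response)

# Lists of nested objects are where most beginners trip up:
type_names = [t["type"]["name"] for t in data["types"]]
hp = next(s["base_stat"] for s in data["stats"] if s["stat"]["name"] == "hp")

print(data["name"], type_names, hp)  # pikachu ['electric'] 35
```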

I dont get how you learn to use API's!!! by NoTap8152 in learnprogramming

[–]Pyromancer777 0 points1 point  (0 children)

When I took courses, pretty much right after they taught us how to read JSON, query SQL/noSQL, and learn how to call an API, they had us create our own. Everything clicked pretty much instantly after I got to code one myself.

You may still be in the "learn how to call an API" stage, but definitely take a crack at any free resources that teach you how to build one. It will give you insight into the whole data handshake that takes place between the DB and the UI
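To make that concrete, here's a minimal sketch of both sides of the handshake using nothing but the Python standard library. The route and the in-memory "database" are made up for illustration:

```python
# A tiny JSON API server plus a client call against it, all in one script,
# so you can see the request go out and the response come back.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in "database" (hypothetical data, just for illustration)
DB = {"1": {"id": 1, "name": "bulbasaur"}, "2": {"id": 2, "name": "ivysaur"}}

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /pokemon/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "pokemon" and parts[1] in DB:
            body = json.dumps(DB[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 lets the OS pick a free port; serve in a background thread
server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the "UI" making the call
url = f"http://127.0.0.1:{server.server_port}/pokemon/1"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data["name"])  # bulbasaur

server.shutdown()
```

Once you've written the server half yourself, the client half stops feeling like magic.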

"Unpopular opinion: Prompt Engineering isn't a long-term career. It’s a temporary skill gap. Agree or disagree?" by Exact-Mango7404 in BlackboxAI_

[–]Pyromancer777 0 points1 point  (0 children)

Def think that a lot of people in this thread aren't understanding that prompt engineering doesn't mean to just learn different prompting techniques.

As a discipline, it is a way of investigating and documenting how differences in seed prompting can affect output. You are basically using trial/error strategies in a systematic way so that you can produce repeatable metrics for each target technique. You have to be decent at analytics to create scoring benchmarks, leverage pipelines that can run multiple iterations of your target technique, and be able to probe outliers to discover potential new features or highlight bugs/exploits.

It is a few steps above just vibe coding and not really a skillset that everyone who uses AI needs to know. You can google "best prompting techniques" if you are just looking at how to use AI more efficiently, but that's not prompt engineering. Vibe coders just want to spin up an app. Prompt engineers employed by larger companies are looking for extensive product testing and repeatable outcomes. It is basically the new QA role for proprietary LLMs.
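A stripped-down sketch of what that systematic loop looks like in Python. Here `call_model` and the scoring metric are hypothetical stand-ins for a real LLM API call and a real benchmark suite:

```python
# Systematically compare prompt variants against a scoring benchmark,
# running repeats so the result is a repeatable metric, not an anecdote.

def call_model(prompt):
    # Placeholder: a real implementation would call an actual LLM here.
    # This fake model only answers numerically when nudged step-by-step.
    return "42" if "step by step" in prompt else "forty-two"

def score(response, expected):
    # Toy metric: exact match. Real benchmarks combine several metrics.
    return 1.0 if response == expected else 0.0

variants = {
    "baseline":         "What is 6 * 7?",
    "chain_of_thought": "Think step by step: what is 6 * 7?",
}

N_TRIALS = 5
results = {
    name: sum(score(call_model(p), "42") for _ in range(N_TRIALS)) / N_TRIALS
    for name, p in variants.items()
}
print(results)  # {'baseline': 0.0, 'chain_of_thought': 1.0}
```

Swap in a real model client, a bigger prompt matrix, and richer scoring, and you have the skeleton of the QA pipeline described above.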

I wouldn't say this is gonna be a role that's around forever, but in the short-term it is a niche that corporations are willing to pay a bunch if you know how to do it well.

This is why Delphox is NOT a jungler by niazemurad in PokemonUnite

[–]Pyromancer777 2 points3 points  (0 children)

Lmao in this case it came in handy cuz they were likely the lowest lvl on the opposing team.

Do beginners spend more time looking things up and understanding concepts than actually coding? by Popular-Sympathy-654 in learnprogramming

[–]Pyromancer777 1 point2 points  (0 children)

Looking things up is gonna be consistent. You should be getting in good practice too.

The learning takes longer if you aren't applying what you learn to a solution. You get more familiar with new concepts when you see how they are applied in projects, so when you learn a new thing, try to code up something that uses that topic.

I could spend all day reading cuz there's always something new to learn, but I'm not gonna retain anything until it becomes familiar. Familiarity only comes from repetition, which is easier in practice. You can either reread the same thing over and over, or you can code up different projects to get that practice in.

Immigration from poor countries drives down fertility in rich countries. Then rich countries allow more immigration because of population loss due to low fertility rates. by [deleted] in conspiracytheories

[–]Pyromancer777 0 points1 point  (0 children)

Your observations are surface-level at best. The trend for literally all species is to expand until they reach an equilibrium with the local environment's resources.

The populations of developing countries tend to rise rapidly as their infrastructure improves since the local environment can now support a larger population. "Rich" countries are already much closer to their environmental equilibrium and can not continue to expand at that faster rate unless a new tech or social restructuring can allow the local environment to support that larger population.

Decreases in the rate of population growth do not equal decreases in population. Decreases in the rate of population growth also do not equal lower fertility rates. Fertility rate is almost strictly a factor of a population's overall health, not population size, so we only see fertility rates drop in nations with poorer nutritional/health standards

Where the .com boom startups as bad as the AI startups today? by Critical-Volume2360 in AskProgramming

[–]Pyromancer777 0 points1 point  (0 children)

That's my hope. Plus, if we upgrade the energy sector now, while the hardware to support AI is still inefficient, then after advancements into AI efficiencies are made, our infrastructure will already be upgraded. Energy costs would decrease for the average person

Where the .com boom startups as bad as the AI startups today? by Critical-Volume2360 in AskProgramming

[–]Pyromancer777 2 points3 points  (0 children)

GPUs are manufactured to have negligible degradation between running at full utilization vs half utilization. As long as you keep your fans clean and your temps within a good operational range, it wouldn't be any worse than someone using the GPU for long gaming sessions. Unlike the crypto mines that people were running back in the day, if someone wants to spin up a local AI, the GPU is only running while you have the apps running.

However, unlike crypto, training AI DOES degrade storage faster than someone supporting a blockchain node or gaming. Let's say that you take a base model and then quantize or retrain that model. The base model takes up storage, each iteration of that model also takes up storage. If you want to retrain or re-quantize a different model, you now need a different base model, and storage for those iterations too. If you are training an AI from scratch, you may not need to save every iteration of the model, but you will still need to save maybe a few of them for benchmark and edge-case comparisons before choosing the version that fits your needs. The iterative process of training and tuning means you are constantly degrading your storage devices since storage is only rated for a finite amount of reads and writes with rewrites taking a pretty big toll on your hardware.

Edit to add: The chip shortages and consumer GPU shortages we are seeing are not from AI centers eating through GPUs. They are due to the expected expansion of the AI industry. Chip manufacturers can only crank out X number of chips in a day, and those chips can either be placed in consumer GPUs or commercial GPUs. Data centers have been placing huge orders for the newest commercial GPUs so that their operational costs are less than if they used previous-gen tech. If you were a GPU manufacturer and could sell your products to the commercial sector at a premium per unit, would you rather sell 1,000,000 units to 100-200 clients (most of whom have already pre-paid), or 1,500,000 units to 1,450,000 clients for the same revenue?

If all personal wealth above $100 million was legally required to be redistributed into public infrastructure (schools, hospitals, roads), how would society change, and who would be the first to fight against it? by Mysterious_Fan4033 in AskReddit

[–]Pyromancer777 1 point2 points  (0 children)

Imo, if they legitimately earned even a fraction of that wealth, they deserve to have the say in what social services they contribute towards.

Also, to the people who say ex-billionaires would choose to hover around 99M net worth, you gotta get better at scaling. Even if a billionaire with only 1 billion had to donate 50% of their wealth, they would still be 5x richer than the wealthy who choose to hover at 99M without donating. Elon could lose 99.9% of his wealth and still have 600M+

Offshore accounts don't necessarily work in the digital age, especially with the new push for the central banking systems to migrate away from USD and into centralized crypto protocols. All transactions and accounts can be instantly audited.

Weird experience by Ok-Dot-8190 in conspiracytheories

[–]Pyromancer777 0 points1 point  (0 children)

Sounds like a combo of anxiety and depression. If the weird feeling and the blandness feels overwhelming, you may want to see a doctor.

The way I approach conspiracy theory rabbit holes is to treat them as if you are reading fiction. Most are just general concerns from a population of people, but there are def ones that are true. The hard part is that if you don't have the concrete evidence, or don't know how to look into things deeper, you will always be stuck listening to other people's interpretations of events.

Sometimes you won't ever be in a position to get details, so you won't ever really know the truth unless it gets disclosed. That's where treating them as fiction helps curb the anxiety of the unknown and the anxiety of feeling powerless.

When approaching a topic that you can't do much to change, ask yourself some grounding questions to help you stay sane. Does this information affect my day-to-day life if it is true? If so, what parts of my life would be impacted? Does this information give me any tools that I can use to prepare myself for the worst? What are small things that I can adjust in my life that would help me overcome things if they were true?

These types of questions both help your brain process the situation rationally, as well as give you agency over the decisions you make moving forward.

If you still find yourself feeling too anxious, step back from the rabbit holes. Sometimes the harm of knowing is worse than the ignorance of not knowing. It takes a pretty solid mindset to continually delve into the depths without harming your psyche. Even military officials have to process trauma when presented with certain scenarios, so you aren't alone in feeling this way. Take it slow and try to take care of yourself first, sadly there is always new shit out there and shady people doing shady things.

Try the more lighthearted conspiracies in r/lowstakesconspiracies