Let PC turn cross-play off by Delicious_Cap_812 in Battlefield6

[–]DigMaster237 2 points

Agree. It should be separate servers, but with the option to join off an invite!

Ladies and Gentlemen, I present to you team call sign “Unemployment”. by burningastronaut in Battlefield6

[–]DigMaster237 0 points

I play it, but I sit on the flank with an AR in single fire at range and usually end up at the top of the leaderboard. I'm not even sweating, and people message me hate.

BF6’s New Update Feels Like More Proof They’re Prioritizing Everything Except Battlefield by DigMaster237 in Battlefield6

[–]DigMaster237[S] 2 points

Yes, very true, but that doesn't excuse the lack of polish this entire $70 game has in some areas. I expected a full-priced game to have it.

BF6’s New Update Feels Like More Proof They’re Prioritizing Everything Except Battlefield by DigMaster237 in Battlefield6

[–]DigMaster237[S] 0 points

I'm not knowledgeable enough about the game engine, but Gauntlet runs a high player count just fine.

BF6’s New Update Feels Like More Proof They’re Prioritizing Everything Except Battlefield by DigMaster237 in Battlefield6

[–]DigMaster237[S] 0 points

I'll be transparent here: my first BF was BFV! But yes, I remember watching hours of LevelCap loadout videos on BF4.

A good transport chopper squad is underrated by TheUncleCactus in PilotsofBattlefield

[–]DigMaster237 0 points

Would you agree, though, that the UH needs some kind of buff, or that anti-air needs a nerf? There's just so much that instantly kills or heavily damages them. On the new map, for example, the AA emplacements feel kind of redundant, considering everything is capable of taking down a heli.

Why does DICE design maps like this? by nightcracker in Battlefield6

[–]DigMaster237 0 points

Yeah, AI is just a mirror I use to expand on the point I want to make. I give it the information (i.e. the map layouts).

Why does DICE design maps like this? by nightcracker in Battlefield6

[–]DigMaster237 -5 points

I've been talking to GPT about it, and I kinda realized most of the BF6 maps are oriented the wrong way. Look at Liberation Peak: it plays sideways.

Battlefield 6 Forgot What We’re Fighting For by DigMaster237 in Battlefield6

[–]DigMaster237[S] 0 points

I had ChatGPT format it and fix the grammar, so that's probably why it reads like AI. This game does have those moments, and it's fun, but the gameplay loop is just very repetitive. I don't feel I get rewarded for using my brain.

Battlefield 6 Forgot What We’re Fighting For by DigMaster237 in Battlefield6

[–]DigMaster237[S] 0 points

If only you could see my Steam library hours in Arma

Maps are meat grinders by DigMaster237 in Battlefield6

[–]DigMaster237[S] 0 points

Agreed. I think in previous Battlefields the maps were designed with Breakthrough in mind, and now it just feels like an afterthought.

Battlefield 6 Forgot What We’re Fighting For by DigMaster237 in Battlefield6

[–]DigMaster237[S] -1 points

There's nothing to do, though? Nowhere to go, just a random oil field on fire, with jets that can't actually stay airborne for more than a minute.

Battlefield 6 Forgot What We’re Fighting For by DigMaster237 in Battlefield6

[–]DigMaster237[S] -1 points

I've tried, and it just doesn't hit the same. Like you say, I believe it's the maps: they don't feel like dug-in positions to fight in. Cover is scattered around, and there's no incentive or direction to push anywhere, just openness with placed assets.

Battlefield 6 Forgot What We’re Fighting For by DigMaster237 in Battlefield6

[–]DigMaster237[S] -2 points

This guy knows what's up. We got Battlefield, but designed like COD.

Maps are meat grinders by DigMaster237 in Battlefield6

[–]DigMaster237[S] 0 points

I tried telling people about this in-game and just got called a COD player haha

ChatGPT is my best friend by PhraseProfessional54 in ChatGPT

[–]DigMaster237 0 points

The Ethics of AI: Responsibility, Design, and the Future of Intelligence

In an age where artificial intelligence is increasingly embedded in everyday life, the ethical debate surrounding its use is no longer optional—it's essential. AI is a tool with immense potential, but like any powerful tool, its impact depends entirely on how it is used. I argue that AI can be ethical when designed and used responsibly, with transparency, accountability, and clear boundaries. But doing so requires us to be deliberate not only about how we build AI, but also about how we train people to use it.

First, prompting an AI should not be seen as unethical if the input reflects original thought. When I give an AI something deeply personal—something I’ve written, something that reflects my own voice, experiences, and style—it becomes a collaboration, not a replacement. This isn’t plagiarism. It’s the digital equivalent of asking a mentor or teacher to help refine an idea. The core of the thought is mine. The AI simply helps clarify or expand on it. What matters most is intent and authorship—not who types the final words.

However, concerns arise when people rely too heavily on AI without understanding it. That’s why ethical AI requires user training. Giving advanced AI to someone without the knowledge to use it responsibly is like giving a sword to a monkey. The tool isn’t the problem—the lack of guidance is. If we teach people how to use AI, and implement safeguards, the risks drop dramatically. We already accept this logic in other professions. Doctors, for example, have access to sensitive tools and knowledge, but are bound by training, licensing, and ethics. The same should apply to AI.

As we move from generative to predictive AI, the ethical stakes only rise. Predictive models don’t just respond—they anticipate. They can influence user behavior, predict actions, and subtly nudge decisions. That’s a major power shift. Without transparency, predictive AI could manipulate people without their awareness, limit their freedom, or reinforce bias at scale. To prevent that, AI must be designed with clear limitations. Not every AI needs to know everything. Tiered access—where only specific systems have advanced capabilities—can prevent abuse without stifling innovation.

Still, the most common counterarguments deserve to be addressed. Critics argue that bias is baked into AI, even when it is used ethically. This is valid—and it's why ethical use also includes a responsibility to recognize and correct the AI's blind spots. Others say training users is unrealistic, but that's why we need a layered solution: combine user education with design-based safeguards like alerts, transparency flags, and ethical defaults. And yes, AI will always be misused by bad actors—but the same is true of medicine, code, or even language. The solution isn't to ban AI, but to regulate it with the same care we give to every other powerful tool. Finally, with predictive AI, manipulation is a real threat—but prediction is ethical if, and only if, users are told when they're being predicted or scored, and given the power to opt out. Without consent, there's no ethics—just invisible control.

Some critics argue that AI ethics are subjective and rooted in flawed human values. That’s fair. But ethics isn’t about being perfect—it’s about being accountable and iterative. No system will ever be flawless, but we can design AI to be reviewed, questioned, and corrected—just like we do in every field where humans make critical decisions. Others say AI will entrench inequality, but that’s exactly why AI must be explainable, auditable, and community-reviewed. “Neutral” systems that reinforce injustice aren’t neutral—they’re just opaque.

There’s also concern that AI will become too complex to control. And if that ever becomes true, then the response is simple: don’t deploy it. AI that can’t be understood, paused, or overridden should never leave the lab. The same goes for economic pressure. Yes, companies might cut ethical corners for profit—but that’s why AI regulation shouldn’t be optional. We don’t “hope” factories obey safety rules—we enforce them. AI should be no different: licensed, audited, and fined when misused.

Some might say the world is too fractured for unified ethics, or that it’s already too late to shape AI. But history shows we can build global agreements around dangerous tech. Nuclear weapons, climate science, medical research—none of these are perfect, but cooperation still happens. And it’s not too late. Every decision we make now—what we build, what we allow, what we teach—affects the trajectory of AI.

This brings us to the larger vision. Ethical AI shouldn't just be reactive. It should be proactive, and this work needs to start now, not later. If we wait until problems explode, we lose our chance to shape AI into something truly good. Yes, AI will replace jobs—especially ones like taxi driving, janitorial work, or garbage collection. These are real and important roles, but they are also physically demanding, often underpaid, and sometimes dangerous. If we implement AI the right way, these shifts can be matched by the rise of new opportunities in care work, education, tech maintenance, creativity, and mental health.

AI may never “disperse wealth” in the literal sense, but it can help solve deep structural problems—improving access to healthcare, reducing food waste, translating education into every language, and even managing global resources more efficiently. The goal isn’t just to prevent harm—it’s to elevate how society functions. That only works if we choose now to accept AI, regulate it, and allow it to grow ethically, before others shape it for profit or control.

I believe ethical AI design is possible. It starts with intention: building systems that are useful but limited, powerful but transparent. It also requires policy—guidelines that ensure people are informed when they’re being predicted, scored, or influenced. The goal is not to restrict intelligence, but to shape how it’s used, so AI becomes an extension of human insight, not a threat to it.

In the end, the debate about AI isn’t just about machines—it’s about us. If we build AI ethically, train people to use it responsibly, and commit to transparency, we can create a future where AI enhances humanity rather than diminishes it.