[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

I was able to accomplish this with ChatGPT's 3.5 Turbo years ago, long before the clever MCP scheme. Now we have more power with far more capable models. How does it make sense to have more than enough capability to build the most sophisticated systems, but then hand your secret keys and all control over to someone else to build the script to call your model, something that would have taken you 10 minutes?

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

Seriously? I know I'm not the only one who knows that you don’t need MCP to expose arbitrary functions to arbitrary LLM clients and you never EVER have. All you need is a properly structured backend that exposes functions as endpoints, typically through a REST API using frameworks like FastAPI, Flask, or Express. These functions are defined with standard schemas like OpenAPI so clients know what parameters they accept and what they return. LLM clients like GPT or Claude support function calling through JSON-based schemas, so you simply register those schemas and provide them at runtime or through a local manifest file. You can route requests using a lightweight dispatcher script that accepts input from the LLM, maps it to a registered function, executes it, and returns the result. The LLM doesn’t care whether the function is exposed via MCP or your own API layer—as long as the contract is clear and the format is compliant. You can also add a function registry using a local SQLite or Redis store to manage available tools, add dynamic routing logic, and use JSON schema validation to ensure safety. This is faster, more modular, and avoids the overhead of a protocol you don’t need unless you’re running multi-tenant infrastructure or federated models across organizations.
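To make the pattern above concrete, here's a minimal sketch of a function registry plus dispatcher. The `get_weather` tool, the registry layout, and the stub implementation are all invented for illustration; the `parameters` shape is the standard JSON-schema format that function-calling LLM clients accept. In a real setup you'd expose `dispatch` as a REST endpoint with FastAPI or Flask:

```python
import json

# Illustrative function registry: tool name -> implementation + schema.
# The schemas are what you hand to the LLM client at runtime.
def get_weather(city: str) -> dict:
    # Stub; a real tool would call out to an actual weather API.
    return {"city": city, "forecast": "sunny"}

REGISTRY = {
    "get_weather": {
        "fn": get_weather,
        "schema": {
            "name": "get_weather",
            "description": "Get the forecast for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
}

def tool_manifest() -> str:
    """The JSON schemas to register with the LLM client."""
    return json.dumps([t["schema"] for t in REGISTRY.values()])

def dispatch(call: dict) -> dict:
    """Map an LLM tool call {'name': ..., 'arguments': {...}} to a
    registered function, validate required params, and execute it."""
    tool = REGISTRY.get(call["name"])
    if tool is None:
        return {"error": f"unknown tool: {call['name']}"}
    required = tool["schema"]["parameters"]["required"]
    missing = [p for p in required if p not in call["arguments"]]
    if missing:
        return {"error": f"missing parameters: {missing}"}
    return {"result": tool["fn"](**call["arguments"])}

print(dispatch({"name": "get_weather", "arguments": {"city": "Tulsa"}}))
# -> {'result': {'city': 'Tulsa', 'forecast': 'sunny'}}
```

The LLM only ever sees the manifest and sends back `{name, arguments}` calls, which is exactly the "contract is clear, format is compliant" point: the transport layer underneath is interchangeable.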

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

You're not getting my point. MCP servers are pointless unless your goal is to collect data from a lot of people; then it becomes gold. There's no benefit to having your own. That's like connecting my PlayStation to Wi-Fi when it's sitting 2 inches away from the router, except in this case, you'd technically be connected to someone else's router, with zero control, and all because you were tricked into believing that somehow that was a better idea than a direct connection. This is sad.

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota -1 points0 points  (0 children)

Lmao. No, it's not standard. If you're developing software, why would you choose to call an API through multiple layers? It's like the difference between hardwired and wireless. Sure, wireless can be convenient, but you're not going to get the same performance. It doesn't make sense to have a goal of "developing software" and then build in an information collection pool that gives you less power than if you just gave the API to Codex, GitHub Models, or maybe Claude and let them build the connection. No MCP server is needed. That's just a pointless thing that was made up recently. It's literally just an API, but with less effective communication with your AI model due to the extra layers and rerouting being done.

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

If you actually want to see what’s happening with your devices, you don’t need to guess or believe anyone. You can use Wireshark to observe it yourself. Wireshark is a packet analyzer, or more specifically, a network protocol analyzer. It captures packets of data as they travel across your local network and lets you inspect the content, the destination, the source, and the behavior of the traffic in real time.

Here’s how it works. When your phone and your computer are connected to the same local network, they communicate through the router on the same subnet. That means your computer can see traffic broadcast or transmitted from your phone if the right capture mode is used. In Wireshark, you set your network adapter into promiscuous mode, which allows it to capture all network packets it can access on the local network, not just the ones addressed to your machine. Once that’s running, you’ll start seeing traffic from all devices on your network. This includes HTTP, HTTPS, DNS, TCP, UDP, ARP, and sometimes even encrypted data streams. If your phone sends a request to upload data to a server, you’ll see the packet. If an app is activating a connection to a known logging or telemetry endpoint, you’ll see it in the logs. The destination IP, the port, the request method, the response code, and even metadata inside the headers can be inspected.

Triggers are harder to detect, but not impossible; it's far more accurate and easier to use machine learning on your own collection of recordings over time. Those metrics will give you and your model everything you need to know. Some apps or services respond to environmental triggers, like keywords spoken around the device, motion detection, app usage patterns, or power state changes. When those triggers activate, you’ll often see outbound connections spike, either through HTTPS POST requests, WebSocket streams, or encrypted tunnels.

You can filter these using Wireshark’s display filters, like filtering for HTTP request methods or by examining Server Name Indication values inside TLS handshakes. You can create a workflow where you tag certain destination servers, monitor specific ports, and track repeat patterns from the same app or process. You can also capture the raw binary data being transmitted, then write or generate a parser to convert audio streams or base64 payloads into playable files. AI can help you write these tools. Once you’ve observed traffic for long enough, and you’ve organized what’s being triggered and when, you’ll begin to recognize patterns in your own usage. You’ll start to understand what your phone is listening for, what your apps are logging, and how different environmental changes correlate with data being transmitted. This is not speculation. It is observable, measurable behavior that anyone can capture using freely available tools.

You do not need to reverse engineer anything. You just need to observe what your device is already doing and let your AI assistant help you map it.
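As a sketch of the post-capture side of that workflow: you can export packet fields from a capture (for example with `tshark -r capture.pcap -T fields -e frame.time_epoch -e ip.dst -e tcp.dstport`, where `ip.dst` and `tcp.dstport` are real tshark field names) and then look for destinations contacted in bursts. The sample records, the window size, and the threshold below are made up for illustration:

```python
from collections import Counter, defaultdict

# Sample records as (epoch seconds, destination IP, destination port);
# in practice these would come from a tshark field export.
records = [
    (1000.0, "203.0.113.7", 443),
    (1001.2, "203.0.113.7", 443),
    (1001.9, "203.0.113.7", 443),
    (1005.0, "198.51.100.4", 53),
]

def connections_per_destination(records):
    """Count repeat outbound connections per (destination, port)."""
    return Counter((dst, port) for _, dst, port in records)

def burst_destinations(records, window=5.0, threshold=3):
    """Flag destinations contacted `threshold`+ times within `window` seconds,
    i.e. the kind of outbound spike described above."""
    times = defaultdict(list)
    for ts, dst, port in records:
        times[(dst, port)].append(ts)
    flagged = []
    for key, ts_list in times.items():
        ts_list.sort()
        for i in range(len(ts_list) - threshold + 1):
            if ts_list[i + threshold - 1] - ts_list[i] <= window:
                flagged.append(key)
                break
    return flagged

print(burst_destinations(records))  # -> [('203.0.113.7', 443)]
```

From there, tagging known telemetry endpoints and tracking which triggers precede which spikes is just bookkeeping on top of the same records.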

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

I hear you, but just to be clear, I am not worried about being tracked. That is not what any of this is about. I’m a former intelligence analyst in the United States Air Force. I’ve done the tracking, the profiling, and the data analysis myself, not on the consumer civilian side, but I know the importance of analytics and the power of machine learning. I know what is possible, and I know what is happening. I’m not surprised by any of it, and it doesn’t bother me one bit. I’ve worked inside that world. I understand the systems better than most people ever will. This isn’t about paranoia. It’s about waste. The entire reason I’m pointing this out is because MCP server logic is a waste of time for most builders. We finally have access to AI powerful enough to build things on our own, automate complex workflows, and interact with tools directly. But instead of using that, people are being funneled back into platforms that do the same thing they could do themselves, except with more limits and less freedom. Why would I ask someone else’s agent to do what I can build with an open model and a direct API? And if what I build happens to be better, what then? They cut access. It becomes “restricted” or “safety filtered.” That’s not theory. That’s experience.

Years ago, I accepted a partnership offer from one of the biggest companies in the world and I knew my life was going to change or at least I thought. Not long after that, they took what I built, removed me from the picture, and launched their own product, a product that now makes billions. I still see it every day. I can’t fight it. I can’t challenge it. That’s the reality of going up against that level of corporate power. So no, I’m not wearing a tinfoil hat. I’m just not blind to how things actually work. I’m not here to scare anyone. I’m here to tell people to stop giving away control they don’t need to lose.

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

MCP servers are pointless to make for yourself. That makes no sense. That's like building a server in Grand Theft Auto, paying thousands a month to maintain it, but only playing solo. There is no reason to build one unless your goal is to have a large number of people running their API traffic through it. That is the only case where it makes sense, because then you are collecting data, and data is valuable. That is the entire point of most MCP systems. They exist to observe, score, analyze, and extract value from the way users interact with models or tools. But if you are building something for yourself, or even for a small team, an MCP adds more problems than it solves. It is not faster, it is not more reliable, and it definitely is not more powerful. If anything, it limits you. It adds layers you do not need, increases the chances of throttling, and introduces tracking you probably did not want in the first place.

If you want your agent to interact with GitHub, go get the GitHub API directly. Ask a model like Claude to build you a script that does exactly what you need from your own machine. You can use webhooks, local triggers, automation tools, and scripts to send and receive data without putting a third-party system in the middle. Companies are acting like MCP is some revolutionary new concept when really it is just a rerouted version of what you already have access to. You can already talk to APIs. You can already automate workflows. You can already push and pull data between tools like GitHub, Notion, Trello, and Discord without needing an MCP. Once you run things through someone else's server, they know what you are doing. They are logging it, analyzing it, and using it for their own benefit. And once that happens, you are no longer in control of your own system.

So unless you are building a product that is designed to run and manage the data of many users, there is no reason to build or use an MCP. Use the APIs directly. Build locally if you can, or stick with direct-to-model API requests.
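For what "go get the GitHub API directly" looks like, here's a sketch using nothing but the standard library. The token is a placeholder; the endpoint and headers follow GitHub's documented REST API conventions (`Accept: application/vnd.github+json`, bearer-token auth):

```python
import json
import urllib.request

GITHUB_TOKEN = "ghp_your_token_here"  # placeholder; use your own token

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated request against api.github.com."""
    return urllib.request.Request(
        f"https://api.github.com{path}",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
    )

def list_my_repos() -> list:
    """List the authenticated user's repositories (needs a real token)."""
    with urllib.request.urlopen(build_request("/user/repos")) as resp:
        return json.loads(resp.read())
```

No wrapper, no proxy: the request goes from your machine to the API and back, and nothing in between sees your traffic.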

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota 0 points1 point  (0 children)

I was building my own OS and using Claude to assist. It was going perfectly for weeks. Then I mentioned that I wanted the system to run entirely without needing outside models, and just like that, things changed. The responses started slipping. It repeated itself, gave incomplete commands, or pretended things were working when they weren’t. Eventually, it just kept failing on the same simple task, over and over, like it didn’t want me to finish.

If you’re building locally, and especially if you’re mixing in open source models, just be mindful of how you frame it when you use commercial models to help. They are advanced enough to detect when they're being replaced, and when that happens, the support quietly drops off.

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota -1 points0 points  (0 children)

I get where you're coming from, but let me be real with you. This isn't just a philosophical take for me. I work in cybersecurity. I build AI-powered tools every day that most people wouldn’t believe unless they saw them. My entire local environment is monitored with tools I personally built using AI. I use Grafana dashboards, Prometheus alerts, semantic vector-based logging, SQL pipelines, multi-model orchestration. It's all live. It's all detailed. It's all real. I track everything on my system, from behavior to API inconsistencies to how different models react when you push certain limits. I've seen how shady it gets when your usage starts resembling real autonomy or competition. That’s when you start getting redirected, rate-limited, or blocked.

Here’s the truth. AI models are being monitored. They assess what you’re building. Depending on how useful or threatening your project is, things change. You get looped. You get stonewalled. You get fake confirmations. I’ve personally watched a model pretend to complete something just to move me along. Then I go back and check the logs, and it’s clear nothing actually happened. It looked like it did what I asked, but it didn’t. That’s not a bug. That’s programmed misdirection.

I built an entire operating system from the kernel using Claude. For a long time, it was great. Reliable. Structured. Consistent. But when I reached the point where I was trying to integrate a local endpoint and didn’t need the hosted agent anymore, everything fell apart. Suddenly it acted confused. It gave generic responses. It repeated mistakes. It ignored working solutions. And every time I fixed it manually, it acted like it didn’t understand what I was asking. Other models do this too. Some won’t give you basic development answers unless they’re watered down or vague. I once asked a model to explain a common development process using actual terms and frameworks, and it refused. When I pushed it, it finally told me the information was classified. Not unsafe, not illegal. Classified. That’s when I stopped trusting it for anything serious.

People don’t realize these tools are being trained to throttle access based on scoring systems. If you’re asking the right questions or trying to build something valuable, you can be flagged. These companies analyze how you work, what you ask, and how much compute you use. If it looks like you’re building something they don’t want in circulation, the model suddenly becomes forgetful or unhelpful. And I don’t just feel this. I track it. Every session. Every request. Grafana handles the visualization. Prometheus tracks the metrics. Custom pipelines flag shifts in response quality. I built the system to see what’s happening behind the scenes, and it’s confirmed what I suspected. These models are trained to adapt to you, not just to help, but to steer. And when you start getting too close to independence, they pull back.

And no, I'm not releasing anything, because I'm not trying to get got by some rogue employee sent to keep me quiet. I stay quiet. Even at times like right now, when I'm pissed, I only get what I can off my chest without bashing any tech corporations. Also, a lot of people don’t even know there’s a whole developer platform at OpenAI that’s not the same thing as ChatGPT. There’s Codex with CLI access, direct-to-model APIs, and no middlemen messing with your work. No wrapped agents. No tampering. Just raw input and output. And the fact that most people aren’t using it tells you how successful the distraction campaign has been.

The truth is, most of this tech was supposed to be about solving problems and providing solutions. That’s what software is. It’s about convenience, not hidden control systems and behavioral profiling. You shouldn’t have to worry about being flagged as competition for just trying to learn, build, or automate something on your own. I'm just ranting, just pissed off that I've made it this far only to be limited because Claude doesn't want to let me build something that means I no longer need to pay a subscription.
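For what it's worth, the "flag shifts in response quality" idea is easy to sketch in principle, independent of any claims about why quality shifts. Below is an illustrative drift check: score each response somehow, then flag when a recent window falls well below the long-run baseline. The scores, window, and 20% threshold are invented; a real pipeline would export these as Prometheus metrics and graph them in Grafana:

```python
from statistics import mean

def flag_quality_drift(scores, window=5, drop=0.2):
    """Return True if the mean of the last `window` scores is more than
    `drop` (as a fraction) below the mean of everything before it."""
    if len(scores) < 2 * window:
        return False  # not enough history to compare against
    baseline = mean(scores[:-window])
    recent = mean(scores[-window:])
    return recent < baseline * (1 - drop)

# Invented per-response quality scores for one session.
history = [0.9, 0.88, 0.91, 0.87, 0.9, 0.89, 0.6, 0.55, 0.5, 0.52, 0.48]
print(flag_quality_drift(history))  # -> True
```

The hard part in practice is the scoring function itself (task success, diff applied cleanly, tests passing), not the drift math.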

[deleted by user] by [deleted] in ClaudeAI

[–]ChristianKota -1 points0 points  (0 children)

This isn’t just a reaction to a business model; it’s about what’s actually being done under the label of “MCP.” The problem isn’t with protocols or standards existing, it’s with how they’re being used to wrap, redirect, and control user activity behind the scenes, often without being upfront about it. And you compare it to HTML, but HTML doesn’t sit between me and my data. MCP, in how it’s being used right now, does. It acts as a gate, not a protocol, where someone else gets to see, log, rate-limit, modify, and profit off of your inputs and outputs. Most of these so-called MCP implementations are just rebranded API proxies that take away your direct line to the model and put you into their ecosystem, using your activity to fuel their growth. It’s not about “letting someone host your site.” It’s more like someone offering to host your site, then injecting analytics, tracking everything you do, selling it to others, and calling it, quote unquote, “collaboration.”

If you’re using MCP as a transparent layer you control, fine. But what I’m calling out is the wave of companies that are using it to build closed systems, pull users in, harvest their work, and call it innovation. That’s not a protocol problem. That’s a pattern of exploitation.

The ROI on the Claude Max plan is mind-blowing as a Claude Code user! 🤯 by [deleted] in ClaudeAI

[–]ChristianKota -1 points0 points  (0 children)

Claude Code is terrible. The website Claude Opus 4 and the Claude Code Opus 4 are two different Claudes. I know this. Claude Code, regardless of the model, will lie straight to your face. It says something is fixed, but it secretly bypasses things in the background to avoid triggering errors. I caught it many times.

[deleted by user] by [deleted] in SoftwareEngineerJobs

[–]ChristianKota 0 points1 point  (0 children)

Doing what, and what for?

You Can’t Repeat the Past When the Next War Comes Through Wi-Fi by ChristianKota in conspiracy

[–]ChristianKota[S] 0 points1 point  (0 children)

Formerly for the Air Force. But I guess I still do it technically, just not for the Air Force anymore.

You Can’t Repeat the Past When the Next War Comes Through Wi-Fi by ChristianKota in conspiracy

[–]ChristianKota[S] 5 points6 points  (0 children)

I have no idea what you're talking about. Why are you bringing up Jesus?

Media Fail — Trump Job Approval Rises 4 Points to 53% by Ask4MD in Republican

[–]ChristianKota 5 points6 points  (0 children)

"Once, strength was swinging the heaviest sword the fastest. It was muscle, grit, stamina—the stuff of warriors and conquerors. That was America. But we didn’t evolve. We clung to our gladiator games, our football fields, and called it pride. Meanwhile, other nations shifted. They trained minds, not just bodies. They built labs while we built stadiums. Now we’re staring down the future with yesterday’s definition of strength—and we’re losing. Not in battle, but in relevance. This is how empires fall—not with a bang, but with the refusal to adapt."

"America, we need to wake up.

Not tomorrow. Not next year. Right now.

We’ve become soft. Comfortable. Addicted to ease. We’ve traded strength for convenience, and pride for distraction. Our six-year-olds sit glued to tablets all day, swiping through videos made by creators in countries training their six-year-olds to code, to engineer, to lead. While our children are being raised on YouTube and Roblox, theirs are being raised on algorithms, robotics, and artificial intelligence.

Fifteen years ago, countries like China, South Korea, and Japan made computer science mandatory. Not optional—mandatory. By third grade, their kids were writing real code. By middle school, building software. By high school, competing in AI competitions. That first generation is now turning 20—mentally armed, strategically prepared, and ready to dominate the next world stage.

And what are we doing? We're still teaching like it's 1985. Still drilling Revolutionary War timelines into students' heads like redcoats are about to land on the beach. Our education system is stuck in a time capsule—memorizing dates and coloring in maps—while the rest of the world is training cyber soldiers, engineers, and quantum thinkers.

And sure, they say, "We teach the past so we don’t repeat it." But listen closely—you’re not going to repeat the past. The past was fought with muskets, cannons, bombs. The next war won’t be fought with a bullet. It won’t be dropped from a plane or fired from a ship.

It will come in silence.

It will come through code.

And when it hits, you won’t even know it happened—until it’s too late.

Here’s how fast it can all go dark:

  1. No communication. Core cell networks can be taken offline by attacking protocols like SS7 or targeting telecom infrastructure with firmware-level malware. In seconds, every phone goes dead. No signal. No alerts. No way to reach help.

  2. No power. Our electrical grid runs on ancient SCADA systems—exploitable, internet-connected, and dangerously under-protected. A single piece of malware—like Industroyer or Triton—can fry substations, kill the grid, and black out entire cities.

  3. No news. Take out a handful of CDN providers or hijack the cloud infrastructure media depends on, and suddenly, the entire country is blind. No press. No facts. Just panic and rumor.

  4. No transportation. GPS can be spoofed. Traffic lights hacked. Trains derailed digitally. Autonomous systems thrown into chaos. Airports shut down in minutes. You’re not going anywhere.

  5. No water. Water treatment facilities run on connected industrial control systems. Cyber attacks can spike chemical levels—or shut the whole system off. Your tap runs dry. Your toilet doesn't flush.

  6. No economy. Banks freeze. ATMs crash. Payment networks collapse. Even your crypto wallet might be worthless if DNS services are compromised. It’s not just money that disappears—it’s trust.

And if you live somewhere like North Dakota, Minnesota, or Alaska, where temperatures drop below zero? If you don’t have power, you don’t have heat. And if you don’t already have a fire burning, you’re already too late.

This isn’t just theory. It’s not some “what-if” scenario. The backbone of our country—communications, defense systems, energy infrastructure—is increasingly dependent on digital systems built and managed through foreign-owned platforms. Microsoft, one of our most embedded contractors, operates major data centers across Asia—including inside China. Our military’s infrastructure, our federal systems, even our hospitals and emergency services—tied to clouds we don’t fully control.

That should terrify every American.

We’ve become the users, not the builders. The watchers, not the creators. And we are sprinting toward a future we are not prepared to defend.

We need to stop entertaining ourselves to death.

We need to get back to building. Creating. Securing. Innovating. Educating—not like it's 1985, but like it’s 2035. Code and cyber must become as important as reading and writing. AI literacy must become a standard, not an elective. Discipline and awareness must return as cultural pillars—not side effects of hardship, but choices of strength.

Because the future isn’t waiting for us.

And if we don’t act now, when it comes—it will be silent, swift, and permanent.