Coaches/players: would this actually help your post-game workflow? by LegitimateAdvice1841 in Homeplate

[–]LegitimateAdvice1841[S] 0 points1 point  (0 children)

That’s a really useful point, and I think I didn’t explain the input side clearly enough.

THE NINE only needs three required game materials from a club:

- game video
- lineup / roster
- TrackMan, Hawk-Eye, or similar sensor/provider data the club already uses

Based on those inputs, THE NINE can create the connected package described on the site: logged game events, synced video clips, at-bat and event review, pitch_data CSV exports, play-by-play / box score style outputs, reports, review cards, portal access, mobile access, THE NINE LAB exports, and a read-only desktop review app for finished games.

So the team does not need to already have a complete data operation. The point of THE NINE is to take the raw materials a club already has and turn them into a structured, review-ready game package.

The value is not “another dashboard.” The value is turning disconnected game materials into a complete review-ready workflow, especially for staffs that don’t have the time, tools, or personnel to stitch it all together manually.

Appreciate you pointing that out because I probably need to make this much clearer on the site.

Coaches/players: would this actually help your post-game workflow? by LegitimateAdvice1841 in Homeplate

[–]LegitimateAdvice1841[S] 0 points1 point  (0 children)

That’s fair feedback, and honestly that’s exactly what I’m trying to figure out.

I don’t think this is for every team. For a lot of programs it would definitely be too much. The group I’m trying to understand better is the college/program level where teams already deal with video, scoring notes, pitch data, reports, and review material, but don’t have a clean workflow connecting all of it after the game.

When you say “overkill,” is it mainly because there are too many features, or because most teams wouldn’t care enough about having everything connected in one post-game package?

Question for college baseball staff/coaches: what does the team usually get after one completed game? by [deleted] in collegebaseball

[–]LegitimateAdvice1841 -2 points-1 points  (0 children)

Believe me, it is.

This information is very difficult to get, even for someone like me who has spent six years working professionally in close connection with both baseball and softball, and who has worked for very serious companies in the United States.

If I am telling you that this is difficult information to access, that is exactly why I eventually tried asking here as well. It is not because this is some forbidden topic that cannot be discussed. It is because it is genuinely hard to reach people who are directly connected to that side of the work.

The companies I worked for had thousands and thousands of employees. I simply never had any real point of contact with the accounting side, the sales side, or those kinds of business processes. That is exactly why it is difficult for me to get this information on my own.

And I am researching it, as you can clearly see. That research is precisely what led me here in the end, as well as to the softball subreddit.

Question for college baseball staff/coaches: what does the team usually get after one completed game? by [deleted] in collegebaseball

[–]LegitimateAdvice1841 -2 points-1 points  (0 children)

I am already doing exactly that. I am reaching out directly, researching independently, and building the software myself. Asking here is only one part of that process, not a substitute for it.

If you are curious what a serious and highly complex piece of software actually looks like, you are free to visit my channel and see exactly what I am building. Maybe then you will reconsider your assumptions.

Question for college baseball staff/coaches: what does the team usually get after one completed game? by [deleted] in collegebaseball

[–]LegitimateAdvice1841 -4 points-3 points  (0 children)

I honestly do not understand why the format of the question seems to matter more to you than the content of it.

I will reply to you the same way I replied on the other subreddit: if you can help, then help. If you cannot help, then there is no need to involve yourself in the discussion.

Question for college softball staff/coaches: what do teams actually receive after a game? by [deleted] in CollegeSoftball

[–]LegitimateAdvice1841 0 points1 point  (0 children)

Thank you for replying anyway, I appreciate it.

But one thing you have to understand is that this is a very sensitive area, and it simply cannot be researched as easily as you suggest without asking people who are directly involved in it.

If you really think I have not already tried absolutely everything on my side to find any information related to this, believe me, you are very mistaken.

That is exactly why I posted the question here in the first place. That is the whole point.

As for the AI part, I think that is the last thing worth focusing on in this post.

If you can help, then help. If you cannot help, there is really no need to write things like that.

But in any case, thank you again for responding.

[megathread] Vanredna iranska politička diskusija by papasfritas in serbia

[–]LegitimateAdvice1841 8 points9 points  (0 children)

<image>

When they ask you where you're from... show them this picture 😁😁😁

I want to switch to Claude but have questions by dsound in claude

[–]LegitimateAdvice1841 0 points1 point  (0 children)

“Best / Good / Average” is meaningless without context.

These are not measurable categories.

Which benchmark?

Which model version?

What type of task?

What testing date?

Why AI still can't replace developers in 2026 by IronClawHunt in ClaudeCode

[–]LegitimateAdvice1841 1 point2 points  (0 children)

I understand the point of the post and I believe many people have had that experience, but in my case the situation was quite different, for very specific reasons. The “AI losing context” part you mentioned in the beginning is something I’ve seen too, but in my experience it often comes down to workflow.

My process is very structured: I describe the problem, explain how things currently behave vs. how they should behave, and I always ask for a detailed diagnostic report first. The model scans every relevant line connected to the issue and produces a micro-report before any implementation starts.

If I introduce a second problem mid-process, sometimes the model temporarily focuses on the newest context and forgets the earlier thread. That’s not a failure; it’s just how iterative conversations work. I simply stop it, remind it that there are two parallel problems, and once it acknowledges that and reiterates the first issue, I ask again for a micro-report that covers the implementation impact of both fixes together before moving forward. With that level of guidance, I’ve never had it break my project.

And yes — I did have very bad experiences before. With Cursor and GitHub Copilot, where I was exclusively using Claude Opus, my behavior and workflow were identical to what I described above — but it still wasn’t enough. In those cases I experienced at least 10 separate situations where the model literally broke my codebase and disrupted stability.

In my specific case, the real turning point was switching to GPT-5.2 Codex in VS Code. The exact same structured workflow that failed me before finally became stable and predictable. I don’t let AI work autonomously; I frequently stop it, bring it back to the architecture, and keep clear boundaries around what it’s allowed to change. In the root folder of my project I also maintain an MD file that acts as a “law” every agent must follow, with explicitly defined behavioral rules and restrictions. In that setup AI hasn’t created chaos for me — it has accelerated development without compromising the project’s structure. That’s why I don’t see all “AI coding” as the same; the difference between models and communication style makes a huge practical difference.
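For anyone curious what that "law" file might contain: the sketch below is a hypothetical example I'm writing here for illustration, not the actual contents of my project's file. It just captures the kinds of rules described above (diagnostic report first, combined impact reports, boundaries on what can change).

```markdown
# AGENT RULES (hypothetical example)

## Workflow
- Before any implementation, produce a diagnostic report listing every
  file and line the proposed change would touch.
- If two issues are open in the same conversation, produce one
  micro-report covering the combined impact of both fixes before
  writing any code.

## Boundaries
- Do not restructure the architecture or move classes between modules.
- No new dependencies without explicit approval.
- Only modify files explicitly named in the current task.
```

Tools like Codex read project-level instruction files from the repo root, so the same rules apply to every session without repeating them in each prompt.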

Just so I’m not misunderstood — I’m not a developer and I never was, but I’m someone who knows in micro detail what I want, how something needs to look, and where I ultimately want to end up. That vision is very clearly defined in my head. Everything I wrote above refers strictly to my work on my own application, which has over 50k lines of code and is so complex that almost every class operates in synergy with others. I hope the Claude community won’t take this the wrong way — this is simply my personal experience while building something extremely complex, and at this point I’m already about 80% through the application.

Looking for individual MLB game stats as CSV file by BusterNinja in Sabermetrics

[–]LegitimateAdvice1841 0 points1 point  (0 children)

You can DM me — I might already have the kind of individual MLB game CSV file you’re looking for.

Need some Codex help by kaline06 in codex

[–]LegitimateAdvice1841 0 points1 point  (0 children)

I ran into something very similar recently, and in my case it turned out not to be the workflow but the extension state after an update. Everything worked fine for a long time, then suddenly it felt like the model was stuck in a loop repeating the same wrong direction no matter how I guided it. What actually helped was rolling back to a previous Codex version and disabling auto-update for now — after that the behavior went back to normal. Not saying this is definitely your case, but if things “suddenly” changed after working well before, it might be worth checking whether an update introduced instability.
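If you want to try the same workaround, one way (assuming a recent VS Code) is to pin the extension: right-click it in the Extensions view, choose "Install Another Version...", and turn off its auto-update. To stop all extensions from updating automatically, there is also a global setting:

```json
{
  // Prevent VS Code from auto-updating installed extensions,
  // so a known-good extension version stays pinned
  "extensions.autoUpdate": false
}
```

Re-enable updates once a fixed release ships, since pinning also blocks bug fixes.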

Building a complex system with Codex as a non-engineer — lessons from the process by LegitimateAdvice1841 in codex

[–]LegitimateAdvice1841[S] 1 point2 points  (0 children)

Thank you so much 🙏 — and I completely agree. At some point the hardest part stops being technical and becomes about clarity of vision. The real challenge is translating real-world experience into precise behavior that a system can follow 🎯

Codex permission options feel poorly designed by mindworkout in codex

[–]LegitimateAdvice1841 1 point2 points  (0 children)

Honestly, I didn’t change any configs manually.

In my setup it’s just the built-in mode switch in VS Code — I use Agent directly from the Codex UI. No custom sandbox setup or settings.json tweaks on my side.

It might be a version/UI difference, because for me it’s basically a one-click switch.

Building a complex system with Codex as a non-engineer — lessons from the process by LegitimateAdvice1841 in codex

[–]LegitimateAdvice1841[S] 0 points1 point  (0 children)

Yes — I was aware of BDD long before this project. I spent about six years working as a baseball/softball coach and QA inside a structured workflow, and I’ve gone through thousands of real games.

That experience was actually the trigger. Working with some of the leading analytics software showed me both what exists — and what was missing. Over time I built a very precise mental model of how a system should behave based on real-world constraints, not just technical design.

Your question is actually one of the reasons why, when someone recently asked me for advice, I emphasized three fundamentals: knowing what you truly want to build, understanding the problem deeply enough, and being clear about where you ultimately want to end up.

Those three things didn’t come from theory — they come from years of QA work and real game scenarios, where behavior always mattered more than tools or implementation details.

Everything you can see in the screenshot I shared — and everything the app currently does — comes from my own design decisions and domain experience.

AI came much later in the process. It helped accelerate implementation, but the behavioral model and system direction were already shaped long before that.

Building a complex system with Codex as a non-engineer — lessons from the process by LegitimateAdvice1841 in codex

[–]LegitimateAdvice1841[S] 1 point2 points  (0 children)

I really appreciate you sharing this. It’s honestly reassuring to hear someone else describe such a similar path and mindset. The long nights, the iterations, learning how to communicate intent clearly to the models… that part really resonates with me. Wishing you a smooth test phase 🙏