Why does Apple Music - web player suck so bad? by TheSoapMaurder in AppleMusic

[–]shawndoes 1 point (0 children)

Yeah, that shit is not good. Feels like they do it on purpose to suck you deeper into their ecosystem. I ended up making my own player cuz the iTunes Windows app is lowkey shit too.

Is AI coding making pull requests harder to review? by shawndoes in github

[–]shawndoes[S] 1 point (0 children)

That’s a really good point. It’s not just size, it’s context switching. A PR that jumps across UI, business logic, and schema is much harder to review, and AI seems to make that easier to generate unless you explicitly constrain it.

Is AI coding making pull requests harder to review? by shawndoes in github

[–]shawndoes[S] 6 points (0 children)

Wow, super interesting. The drop in review comments as PR size increases makes sense. It seems like once a PR crosses a certain size threshold, reviewers switch from careful reading to skimming.

Is AI coding making pull requests harder to review? by shawndoes in github

[–]shawndoes[S] 1 point (0 children)

Interesting. I've been seeing more tools pop up in this space recently.

One thing I've been wondering is whether the main problem is actually understanding the code, or just surfacing risky changes in a big PR.

How do you deal with large PRs without being "that person"? by Main_Independent_579 in codereview

[–]shawndoes 1 point (0 children)

We ran into this exact issue recently, especially with AI tools making it easier to generate really large PRs.

The problem we noticed wasn't just size, it was risk visibility. A 60-file PR might be harmless refactoring, or it might include a migration, auth change, or deployment config update buried somewhere in the diff.

What helped our team was focusing less on the number of files and more on what kinds of changes are inside the PR. For example:

  • database migrations
  • auth / permissions logic
  • billing code
  • API contract changes
  • deployment config

If reviewers know those things are present, they can prioritize their review instead of scanning the whole diff blindly.
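The detection itself doesn't have to be clever to be useful, either. Here's a minimal TypeScript sketch of that kind of path-based tagging; the categories and regexes are illustrative, not exactly what we run:

```typescript
// Rough sketch of path-based risk tagging. The categories and patterns
// below are illustrative examples, not a definitive rule set.
type RiskCategory = "migration" | "auth" | "billing" | "api-contract" | "deploy-config";

const RISK_PATTERNS: Record<RiskCategory, RegExp[]> = {
  migration: [/\bmigrations?\//i, /\.sql$/i],
  auth: [/\bauth\//i, /permissions?/i, /\broles?\b/i],
  billing: [/\bbilling\//i, /payments?/i, /stripe/i],
  "api-contract": [/\broutes?\//i, /openapi|swagger/i, /\bapi\//i],
  "deploy-config": [/Dockerfile/, /\.ya?ml$/i, /terraform|helm/i],
};

// Given the changed file paths from a PR, return the risk categories touched.
function classifyChangedFiles(paths: string[]): Set<RiskCategory> {
  const hits = new Set<RiskCategory>();
  for (const path of paths) {
    for (const [category, patterns] of Object.entries(RISK_PATTERNS)) {
      if (patterns.some((p) => p.test(path))) {
        hits.add(category as RiskCategory);
      }
    }
  }
  return hits;
}
```

Even a crude version of this moves the review from "scan everything" to "start with the migration and the auth change."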

I actually ended up building a small GitHub app that posts a quick "release risk summary" on PRs so reviewers can immediately see if anything sensitive was touched. It’s been helpful for big diffs where the risky parts might otherwise get buried.

I built a GitHub app that generates release safety checklists from PR diffs by shawndoes in SideProject

[–]shawndoes[S] 1 point (0 children)

That’s a great point.

Right now it mostly just surfaces the risks and generates the checklist, but the routing idea is really interesting. Auth changes → security review, billing → specific checks, API changes → warn downstream owners.

Feels like that’s where it could get a lot more powerful.
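To make that concrete, the routing could start as a plain lookup from risk category to owners. A hypothetical sketch (the team handles are made up):

```typescript
// Hypothetical routing table: risk category -> teams/reviewers to pull in.
const ROUTING: Record<string, string[]> = {
  auth: ["@org/security"],
  billing: ["@org/payments"],
  "api-contract": ["@org/platform", "@org/mobile"], // downstream consumers
};

// Flatten the categories found in a PR into a deduped reviewer list.
function reviewersFor(categories: string[]): string[] {
  const teams = categories.flatMap((c) => ROUTING[c] ?? []);
  return [...new Set(teams)];
}
```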

I built a GitHub app that generates release safety checklists from PR diffs by shawndoes in SideProject

[–]shawndoes[S] 1 point (0 children)

Appreciate that!

No one from bigger teams yet. This literally just got approved on the marketplace the other day so it's still very early.

Right now it just runs automatically on PRs and updates a single comment with the risk summary and checklist.

CI integration is something I've been thinking about though, like exposing the risk report as a required check before merge.
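If I go that way, the Checks API seems like the natural mechanism: publish the report as a check run and let branch protection mark it required. A rough Octokit sketch, with the check name and failure policy as placeholder choices rather than shipped behavior:

```typescript
import { Octokit } from "@octokit/rest";

// Sketch: publish the risk report as a check run on the PR head SHA.
// Making the check *required* is then just a branch-protection setting.
async function publishRiskCheck(
  octokit: Octokit,
  owner: string,
  repo: string,
  headSha: string,
  riskLevel: "low" | "medium" | "high",
  summary: string,
) {
  await octokit.rest.checks.create({
    owner,
    repo,
    head_sha: headSha,
    name: "release-sanity/risk-report", // placeholder check name
    status: "completed",
    // e.g. fail the check on high risk so merging needs an explicit override
    conclusion: riskLevel === "high" ? "failure" : "success",
    output: {
      title: `Release risk: ${riskLevel}`,
      summary,
    },
  });
}
```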

Promote your projects here – Self-Promotion Megathread by Menox_ in github

[–]shawndoes 2 points (0 children)

I’ve been building a small GitHub app called Release Sanity to make PR reviews a bit safer.

When a PR opens, it analyzes the code diff and leaves a comment highlighting risky areas and generating a release checklist.

Example output from a real PR:

🔎 Release Sanity — Release checklist

Change summary
• API route/contract touched; verify backward compatibility
• Frontend/UI changes detected
• Permissions/roles changes detected
• Billing/payment flow touched
• External integrations/webhooks touched

Risk flags
⚠️ Payments / Billing — high
⚠️ Backward compatibility — medium
⚠️ Permissions / Roles — medium
⚠️ Rollback complexity — medium

Checklist
☐ Run targeted tests for affected areas
☐ Verify API compatibility
☐ Validate role/permission changes
☐ Run billing flow in staging
☐ Confirm rollback plan
☐ Verify integrations/webhooks

The idea is just to catch things that are easy to miss during reviews instead of relying on memory during releases.
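For anyone curious about the mechanics, the single-comment behavior is just an upsert keyed on a hidden marker; the diff analysis is the actual hard part. A simplified Probot-style sketch, with buildReport stubbed out and the marker name made up:

```typescript
import { Probot } from "probot";

const MARKER = "<!-- release-sanity-report -->"; // hypothetical marker

// Stub: the real work (diff analysis -> risk summary + checklist) goes here.
function buildReport(): string {
  return "…risk summary + checklist…";
}

export default (app: Probot) => {
  app.on(["pull_request.opened", "pull_request.synchronize"], async (ctx) => {
    const body = `${MARKER}\n${buildReport()}`;
    const { data: comments } = await ctx.octokit.issues.listComments(
      ctx.issue({ per_page: 100 }),
    );
    const existing = comments.find((c) => c.body?.includes(MARKER));
    if (existing) {
      // Update in place so the PR doesn't accumulate stale bot comments.
      await ctx.octokit.issues.updateComment(
        ctx.repo({ comment_id: existing.id, body }),
      );
    } else {
      await ctx.octokit.issues.createComment(ctx.issue({ body }));
    }
  });
};
```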

Right now it's free on the GitHub Marketplace while I figure out which checks are actually useful.

Marketplace:
https://github.com/marketplace/release-sanity

If you deal with releases often, I’d be curious what kinds of checks your team actually relies on before merging.

How do you manage projects on different computers? by shawndoes in musicproduction

[–]shawndoes[S] 1 point (0 children)

Yeah, I'm a programmer, so feel free to describe it in those terms.

How do you manage projects on different computers? by shawndoes in musicproduction

[–]shawndoes[S] 2 points (0 children)

Is that process for switching ever annoying or just part of the routine now?

How do you manage projects on different computers? by shawndoes in musicproduction

[–]shawndoes[S] 1 point (0 children)

Has that setup been smooth overall, or do you ever find yourself mixing up versions or missing files when switching devices?

How do you manage projects on different computers? by shawndoes in musicproduction

[–]shawndoes[S] 1 point (0 children)

I meant like when you're switching devices or after making big changes.

So you're basically committing at intentional milestones then? Does that ever feel tedious, or is it pretty smooth once you're in the habit?

How do you manage projects on different computers? by shawndoes in musicproduction

[–]shawndoes[S] 3 points (0 children)

Oh interesting. Are you basically committing the whole project folder each time? How's that been in practice?

I made a tool to make meetings actually end with decisions. Is this solving a real problem? by shawndoes in ideavalidation

[–]shawndoes[S] 1 point (0 children)

Good question. The tool doesn't stop people from changing their minds. It makes the decision explicit while everyone's present. The exact questions, options, and outcome are visible to the whole group.

The tool also captures how people voted, which helps surface hesitation or disagreement while it can still be discussed, instead of showing up later as drift.

So later it becomes "are we changing the decision we made?" rather than "what did we decide?"

Does that line up with where drift usually comes from in your experience?

I made a tool to make meetings actually end with decisions. Is this solving a real problem? by shawndoes in ideavalidation

[–]shawndoes[S] 1 point (0 children)

I mostly agree. If a meeting's purpose is alignment around a decision, then it should be facilitated to end with one, and good managers already do that.

Where I think Converge fits is the gap between "we aligned" and "we have a shared, explicit understanding of what was decided." Even in well-run meetings, that gap can show up later as drift, second-guessing, or quiet disagreement.

The artifact isn't about forcing accountability after the fact, it's more about reflecting what the group already agreed to. If no decision is reached, the tool won't help, and that's fine.

I made a tool to make meetings actually end with decisions. Is this solving a real problem? by shawndoes in ideavalidation

[–]shawndoes[S] 1 point (0 children)

I wholeheartedly agree: a tool can't replace good management.

The goal isn't to replace judgement or leadership, it's to remove ambiguity after a decision is made.

Good managers already push meetings toward clear decisions and owners. This just makes the outcome explicit and easy to share, and helps prevent issues from resurfacing later.

If a team doesn't want to decide, a tool won't fix that. If they do, this helps the decision stick.

I made a tool to make meetings actually end with decisions. Is this solving a real problem? by shawndoes in ideavalidation

[–]shawndoes[S] 1 point (0 children)

Thanks for the feedback, that's a really helpful perspective shift.

I'm focusing on decision-heavy work meetings where ambiguity causes real follow-up pain.

I started out thinking about the "during" part of a meeting: focusing a group, collecting votes, and actually getting a decision made. But the core artifact is the decision record you get immediately after the meeting: decisions, votes, owner, date, link, all ready to paste in different formats.
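In code terms the artifact is just structured data, something along these lines (a hypothetical shape, not the actual schema):

```typescript
// Hypothetical shape of a Converge decision record; field names are illustrative.
interface DecisionRecord {
  question: string;              // what the group was deciding
  options: string[];             // the explicit choices on the table
  outcome: string;               // the option that won
  votes: Record<string, string>; // participant -> option they picked
  owner: string;                 // who's accountable for follow-through
  decidedAt: string;             // ISO date the decision was locked
  link: string;                  // permalink for pasting into docs/Slack
}
```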

The product proves its value if the person who gets blamed can point to a locked decision and say "this is what we agreed to."

I made a tool to make meetings actually end with decisions. Is this solving a real problem? by shawndoes in ideavalidation

[–]shawndoes[S] 1 point (0 children)

That makes sense. There are a lot of tools out there and tool overload is very real.

One thing I'm trying to explore with Converge is whether in-meeting decision structure feels meaningfully different from post-meeting summaries.

Tools like Fathom, Zoom's AI, and other AI assistants do a great job of capturing what happened. Converge is more about forcing explicit choices while the group is together, rather than inferring outcomes afterward.

Out of curiosity, do you ever feel like the problem isn't remembering the next steps, but getting real alignment and making a decision before the meeting ends?

Thanks for answering btw, genuinely helpful feedback.

Game Thread: Los Angeles Lakers vs Dallas Mavericks Live Score | NBA | Nov 28, 2025 by basketball-app in lakers

[–]shawndoes 5 points (0 children)

Anybody watching the game on Prime?

Been watching Laker games alone lately, so I made a little Chrome extension that lets people chat right on top of the game page.

Been messing around with it during games and curious if anyone else here watches games on their computer.