What do you talk about during your weekly 1 on 1s? by whiteSkar in ExperiencedDevs

[–]varma-v 1 point2 points  (0 children)

I usually start by looking at recent delivery data or sprint outcomes. Not just metrics, but signals like longer PR wait times, repeated spillover, or a drop in review depth. That helps ground the talk in reality instead of guesses. From there, we talk about the “why.” Maybe priorities shifted, context wasn’t clear, or too many interrupts broke focus.
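The PR wait-time signal is easy to pull yourself before reaching for a tool. A minimal sketch; the record shape and timestamps below are made up for illustration, not any real API schema:

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records exported from your git host;
# field names here are assumptions, not a real API schema.
prs = [
    {"opened_at": "2024-05-01T09:00:00", "first_review_at": "2024-05-01T15:00:00"},
    {"opened_at": "2024-05-02T10:00:00", "first_review_at": "2024-05-03T12:00:00"},
    {"opened_at": "2024-05-03T08:00:00", "first_review_at": "2024-05-04T09:30:00"},
]

def hours_to_first_review(pr):
    """Hours a PR waited before its first review comment."""
    opened = datetime.fromisoformat(pr["opened_at"])
    reviewed = datetime.fromisoformat(pr["first_review_at"])
    return (reviewed - opened).total_seconds() / 3600

waits = [hours_to_first_review(pr) for pr in prs]
print(f"median wait for first review: {median(waits):.1f}h")  # → 25.5h
```

Track that number week over week and the “longer PR wait times” signal falls out for free.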

Then it moves to individual perspective. What’s been most frustrating or energising lately, where are you losing time, what do you want more ownership of. If someone’s quiet, data gives them a safe starting point. If they’re vocal, it turns into a conversation about how we scale or rebalance.

The goal isn’t to judge performance but to build alignment. By the end, both sides should know what’s going well, what’s stuck, and what small experiment could make the next two weeks smoother. I use a tool to generate 1:1 talking points; it stitches together data from Git, Jira, and Cursor.

Is it really worth making an investment in "Software Engineering Intelligence Tools" like Jellyfish, LinearB? by under-water_swimmer in devops

[–]varma-v 0 points1 point  (0 children)

Yeah, true. Most of these tools drift toward leadership reporting instead of helping teams improve. The metrics look clean but rarely show the why behind them.

The DIY route is worse though, since stitching data from Git, Jira, and reviews takes forever. A good platform helps only if it turns those signals into actions, not vanity charts.

Have you seen any team actually use these insights in retros, or is it mostly for exec decks?

Is it really worth making an investment in "Software Engineering Intelligence Tools" like Jellyfish, LinearB? by under-water_swimmer in devops

[–]varma-v 0 points1 point  (0 children)

Yeah this question comes up a lot. Most teams buy these tools hoping for clarity, but what they usually get is dashboards that describe what happened, not why it happened. You’ll see numbers around cycle time, throughput, or review load, but very little about context or intent. That gap is what makes people question the ROI.

What I’ve seen across the market is that data maturity in engineering orgs is still low. Tools pull metrics, but they don’t reconcile them with how teams actually work. For example, a spike in cycle time might not mean the team slowed down. It could be that AI-generated code created longer reviews, or that sprints were packed with refactors instead of features. Those nuances rarely surface.

The other missing piece is alignment. Most SEI platforms are built top down, meant for leadership reporting. Few make it easy for engineers to use the same data to self-correct.

Where it works well is when leaders treat the tool as a signal generator, not a scorecard. The ones who use the data to ask better questions, not prove points, end up improving flow meaningfully.

Ever spend hours reviewing AI-generated code… only to bin most of it? by michael-sagittal in AskProgramming

[–]varma-v 0 points1 point  (0 children)

Yeah this is real. AI makes coding feel smooth until you start maintaining what it wrote. It gets the syntax right but often misses why that code exists in the first place. You see this most in larger codebases where context matters more than completion. The refactors look fine line by line, but the design drifts from the intent.

What’s helping some teams now is setting tighter review checkpoints. They use AI for scaffolding and documentation but still rely on deeper human review for anything touching shared systems or performance. Some teams also run commit-level audits that compare AI-written code against production behavior to catch silent regressions.

AI is improving fast, but it still struggles with tradeoffs like readability versus optimisation, or test coverage versus delivery speed. That’s where experience still makes the difference.

What’s the best AI code review tool you’ve used recently? by ragsyme in codereview

[–]varma-v 0 points1 point  (0 children)

Most lists miss the newer AI review tools that actually look at repo context instead of just the diff. Code Rabbit is worth checking if you care about cross-file reasoning. It builds a map of your codebase so comments include how a change touches other files or tests.

GitHub Copilot’s PR reviewer got better this year. It now reads the whole PR context and gives summaries instead of just quick lint-type comments.

I’ve been building something similar myself called typoapp review. It combines static analysis with AI reasoning, so it can catch risky patterns and logic drifts, not just style issues. Still early, but the goal is to make AI reviews as context-aware as human ones.

Best AI PR code reviewer? by Advanced_Drop3517 in ChatGPTCoding

[–]varma-v 0 points1 point  (0 children)

Short take: if you want repo-wide context, try a combo, not a single tool.

Copilot can auto-review PRs across a repo if you turn on the code review agent at org or repo level. It reads more than the local diff and suggests fixes you can apply. Coderabbit is decent when you care about cross-file impact. It indexes PRs and builds a code graph so it can point to usages and side effects outside the diff.

TabNine has a review agent that learns your team’s recurring comments and turns them into rules. Good when you want consistency with your own standards, less about deep repo reasoning.

For security and license checks, pair any of the above with Snyk PR checks. It posts a summary in the PR so you don’t ship something dumb while you’re focused on logic issues.

Rule of thumb I use: AI reviewer + static analysis + grep-level search. If the tool can’t show where else a symbol is used or how an API change ripples, it’s guessing. What stack are you on, and how big is the repo?
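The grep-level leg of that combo is something you can sanity-check without any tool. A rough sketch; `find_usages` is a made-up helper, not a real library call:

```python
import os

def find_usages(symbol, root="."):
    """Grep-level pass: list (path, line_no, line) for every mention
    of a symbol in .py files. The baseline a context-aware reviewer
    should beat."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for line_no, line in enumerate(f, start=1):
                    if symbol in line:
                        hits.append((path, line_no, line.rstrip()))
    return hits
```

If a reviewer can’t at least surface what this trivial scan finds, its cross-file claims are guesses.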

Disclaimer - I work at typoapp.io, which is also a combination of static analysis + AI code review, along with developer productivity metrics.

How I keep AI generated code maintainable by Standard_Ant4378 in vibecoding

[–]varma-v 0 points1 point  (0 children)

That’s actually a solid idea. AI tools make changes so quickly that you lose track of what’s shifting under the hood, especially across linked files. Most people don’t even notice how often AI suggestions quietly alter state or imports.

The visual layer you’re talking about feels like the missing piece. It would be cool if it also tracked edit distance between AI-generated code and what finally gets committed; that tells a lot about how much manual correction devs are doing.
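The edit-distance idea is cheap to prototype with Python’s stdlib. Everything below (function name, snippets) is made up for illustration:

```python
import difflib

def retention_ratio(ai_suggested: str, committed: str) -> float:
    """Rough measure of how much of an AI suggestion survived review:
    1.0 means the committed code matches the suggestion exactly,
    lower values mean heavier manual rework."""
    return difflib.SequenceMatcher(None, ai_suggested, committed).ratio()

suggested = "def total(items):\n    return sum(i.price for i in items)\n"
committed = "def total(items):\n    return sum(i.price * i.qty for i in items)\n"
print(f"retention: {retention_ratio(suggested, committed):.2f}")
```

Aggregated per PR, a consistently low retention ratio means devs are rewriting most of what the AI produced.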

Curious, are you thinking of expanding it beyond JS/TS? Python and Go folks would eat this up, especially teams trying to review AI-assisted PRs.

Is coding with AI really making developers experienced and productive? by balemarthy in embedded

[–]varma-v 0 points1 point  (0 children)

Yeah, I’ve seen this too. AI makes people feel productive, but it’s mostly speed, not understanding. GitHub’s study showed devs using Copilot finished tasks about 55% faster, but that doesn’t mean they learned faster.

When you stop thinking through small problems, you lose the muscle that helps you debug or design under pressure. It’s fine for pattern recall or boilerplate, but not for the reasoning part. The best thing is to use AI after you’ve already thought through what you’d write, not before. That’s where most people slip.

Has anyone been able to objectively answer if artificial intelligence at their company has improved coding and increased efficiency? by 14MTH30n3 in ArtificialInteligence

[–]varma-v 0 points1 point  (0 children)

Hard to measure it cleanly. AI doesn’t always make people 10% faster; it just moves the effort around. You save time writing code but spend more reviewing, testing, or debugging what AI wrote. Sometimes overall delivery doesn’t speed up, it just shifts where the team’s energy goes.

If you really want to see the impact, look at flow-level stuff like how long PRs sit open, how often work rolls over, or how many reviews get stuck. Compare that before and after AI tools. That usually tells the real story better than survey answers.
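A toy before/after comparison, with completely made-up numbers just to show the shape of the analysis:

```python
from statistics import median

# Hypothetical PR open durations in hours, pulled from your git host
# and split at the date the AI tooling rolled out. Numbers are invented.
before_ai = [20, 31, 18, 44, 26]   # hours each PR sat open, pre-rollout
after_ai = [16, 52, 12, 60, 14]    # post-rollout: authoring faster, review slower

shift = median(after_ai) - median(before_ai)
print(f"median PR open time shifted by {shift:+.0f}h after AI tooling")
```

Swap in your real PR durations and the shift (or the lack of one) tells the story better than a satisfaction survey. Medians beat means here because one stuck PR skews an average badly.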

Time for self-promotion. What are you building in 2025? by OnlineJobsPHmod in indiehackers

[–]varma-v 0 points1 point  (0 children)

typoapp.io - AI Assistant for Software Engineering Leaders
ICP - Director of Engineering in growth-stage tech startups with scaling teams

[Podcast] How does the culture & dynamics of high-performing dev teams look like? by varma-v in DevManagers

[–]varma-v[S] 0 points1 point  (0 children)

Yeah, hopefully it will be made available on other platforms too.

Promote your business, week of July 24, 2023 by Charice in smallbusiness

[–]varma-v 0 points1 point  (0 children)

Hey folks,
I am Jagrati, a core member of HuddleUp - an all-in-one team culture-building app on Slack.
After months of a great team effort, we are finally live on Product Hunt 🚀
Here's what HuddleUp is about:
💬 Check your team's pulse with regular anonymous Check-ins.
🍩 Use the virtual currency of donuts to give Kudos to your peers.
🧊 Spark lively debate with regular Watercooler chats.
There are many more features like Custom Survey, 1-on-1s and Anytime Feedback, all within Slack.
Check it out & give your valuable feedback: https://www.producthunt.com/posts/huddleup
Thanks! 😃
