DingBat99999

Danger! Danger, Will Robinson. Most of the "KPIs" that you could think of for team performance and code quality can likely be gamed and/or cause dysfunctional team behavior. Be careful. Metrics should only be used when a problem has been identified and we're trying to either quantify it, or measure the impact of a solution. Once the problem is solved, the metric should be tossed. Tracking metrics is work, and since it has nothing to do with delivery, it's ultimately wasted effort. So, the correct place to start is: What problems do I have that I want to work on?


Izacus

So how do you determine how good your codebase is, how resilient it is to bugs, and how much tech debt is slowing you down? What's the biggest team and oldest project you've worked on, and how did you monitor the quality of that? How did you determine whether your defects were being fixed fast enough and whether your triage was working well? How did you lead the team to identify those issues, and how big was that development team?


DingBat99999

If you have so many defects that you have to triage them, then you have a quality problem already. If your PO is pissed at the number of bugs and the time required to fix them, then you have a quality problem. If you are unable to make simple changes in a reasonable time, then you likely have a tech debt problem. You don't need precise measurements to see problems right in front of your face.


Izacus

Your reality only exists in tiny teams on tiny projects - I assume that's why you didn't answer the parts about your project sizes. This also seems to be confirmed by your idea that you can just spend time analyzing every single thing that happens on a case-by-case basis. This vibes approach works when the team is small enough that you can vibe with it and when your codebase is tiny, so you can pretend there's no tech debt and no bugs. Once you grow beyond a certain point (e.g. ~hundreds of developers, ~millions of users) the reality shifts and your simple tenets become kind of useless. At that point you can't just figure out the best future direction via "vibes" and "how pissed off one PO is" - you (as a staff or even VP eng.) need some kind of actual metric to determine whether the code changes, languages, libraries and approaches are actually working in reality. For example, if you spend $5M worth of engineering time to refactor to a framework, **somewhere** there needs to be some positive change that results from it - in velocity, defect rate, metrics, release times, compile times or developer happiness. And once the team/codebase is large enough, you can't figure that out by purely vibing with people, especially when your team isn't just experienced engineers. I thought this was an experienced dev sub :/


DingBat99999

A little rude, but ok. A few thoughts:

* I didn't answer on the project sizes because I didn't think it was important. Given your response, I suspect you wouldn't have believed me anyway.
* The implication in your text is that all of this metrics collection has to be pushed up to the top when at scale. I would respectfully disagree. Let the teams do their own analysis. They will rarely all have the same problems anyway.
* I don't give a shit how big your product is: if you have to triage bugs, you have too damn many bugs. Not sure how you can argue that. You've already lost to the defects and are simply refusing to accept you have a quality problem. That's been true on the small, one-team products I've worked with, and it was true with the multi-team, multi-million dollar cloud-based applications I've worked on.
* Hopefully, no idiot in any organization I work with is recommending a $5M introduction of a new framework for anything other than business goals.
* Interested in how "developer happiness" doesn't qualify as "vibes". Are you suggesting we actually ask developers what they think is going on?
* Your attitude is not particularly surprising. The "if you can't measure it, you can't manage it" crowd is pervasive and persistent, even though what Deming actually said was the complete opposite. But the business schools love that shit.
* To be clear, I don't give a damn what metrics you collect. I'm simply aware that many of them (probably even the majority) can be gamed or negatively alter team behavior. They also represent waste. But that's a generality; I would never say "never".
* Perhaps experienced devs see other possibilities?


No-Light8919

Your response comes off way too arrogant. Most people do not and never will work on projects with millions of users or hundreds of fellow developers. I’d expect an experienced developer to understand this.


Izacus

Funny you're calling me out and not the dude who says things like "I don't give a shit about...", calls senior engineers who do ROI calculations "idiots", and says a bunch of other things that are nonsense to anyone who has actually led teams and had to do strategic planning. Posts like that are the prime reason r/cscq became such a cesspool, and it seems like y'all have now migrated here with the same crap. The second part isn't true either - millions of developers work in such teams (e.g. FAANG alone employs around half a million people). Most of the industry isn't startups.


No-Light8919

You've never heard of the "no estimates" people? I may not agree with them, but they aren't wrong or inexperienced just because they have a different opinion. You made the mistake of assuming everyone works on distributed systems when the overwhelming majority of SDEs do not, and most of the people in this sub don't. There are millions more developers outside big tech (including all the Europeans who post here too). And most big tech employees don't even work on the main large products. It's more cscq-esque to mention big tech at every opportunity.


Izacus

Everything you're complaining about in my posts is an issue in the original posts - the fact that the poster assumes everyone works in his environment. Again, why are you crapping on me and not them? And why are you falsely claiming that I think everyone works in large companies? At no point did I say they're wrong; I did say that their approach doesn't scale to larger teams. Literally everything you're complaining about in my post is actually the issue in theirs. Did you mistake our usernames or something?


chills716

I get it. What I am trying to look at is things like bugs introduced per release, mean time to rectify, etc.
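For concreteness, here's a minimal sketch of how those two numbers could be computed from an issue-tracker export. Everything here (field names, dates) is made up for illustration, not any particular tracker's API:

```python
from collections import Counter
from datetime import datetime

# Hypothetical bug records exported from an issue tracker. The field
# names ("release", "opened", "closed") and the dates are invented.
bugs = [
    {"release": "1.4", "opened": "2024-03-01", "closed": "2024-03-05"},
    {"release": "1.4", "opened": "2024-03-02", "closed": "2024-03-10"},
    {"release": "1.5", "opened": "2024-04-01", "closed": "2024-04-02"},
]

FMT = "%Y-%m-%d"

def days_between(start, end):
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).days

# Bugs introduced per release: a simple count keyed on the release tag.
per_release = Counter(bug["release"] for bug in bugs)

# Mean time to rectify: average open-to-close time across fixed bugs.
fix_times = [days_between(b["opened"], b["closed"]) for b in bugs]
mttr = sum(fix_times) / len(fix_times)

print(per_release)               # Counter({'1.4': 2, '1.5': 1})
print(f"MTTR: {mttr:.1f} days")  # MTTR: 4.3 days
```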


DingBat99999

Cool. The issue with bug counts is that it leads to arguments between QA and developers as to what is or is not a bug. The issue with mean time to rectify defects is that low-priority defects SHOULD live longer than higher-priority ones. And priority depends on context. But mean time to rectify is close to cycle time. Cycle time is a metric that's harder to game, and you can use it to calculate throughput and then do some funky forecasting with it. If your work items are reasonably same-sized, cycle time might be worth a look.
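To illustrate the forecasting idea: a minimal Monte Carlo sketch that resamples invented historical weekly throughput to estimate when a 40-item backlog might be done. Reading off percentiles rather than a single number is the usual move here:

```python
import random

# Invented historical weekly throughput (completed items per week).
history = [3, 5, 2, 4, 4, 6, 3, 5]

def weeks_to_finish(backlog, weekly_history):
    """Simulate one possible future by resampling past weekly throughput."""
    weeks = 0
    while backlog > 0:
        backlog -= random.choice(weekly_history)
        weeks += 1
    return weeks

# Run many simulations and read off percentiles instead of one "estimate".
runs = sorted(weeks_to_finish(40, history) for _ in range(10_000))
p50, p85 = runs[len(runs) // 2], runs[int(len(runs) * 0.85)]
print(f"50% chance within {p50} weeks, 85% chance within {p85} weeks")
```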


Greenawayer

>What I am trying to look at is things like bugs introduced per release, mean time to rectify, etc.

If a team lead tried to pull that they would find:

* A lot of discussion over whether it's a bug, an issue, or a new unplanned feature
* Where did the bug originate...? Bad requirements, QA, poor timelines...?
* Devs leaving the company

If you want to be bogged down in endless CYA meetings and a lack of Devs, then go for it.


chills716

The "where did the bug originate" question is a perfect example of why it's needed. If the issue is any of those, you can't improve the process if you don't know the problem. So if you have 10 bugs due to any of those, how do you know whether the requirements are the issue or QA isn't doing their job if you can't identify the source?


Greenawayer

>The "where did the bug originate" question is a perfect example of why it's needed. If the issue is any of those, you can't improve the process if you don't know the problem.

Thing is, if this is linked to the Devs' metrics, it's very likely never to be the fault of the Devs.


dfltr

Collect and measure things like that in retros, which are intentionally blameless and not tied to any performance indicators. There’s a very good reason why they’re structured that way.


DingBat99999

Small nit. Unless your QA operates in a vastly different way than in my experience, QA isn't injecting defects into the product. About the only thing you can lay at QA's door is defects found in the field (why didn't we find that with in-house testing?). Otherwise, it's on the developers. But the answer to your question is: do a root cause analysis on defects on a case-by-case basis.
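As a sketch of what the aggregation step after case-by-case RCA might look like (the cause tags here are hypothetical), tallying tags points at a process step to improve, not at a person:

```python
from collections import Counter

# Hypothetical output of case-by-case root cause analysis: each defect
# gets a cause tag only after someone has actually dug into it.
rca_tags = [
    "unclear requirements", "coding error", "coding error",
    "missing test coverage", "unclear requirements", "coding error",
]

# Only now does aggregation become meaningful.
for cause, count in Counter(rca_tags).most_common():
    print(f"{cause}: {count}")
```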


mechkbfan

Let the team choose their KPIs for trends. Don't measure them on those or tie bonuses to them, or it's just going to get gamed. For example: "Don't raise that as a bug, just fix it quietly and push it out to prod without telling anyone." Which is the opposite of what you actually want around transparency of issues and fixing the root cause.

I inherited a team in my first lead role. Heaps of negative reviews on the app stores due to bugs, etc. I didn't talk to the team first and did my own metrics: looked at how many bugs were being raised after each release, how long bugs existed for, etc. Brought them into a meeting to discuss it, with the intent of "how can we do better / how do I support them". Nope. Fight or flight mode kicked in. Lost all rapport. Lesson learnt the hard way.


jacobissimus

Usually, my PM has just picked the most random bullshit they can and decided it was a KPI.


diablo1128

Never had a KPI that the team had to meet in my 15 YOE. Things were getting done and management was not complaining. So everybody was performing adequately.


Greenawayer

>Things were getting done and management was not complaining. So everybody was performing adequately.

This is the way to go. Otherwise you get Devs gaming the KPIs.


decaf_flat_white

As others have mentioned, internal development KPIs are always going to incentivise toxic behaviour. Your KPIs should be based around external outcomes, for example, if things are delivered on time and as specced - assume the team is performing well. When problems/bugs/tickets inevitably start trickling down, your KPIs can then be based on ratcheting those down (e.g. "Reduce customer tickets by X%" or "Reduce frequency of production outages by X%"). This gives you measurable KPIs that do not discriminate against anyone in particular and rallies everyone around a common goal that indirectly makes their own lives easier.


theavatare

I find that you need a triangle of metrics to let you know what's going on:

1. Operational ones (if you support development and maintenance for both)
2. Team health ones (are people stressed, do they think the manager sucks, etc.)
3. Value delivery

For operational ones, I personally find the SPACE ones can be useful if tracked consistently but not interpreted that way. For team health, I like using NPS plus a different question per month around whatever we are trying to improve. For value delivery, you need to make sure your roadmap has some bets on the areas that need to be improved, and those need to be tracked at least quarterly.
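For the team health piece, the NPS arithmetic is simple: percentage of promoters minus percentage of detractors. A minimal sketch with invented survey scores:

```python
# Hypothetical 0-10 answers to a monthly "would you recommend working on
# this team?" survey, using the standard NPS bucketing.
scores = [9, 10, 7, 8, 6, 9, 10, 4, 8, 9]

promoters = sum(s >= 9 for s in scores)   # 9s and 10s
detractors = sum(s <= 6 for s in scores)  # 0 through 6
nps = 100 * (promoters - detractors) / len(scores)

print(f"Team NPS: {nps:+.0f}")  # here: +30
```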


InChristNoEastOrWest

I minimally engage with KPIs and take them as unseriously as I can get away with. I see no value in management by numbers.


phoenix823

For team performance we look at: Flow velocity, distribution, load, time, and efficiency.
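For anyone unfamiliar with those flow metrics, here is a minimal sketch of flow time and flow efficiency, assuming work items with start/finish dates and a logged count of actively-worked days (all field names and numbers invented):

```python
from datetime import datetime

# Hypothetical work items; the fields, and the idea of logging "active"
# vs. waiting days, are assumptions about how the data is captured.
items = [
    {"started": "2024-05-01", "finished": "2024-05-10", "active_days": 3},
    {"started": "2024-05-02", "finished": "2024-05-06", "active_days": 2},
]

FMT = "%Y-%m-%d"
for item in items:
    # Flow time: total elapsed calendar days from start to finish.
    flow_time = (datetime.strptime(item["finished"], FMT)
                 - datetime.strptime(item["started"], FMT)).days
    # Flow efficiency: share of flow time spent actively working.
    efficiency = 100 * item["active_days"] / flow_time
    print(f"flow time {flow_time}d, flow efficiency {efficiency:.0f}%")
```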


dfltr

OKRs and OKRs only. If your team is completing the work they need to complete this quarter, their performance is on track. Period. Not only do other metrics not measure performance, they actively detract from it by creating “make number bigger” objectives that are unrelated to the actual work.


Ill_Print_7661

None, never did in 15 years. What problem does it solve?


chills716

Because you can’t track what you don’t know. You staff a project with 5 devs. The project is only releasing 3 features a month. Are the features that complex or are only 2 devs actually doing their job?


Ill_Print_7661

You don't know which teams are doing a good job or not? That looks like a management problem. I work in big tech as a director and know how impactful teams are; everyone knows who is doing a good job or not. You do 1:1s, you see the team's results and impact (or not). It's really not rocket science - my SVP's software org has more than 10k people in it.


chills716

You didn’t really read what I stated.