Benchmarking: Pluralsight Flow and Gitential in Review


So, how’s your team doing? How much better should it be doing? It’s nice to have performance metrics for your software development team, but on their own they give you only a one-sided view. Being able to readily compare performance by team or project gives you far more actionable insight into what your team can improve. Let’s take a look at what Pluralsight Flow and Gitential have to offer when it comes to team and project benchmarking.

Pluralsight Flow for Benchmarking

Pluralsight itself is a great resource for software developers to skill up on their coding, while Pluralsight Flow is aimed at team development. The only benchmarking Pluralsight Flow mentions, however, is the “Industry Benchmarks” article in the support section of its website.

In their own words:

“Industry benchmarks are reference points based on the software development industry. Use industry benchmarks to see how your team compares with the industry. This can help you and your team identify potential areas of growth.”

Where does this data come from? Pluralsight references an analysis of 7 million commits by about 88k software engineers throughout 2016. Their analysis examined four main metrics: Active Days per Week, Commits per Active Day, Impact, and Efficiency.
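
To make the comparison concrete, here’s a minimal sketch of how team metrics could be checked against industry reference values. The benchmark numbers below are hypothetical placeholders, not Pluralsight Flow’s published figures.

```python
# Hypothetical industry benchmark values -- placeholders for illustration,
# not Pluralsight Flow's actual published figures.
INDUSTRY_BENCHMARKS = {
    "active_days_per_week": 3.2,
    "commits_per_active_day": 2.4,
}

def compare_to_industry(team_metrics: dict) -> dict:
    """Return each metric's percentage difference from the industry benchmark."""
    deltas = {}
    for metric, benchmark in INDUSTRY_BENCHMARKS.items():
        value = team_metrics.get(metric)
        if value is not None:
            deltas[metric] = round((value - benchmark) / benchmark * 100, 1)
    return deltas

# Example: a team averaging 4 active days and 2 commits per active day.
print(compare_to_industry({"active_days_per_week": 4.0, "commits_per_active_day": 2.0}))
# {'active_days_per_week': 25.0, 'commits_per_active_day': -16.7}
```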

OK, so you can compare your teams against the industry in the areas mentioned above.
But how can you verify what’s going on inside your own company? Do you have to collect your metric scores per team and upload them into an Excel sheet to get any actual insight?

What does Pluralsight Flow offer in this area?

There’s a review and collaboration package providing three sets of metrics: Submit, Review, and Team Collaboration. They are defined as follows (a rough sketch of how two of these metrics could be computed follows the lists):

Submit

  • Responsiveness: How fast a developer responds to comments.
  • Comments Addressed: How frequently a developer responds to comments by reviewers.
  • Receptiveness: How frequently a developer follows up comments with code revisions.
  • Unreviewed PRs: Frequency of uncommented pull requests.

Review

  • Reaction Time: Measures reviewer response time to a developer’s comments.
  • Involvement: The percentage of pull requests a reviewer participated in.
  • Influence: Measures the number of subsequent commits following a reviewer’s comments.
  • Review Coverage: What percentage of hunks a reviewer commented upon.

Team Collaboration

  • Avg. Time to Resolve: Measures how fast pull requests are closed.
  • Avg. Time to First Comment: Average time between when a pull request is opened and when the first reviewer comments on it.
  • Avg. Number of Follow-on Commits: Measures how many revisions are made after the PR is reviewed.
  • Raw Activity: A straight count of total comments and commits on a per PR basis.
  • PR Activity Level: A relative measurement of PR activity based on number and recency of comments, and other factors.
  • Recent Ticket Activity Level: Tracks tickets like the PR Activity Level.
  • Knowledge Sharing Index: A relative measurement of how information is being shared according to who is reviewing PRs.
  • Number of PRs Reviewed: A count of how many PRs have been reviewed.
  • Number of Authors Reviewed: A count of how many developers’ PRs were reviewed.
  • Available Reviewers: Number of developers who submitted and reviewed PRs in a given time frame.
  • Active Reviewers: Number of people who reviewed a PR in a given time frame.
  • Submitters: Number of people who made a PR in a given time frame.
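
To make metrics like these concrete, here’s a minimal sketch computing two of them, Avg. Time to First Comment and the share of unreviewed PRs, from a plain list of pull request records. The record fields are assumptions for illustration, not Pluralsight Flow’s data model.

```python
from datetime import datetime

# Illustrative PR records -- the field names are assumptions for this
# sketch, not Pluralsight Flow's actual data model.
prs = [
    {"opened": datetime(2021, 6, 1, 9), "first_comment": datetime(2021, 6, 1, 13)},
    {"opened": datetime(2021, 6, 2, 10), "first_comment": None},  # never reviewed
    {"opened": datetime(2021, 6, 3, 8), "first_comment": datetime(2021, 6, 3, 9)},
]

def avg_time_to_first_comment_hours(prs):
    """Average hours between opening a PR and the first reviewer comment."""
    waits = [
        (pr["first_comment"] - pr["opened"]).total_seconds() / 3600
        for pr in prs
        if pr["first_comment"] is not None
    ]
    return sum(waits) / len(waits) if waits else None

def unreviewed_pr_share(prs):
    """Fraction of PRs that never received a reviewer comment."""
    return sum(pr["first_comment"] is None for pr in prs) / len(prs)

print(avg_time_to_first_comment_hours(prs))  # 2.5
print(unreviewed_pr_share(prs))              # 0.333...
```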

The data sets are interesting, but is there any company-level comparison included? Not really.

Gitential

Gitential.com provides an actual Benchmarking feature that calculates your company average and compares projects and teams against it. You can also compare projects and teams with each other: just pick whichever projects, teams, and metrics you’d like to examine and enjoy an instant, custom comparison.

What kind of questions can you answer thanks to the Benchmarking feature?

Top level:

  • What is the average performance of my company for any commit and PR activity metric?

Mid-level:

  • How are my projects / teams doing compared to the whole company, or to each other?
  • Which of my projects / teams perform best in code writing, reviewing, and other activities?
  • Which teams are fastest at writing code and releasing pull requests?
  • Which projects / teams need attention and support to improve their performance?

Low level:

  • What are we doing well and should keep doing to stay on top inside our organization?
  • What are we doing wrong and what should we do to catch up with other projects/teams?
  • What can we learn from other projects/teams to improve our workflow/performance?

So what kind of data does it contain?

You can use Benchmarking for both your Projects and your Teams.

You can adjust the time range for the analysis, as well as filter the data (e.g. include just a couple of projects, or specific repositories only).
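
Conceptually, that filtering simply restricts the underlying records before anything is averaged. A minimal sketch, with assumed record fields and repository names:

```python
from datetime import date

# Illustrative commit records -- field names and repo names are
# assumptions for this sketch.
commits = [
    {"repo": "backend", "date": date(2021, 5, 10), "loc": 120},
    {"repo": "frontend", "date": date(2021, 6, 2), "loc": 45},
    {"repo": "backend", "date": date(2021, 7, 1), "loc": 300},
]

def filter_commits(commits, start, end, repos=None):
    """Keep commits inside [start, end], optionally restricted to given repos."""
    return [
        c for c in commits
        if start <= c["date"] <= end and (repos is None or c["repo"] in repos)
    ]

# Only backend activity in June-July 2021.
print(filter_commits(commits, date(2021, 6, 1), date(2021, 7, 31), repos={"backend"}))
```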

Projects section contents

Currently, there are six main types of reports that you can automatically generate for your projects. These include:

  1. Project vs. Baseline Averages
  2. Project vs. Project Comparison
  3. Coding Productivity by Project
  4. Deployment Productivity by Project
  5. Project Commit Activity
  6. Project Pull Request Activity

We’ll show you below how simple it is to generate each of them.

1. Project vs Baseline Average chart

This chart compares the project of your choice against the average score of the chosen metric across all your projects (a minimal sketch of the calculation follows the steps below).

  1. Pick the project you want to compare.
  2. Pick the metric you’re interested in.
  3. Get instant results.
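
Under the hood this is a simple calculation: average the chosen metric across all projects, then compare the selected project against that baseline. A minimal sketch, with hypothetical project names and placeholder scores:

```python
from statistics import mean

# Illustrative per-project scores for one metric -- project names and
# numbers are placeholders for this sketch.
coding_hours = {"apollo": 31.0, "hermes": 24.5, "zeus": 40.0}

def vs_baseline(scores, project):
    """Compare one project's score against the average of all projects."""
    baseline = mean(scores.values())
    return scores[project], baseline, scores[project] - baseline

score, baseline, delta = vs_baseline(coding_hours, "hermes")
print(f"hermes: {score} vs baseline {baseline:.1f} ({delta:+.1f})")
# hermes: 24.5 vs baseline 31.8 (-7.3)
```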

2. Project vs. Project Comparison

This chart shows the progress of all your projects on the chosen metric, plotted against the baseline average, which is marked by the light blue chart background.

For the comparison, you can pick any metric you need; the calculations are immediate.

3. Coding Productivity by Project

It compares the coding productivity of all your projects in one chart.

Understand the coding behavior of your projects. Which projects are the most productive in code writing and commit activities? The size of the bubble represents the total Code Volume contributed to the particular project. A sketch of how such a quadrant classification could work follows the list.

  • Unicorns: Fast code writers with high coding efficiency. Use them as an example to grow.
  • Untapped Potential: Slow code writers, but their commits are high quality with a low rewrite rate.
  • Startups: Very fast at writing code, but they need to fix it very often. Help them focus on reducing their code churn.
  • Juniors: Slow code writers with a lot of code rewrite. Support them by understanding the complexity of the project and bringing senior people aboard.
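
Gitential doesn’t publish the exact thresholds behind these quadrants, so the sketch below only illustrates the idea: classify each project by coding speed on one axis and rewrite (churn) rate on the other. The cutoff values and metric units are hypothetical assumptions.

```python
# Hypothetical quadrant classification -- Gitential's actual thresholds
# and metric definitions may differ; this only illustrates the idea.
SPEED_CUTOFF = 50.0   # e.g. lines of code per coding hour (assumed unit)
CHURN_CUTOFF = 0.20   # share of freshly written code later rewritten

def coding_quadrant(speed, churn_rate):
    """Map a project's coding speed and churn rate onto the four quadrants."""
    if speed >= SPEED_CUTOFF:
        return "Unicorn" if churn_rate < CHURN_CUTOFF else "Startup"
    return "Untapped Potential" if churn_rate < CHURN_CUTOFF else "Junior"

print(coding_quadrant(speed=80.0, churn_rate=0.10))  # Unicorn
print(coding_quadrant(speed=30.0, churn_rate=0.35))  # Junior
```

The Deployment Productivity chart below follows the same two-axis idea, with PR review speed as the second axis and merged PRs as the bubble size.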

4. Deployment Productivity by Project

It compares deployment productivity of all your projects in one chart.

Understand the deployment performance of your projects based on how fast they release new features.

The size of the bubble is the number of pull requests merged.

  • Coding Optimized: Projects that are fast in both coding and the pull request review process. An optimal team with optimized processes. Use them as an example to grow.
  • Juniors: They have a fast PR review process but are weaker in code writing.
  • Agile: Fast committers, but they have room to improve their PR and review activities.
  • Process Optimized: They need assistance to improve their coding skills and to set up KPIs for their PR and review processes.

5. Project Commit Activity

In this chart, you can compare your projects side by side, and metric by metric, based on commit activity.

Each of the metrics can be sorted ascending or descending; just click on a metric to reverse the order. The small percentages below the results indicate the current scores versus the previous period’s scores.
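
The percentage itself is a plain period-over-period relative change, and sorting is just a matter of ordering projects by the chosen metric. A minimal sketch with placeholder numbers:

```python
def period_change_pct(current, previous):
    """Percentage change of the current period's score vs the previous one."""
    if previous == 0:
        return None  # no meaningful baseline
    return round((current - previous) / previous * 100, 1)

# Illustrative per-project commit counts -- placeholder numbers.
projects = [
    {"name": "apollo", "commits": 120, "commits_prev": 100},
    {"name": "hermes", "commits": 80, "commits_prev": 96},
]

# Sort descending by the chosen metric and show the change indicator.
for p in sorted(projects, key=lambda p: p["commits"], reverse=True):
    print(p["name"], p["commits"], f'{period_change_pct(p["commits"], p["commits_prev"]):+}%')
# apollo 120 +20.0%
# hermes 80 -16.7%
```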

6. Project Pull Request Activity

In this chart, you can compare your projects side by side, and metric by metric, based on pull request activity.

Each of the metrics can be sorted ascending or descending; just click on a metric to reverse the order. The small percentages below the results indicate the current scores versus the previous period’s scores.

Teams section contents

You can also generate team reports, just like you did for comparing projects:

  1. Team vs. Baseline Averages
  2. Team vs. Team Comparison
  3. Coding Productivity by Team
  4. Deployment Productivity by Team
  5. Team Commit Activity
  6. Team Pull Request Activity

Let’s take a look!

1. Team vs Baseline Average

This chart shows the chosen team’s scores on the chosen metric, compared against the company average.

  1. Pick your team of interest.
  2. Choose the metric you’d like to check.

2. Team vs. Team Comparison

  1. Pick the teams you’d like to compare.
  2. Pick the metric that interests you.
  3. Get your results right away.

3. Coding Productivity by Team

Understand the coding behavior of your teams. Which teams are the most productive in code writing and commit activities?

The size of the bubble represents the total Code Volume contributed by the particular team, with the same quadrants (Unicorns, Untapped Potential, Startups, Juniors) as defined under the project view.

4. Deployment Productivity by Team

Understand the deployment performance of your teams based on how fast they release new features.

The size of the bubble is the number of pull requests merged, with the same quadrants as defined under the project view.

5. Team Commit Activity

In this chart, you can compare your teams side by side, and metric by metric, based on commit activity.

Each of the metrics can be sorted ascending or descending; just click on a metric to reverse the order. The small percentages below the results indicate the current scores versus the previous period’s scores.

6. Team Pull Request Activity

In this chart, you can compare your teams side by side, and metric by metric, based on pull request activity.

Each of the metrics can be sorted ascending or descending; just click on a metric to reverse the order. The small percentages below the results indicate the current scores versus the previous period’s scores.

What’s coming next?

Gitential already provides internal benchmarking for Projects and Teams, but it’s not stopping there. By the end of 2021, the Gitential team plans to introduce Industry Standards (a.k.a. industry benchmarks, like Pluralsight Flow’s). At that point you’ll be able to track your development progress internally and compare it against the industry, too! Benchmarking is one small, though important, part of Gitential’s automated software development analytics. Previously on our blog, we discussed the four main drivers of software development: Speed, Quality, Efficiency, and Collaboration.

Software development value driver framework

The framework maps each driver to a set of objective software development KPIs:

Speed
  • Lead time
  • Cycle time
  • Velocity
  • # of active days
  • Coding hours

Quality
  • Churn
  • Technology churn
  • Test volume
  • Bug fixing rate

Efficiency
  • Code efficiency
  • Code complexity
  • Waste
  • # of PRs vs. multiple review cycles

Collaboration
  • Responsiveness
  • Co-authoring
  • Review coverage
  • Unreviewed PRs
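
If it helps to see the grouping spelled out, here is the same framework as a small lookup structure; a sketch that simply mirrors the lists above:

```python
# The four value drivers and their KPIs, as listed above.
VALUE_DRIVERS = {
    "Speed": ["Lead time", "Cycle time", "Velocity", "# of active days", "Coding hours"],
    "Quality": ["Churn", "Technology churn", "Test volume", "Bug fixing rate"],
    "Efficiency": ["Code efficiency", "Code complexity", "Waste",
                   "# of PRs vs. multiple review cycles"],
    "Collaboration": ["Responsiveness", "Co-authoring", "Review coverage", "Unreviewed PRs"],
}

def driver_of(kpi):
    """Look up which value driver a given KPI belongs to."""
    for driver, kpis in VALUE_DRIVERS.items():
        if kpi in kpis:
            return driver
    return None

print(driver_of("Churn"))  # Quality
```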
