Don't tune out user feedback when measuring data scientists' effectiveness

Read about an approach that John Weathington says will do wonders for knowing and improving your data science team's effectiveness.

How good is your data science team at building solutions? Are they better today than they were a year ago? How do you know?

These are very important questions to ask if you're running a data science team. However, when I ask most leaders this series of questions, they get embarrassingly stumped on the last one. Most leaders have a sense for how well their data science team is performing, but they don't actually know.

If you've made a sizable investment in data science, your responsibility is to have more than just a sense for how well your data science team is producing. Furthermore, as you experiment with ways to improve your data science team, it's important to gauge whether your ideas are working. There are two critical success factors that should always be measured and monitored on your data science team: effectiveness and efficiency. The focus of this column is helping you measure effectiveness. My next column will address how to measure efficiency.

Is my team effective?

Of the two main measurement categories, effectiveness is more important than efficiency. It doesn't matter how fast your data scientists are solving problems and building solutions if their solutions aren't useful to end users. That's why I'm constantly harping on the topic of requirements.

Effectiveness in this context is your team's ability to deliver what end users want. End users assess effectiveness — it's their gauge of how well their requirements are represented in the data scientists' solution.

This is a risky area for data scientists, and effectiveness is rarely discussed, let alone measured. Measuring effectiveness forces the data science team to confront gaps as they surface and to have an honest conversation with end users before trust and confidence are destroyed.

To measure effectiveness, I suggest a simple survey that asks end users how well the solution meets their requirements. You could use a five- or six-point Likert scale, depending on whether you want to force a choice (an odd-point scale allows a neutral response; an even-point scale forces one). Administer it to a representative sample of actual end users (don't substitute end users with proxies of any sort), and make sure your scale is balanced (i.e., equal numbers of agree and disagree options). Simply asking end users what they think in a structured and measurable way and then displaying the results in a public scorecard will do wonders for knowing and improving your team's effectiveness.
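As a rough sketch of how one iteration's survey results might be tallied for a scorecard, here's a minimal example. The five-point scale, the function name, and the response data are all illustrative assumptions, not part of the column's method:

```python
from statistics import mean

# Illustrative balanced five-point Likert scale for the statement
# "The solution meets my requirements." (3 is the neutral midpoint.)
LABELS = {1: "strongly disagree", 2: "disagree", 3: "neutral",
          4: "agree", 5: "strongly agree"}

def effectiveness_score(responses):
    """Summarize one survey round: average rating and percent agreement."""
    if not responses:
        raise ValueError("no responses collected")
    avg = round(mean(responses), 2)
    # Count 4s and 5s as agreement that requirements are met.
    agree_pct = round(100 * sum(r >= 4 for r in responses) / len(responses), 1)
    return avg, agree_pct

# Hypothetical survey of eight actual end users from one iteration:
avg, agree = effectiveness_score([5, 4, 4, 3, 5, 2, 4, 4])
```

Trending these two numbers on a public scorecard, iteration over iteration, is what turns a one-off survey into a measurement.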

Fine-tuning the measurement

To get the most value from your effectiveness measurement, you should take it as frequently as possible. This is another great advantage of an iterative or agile software development lifecycle (SDLC) over a waterfall approach.

In a waterfall approach, user acceptance testing (UAT) comes at the very end of the project; it's the first time end users get to play with the solution, and therefore your first opportunity to collect feedback on its effectiveness. With a two-week or 30-day iteration cycle, you can administer your effectiveness survey at every iteration meeting, so the team can make adjustments if necessary before the project is over. In fact, you may want to build survey administration, root-cause analysis, and a brainstorm on improving effectiveness into the agenda of every iteration meeting.

Another way to fine-tune effectiveness is with good change control. There are three reasons why code is changed: the team didn't understand the requirement, the end user requirement changed, or the code needed refactoring. Only the first reflects on effectiveness, so it's important that your software change control process captures the reason for each change; otherwise, you run the risk of an inaccurate measurement.

For instance, if you collect requirements in January and then administer your effectiveness survey in June, business conditions may have changed. One of two things will happen at this point, neither of which is good. Either end users will honor the spirit of the survey and respond based on their original requirement, which is no longer valuable to them today, or they'll respond based on the current level of effectiveness, which includes requirements the team doesn't even know about. By categorizing the reason for each change, you can weed out the reasons that don't apply to the measurement.
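To illustrate the categorization idea, here's a small sketch that filters a change log down to the entries that should count against effectiveness. The log structure and reason labels are hypothetical; the three categories simply mirror the ones described above:

```python
# Hypothetical change-control log. The reason categories mirror the three
# in the text; only "misunderstood_requirement" counts against effectiveness.
changes = [
    {"id": 101, "reason": "misunderstood_requirement"},
    {"id": 102, "reason": "requirement_changed"},   # business conditions shifted
    {"id": 103, "reason": "refactoring"},           # no requirement involved
    {"id": 104, "reason": "misunderstood_requirement"},
]

# Keep only the changes that reflect a gap between what end users asked
# for and what the team delivered.
effectiveness_gaps = [c for c in changes
                      if c["reason"] == "misunderstood_requirement"]
```

With the reasons recorded at change time, the survey results can be read alongside only the changes that genuinely indicate a requirements gap.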


Team effectiveness is one of the most important measures to know about your data science team. Not only does it tell you how well your investment in data science is doing, but it also gives you an objective indicator of how well team improvement ideas are working. Take some time today to develop a team effectiveness survey, and then incorporate its administration into every interaction you have with end users.

It's better to find out now how well end users think your team is doing. Otherwise, you might not have a team, and you won't even know why.

About John Weathington

John Weathington is President and CEO of Excellent Management Systems, Inc., a management consultancy that helps executives turn chaotic information into profitable wisdom.
