
12 Steps to Better Analytics Teams

I recently stumbled across an old blog post, “The Joel Test: 12 Steps to Better Code,” outlining a quick “yes or no” test that measures a software team’s performance. Reading it, I couldn’t help but note a number of parallels between analytics teams and software engineering teams: both are composed of intelligent, highly educated people, both write code, and both work on difficult, often poorly defined problems.


Given these overlaps, it seemed the Joel Test could be adapted to measure the performance of analytics teams in much the same way.


To gauge an analytics team’s performance, I devised my own set of “yes or no” questions as a guide:


1. Do you use source control?

Most analytics teams aren’t using a source control tool, whether out of habit or simple unawareness. Instead, many people rely on clever filenames or folders labeled with a particular date or version and eschew formal version control entirely. This may be passable if a single person works on a single project and your IT team has ensured proper backups in case of hardware failure, but even then a proper version control tool is superior. For anything involving even minimal collaboration, or any degree of complexity, version control is a must.


2. Can you deploy your model or analysis to production in less than a week?

High-performing analytics teams can move their models or analysis into production in under a week, enabling them to seamlessly translate analysis into value for their company. What qualifies as “production” varies: it may be a report summarizing findings, a nightly batch job, or a production API. Regardless of the use case, deploying your analysis quickly and capturing that business value is imperative.
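As a rough illustration, here is a minimal sketch of what a nightly batch scoring job might look like. It assumes a scikit-learn model already serialized with joblib; the connection string, table names, and file paths are hypothetical placeholders, not a prescribed setup.

```python
# nightly_score.py -- minimal sketch of a nightly batch scoring job.
# Assumes a trained scikit-learn model serialized with joblib and a
# hypothetical customer_features table; names and paths are illustrative only.
from datetime import date

import joblib
import pandas as pd
from sqlalchemy import create_engine

ENGINE = create_engine("postgresql://analytics@warehouse/prod")  # hypothetical DSN
MODEL_PATH = "models/churn_model.joblib"                         # hypothetical path


def score_customers() -> pd.DataFrame:
    """Pull current customer features, score them, and write results back."""
    model = joblib.load(MODEL_PATH)
    features = pd.read_sql("SELECT * FROM customer_features", ENGINE)

    scores = features[["customer_id"]].copy()
    scores["churn_score"] = model.predict_proba(
        features.drop(columns=["customer_id"])
    )[:, 1]
    scores["score_date"] = date.today().isoformat()

    # Append to a scores table the business can query the next morning.
    scores.to_sql("churn_scores", ENGINE, if_exists="append", index=False)
    return scores


if __name__ == "__main__":
    score_customers()
```

Scheduling a script like this with cron or an orchestrator is often enough to count as “production” for a nightly batch use case.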


3. Do you track the impact of your analysis and compare expected versus actual performance?

Top-performing analytics teams realize that simulating the impact they expect to have is useful, but it is only half the picture. The best teams follow a strict process for:

a. Tracking expected impact
b. Tracking actual impact
c. Explaining material discrepancies

This is where the rubber truly meets the road: computer simulations and assumptions are tested in a real-world environment, and some of a team’s best learning comes from examining these discrepancies.
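One lightweight way to operationalize steps (a) through (c) is to keep a simple log of expected versus actual impact and flag material gaps automatically. The sketch below does this in Python; the metric names, figures, and the 20% materiality threshold are hypothetical illustrations.

```python
# impact_tracking.py -- minimal sketch of comparing expected vs. actual impact.
# Records, metrics, and the 20% tolerance below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class ImpactRecord:
    project: str
    metric: str
    expected: float  # impact projected before launch
    actual: float    # impact measured after launch


def material_discrepancies(records, tolerance=0.20):
    """Return records whose actual impact deviates from the expectation by
    more than `tolerance` (as a fraction of the expected value)."""
    flagged = []
    for rec in records:
        if rec.expected == 0:
            continue  # avoid dividing by zero; handle this case separately
        deviation = abs(rec.actual - rec.expected) / abs(rec.expected)
        if deviation > tolerance:
            flagged.append((rec, deviation))
    return flagged


if __name__ == "__main__":
    log = [
        ImpactRecord("email_reactivation", "incremental_revenue", 120_000, 85_000),
        ImpactRecord("pricing_test", "conversion_lift_pct", 2.0, 2.1),
    ]
    for rec, dev in material_discrepancies(log):
        print(f"{rec.project}: {rec.metric} off by {dev:.0%} -- explain the gap")
```

Even a log this simple forces the conversation about why a projection and the real-world result diverged.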


4. Do you have an internal code repository?

Analytics teams should maintain an internal code repository that is heavily leveraged for most projects. It should automate repetitive tasks such as pulling data, and it should contain best-in-class code and standard implementations of the key statistics used in your peer review process. If projects on your team begin with a blank text editor, then you’re doing something wrong.
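As a sketch of what such a repository might contain, the snippet below shows a shared module with a standard data-pull entry point and one standardized statistic. The module, function, and connection names are hypothetical assumptions, not a prescribed design.

```python
# analytics_toolkit/metrics.py -- sketch of a shared internal library.
# Module, function, and connection names are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

ENGINE = create_engine("postgresql://analytics@warehouse/prod")  # hypothetical DSN


def pull_table(query: str) -> pd.DataFrame:
    """Standard entry point for pulling data, so every project starts the same way."""
    return pd.read_sql(query, ENGINE)


def conversion_lift(test_conversions: int, test_exposed: int,
                    control_conversions: int, control_exposed: int) -> float:
    """Relative lift in conversion rate, calculated the same way on every project."""
    test_rate = test_conversions / test_exposed
    control_rate = control_conversions / control_exposed
    return (test_rate - control_rate) / control_rate
```

When every analyst pulls data and computes lift through the same functions, reviews spend less time reconciling definitions and more time on the analysis itself.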


5. Do you have an established internal peer review process?

Think about this scenario: a junior analyst on your team claims to have a new model that can increase the volume of new customers with only a modest increase in marketing costs. It sounds great, but you’re concerned about potential marketing cost overruns. If you don’t already have an established and documented internal peer review process, then you’re failing in a key area.

An established internal peer review process provides a playbook for evaluating new recommendations, ensuring you capture this new marketing opportunity while guarding against faulty assumptions, a fat-fingered value, or an error in the code. It also ensures best practices are communicated across your team.
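To make the scenario concrete, the sketch below shows the kind of simple guardrail check a reviewer might run against the analyst’s projection before it goes any further. All figures, names, and the cost-per-customer threshold are hypothetical.

```python
# review_checks.py -- sketch of one sanity check a reviewer might run on the
# scenario above. All figures and thresholds are hypothetical.

def cost_per_new_customer(incremental_customers: float,
                          incremental_marketing_cost: float) -> float:
    """Marketing cost per incremental customer implied by the projection."""
    if incremental_customers <= 0:
        raise ValueError("Projection must add customers to be worth reviewing.")
    return incremental_marketing_cost / incremental_customers


if __name__ == "__main__":
    # Hypothetical figures from the junior analyst's write-up.
    projected_cpa = cost_per_new_customer(
        incremental_customers=1_200,
        incremental_marketing_cost=90_000,
    )
    max_allowable_cpa = 100.0  # hypothetical guardrail agreed with marketing
    assert projected_cpa <= max_allowable_cpa, (
        f"Projected cost per new customer ${projected_cpa:,.2f} exceeds the "
        f"${max_allowable_cpa:,.2f} guardrail -- revisit the assumptions."
    )
    print(f"Projected cost per new customer: ${projected_cpa:,.2f}")
```

A check like this is no substitute for a full review, but encoding the obvious guardrails catches fat-fingered inputs before they reach the marketing budget.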

To read the rest of Adam’s strategies, visit “12 Steps to Better Analytics Teams” on the International Institute of Analytics blog.