Changing behavior is difficult – even when the new things you are trying to do are relatively simple. Changing the behaviors of others is doubly challenging, and finding the right balance of incentive and penalty to encourage people to adopt a new behavior adds an additional layer of complexity. Being a data-obsessed, psychology-fascinated bunch, we recently self-experimented with a variety of incentive programs to encourage employee time tracking.
Gone are the days of logging time by punch card and time clock; programs like Harvest allow instant tracking at the touch of a button, providing customizable data about what your team’s individual workdays look like. While accurate time tracking helps you manage your most valuable resource (people), the resulting data (and the decisions based on it) are only as reliable as the tracking behind them. Though Harvest has made time tracking incredibly simple, it is still a non-native behavior for most of us. The question becomes, “How do I get my team to track their time?” Or really, “How do I get my team to adopt any necessary behavior?”
Behavior change is something we’re interested in figuring out not only for ourselves, but also for our clients – we’re all working to achieve the same thing, one way or another. To that end, over the last two years, we’ve tested and revised four different incentive programs around time tracking at Undercurrent. Though we’re certainly not finished, we’ve decided to share our findings-to-date and next steps – complete with some pretty infographics.
We started from a null position. Our first experiment offered no official incentive or consequence for “Harvesting” time. If an employee failed to log her hours by a specified time each day, the only penalty was an email from the office manager reminding her to do so. This little “nudge” provided no firm incentive, and people were motivated only by pre-existing routine. Because this experiment ran in the Harvesting dark ages, we have no data on how successful the “nudges” were, but needless to say our numbers were bad enough to act as a catalyst for this whole experiment.
We realized tracking our successes and failures would be an interesting and fruitful pursuit, and so the “Harvest Bot” was born. Built by one of our own brilliant Strategists, the bot went into the system each morning at 9:45 am and extracted data about who had logged time. To those who hadn’t, it sent an email letting them know they had failed and issuing “strikes” (one per failure). With no official incentive scheme attached, the strikes were little more than empty threats, and it showed. If the real-time data and immediate feedback affected behavior at all, the effect was barely measurable. Reporting was terrible: our first set of stats showed a failure rate of 21% over the 120-day trimester we measured.
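The bot’s core loop is simple: pull the day’s time entries, diff against the team roster, then email and strike anyone missing. Here’s a minimal sketch of that logic in Python, with the real Harvest API call and the email-sending stubbed out (all names here are hypothetical, not the bot’s actual code):

```python
from datetime import date

def fetch_logged_user_ids(day):
    """Stand-in for the real Harvest API call: return the ids of
    everyone who has already logged time for the given day."""
    return {"alice", "bob"}  # stubbed example data

def send_reminder_email(person, strike_count):
    """Stand-in for sending the 'you failed to Harvest' email."""
    print(f"Reminder sent to {person} (strike #{strike_count})")

def check_harvest(team, day, strikes):
    """Core bot logic: find who hasn't logged time, email them,
    and record one strike per failure."""
    logged = fetch_logged_user_ids(day)
    laggards = [person for person in team if person not in logged]
    for person in laggards:
        strikes[person] = strikes.get(person, 0) + 1
        send_reminder_email(person, strikes[person])
    return laggards

strikes = {}
late = check_harvest(["alice", "bob", "carol"], date.today(), strikes)
# carol is the only one who hasn't logged time, so she gets a strike
```

Scheduling this to run at 9:45 each morning is a one-line cron job; the interesting part, as the rest of this post shows, is what you attach to the strikes.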
With a little data under our belts we stepped up our game, introducing our first official incentive: the failure rate of the entire group would directly influence the amount of each individual’s bonus. If one person failed to Harvest for the day, the entire group failed. Individuals were encouraged to take responsibility for their failures in the form of a “mea culpa” email to the entire team. The data for the day and the week was publicly displayed on one of the office’s central computers.
With the introduction of a monetary incentive and “public shaming,” we watched our fail rate fall to 6%, but we noticed an unintended consequence: the “public shaming” aspect was negatively affecting our culture. When people failed, they felt immense pressure from the group, as their mistake reduced everyone’s bonus. A certain hopelessness set in as people who rarely (or never) failed watched their bonuses drop through no fault of their own.
In an effort to remedy the negative cultural consequences, we dropped the group incentive for our current iteration (still in beta), replacing it with individual responsibility. Now an individual’s bonus is influenced only by her own Harvest successes and failures. Stats are still tracked publicly, but the sense of group responsibility is gone. We’re only halfway through this experiment, but if we continue at our current rate, we estimate a fail rate of 5%.
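The projection itself is simple arithmetic: the fail rate is missed logging days divided by total person-days tracked. A quick sketch with illustrative numbers (these are made up, not our actual figures):

```python
def fail_rate(missed_days, person_days):
    """Fraction of person-days on which someone failed to log time."""
    return missed_days / person_days

# Hypothetical halfway point: 60 missed logs over 60 working days
# for a 20-person team (1,200 person-days)
print(fail_rate(60, 60 * 20))  # 0.05, i.e. a 5% fail rate
```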
While the difference between the Peer Pressure Incentive fail rate and this one is negligible, the cultural differences between the two are appreciable. Morale is up and Harvesting has become a routine behavior rather than a shared group sore spot.
It’s clear incentives are powerful tools for encouraging behavior change, but the bigger insight is around how incentives fit into a business’s process as a whole. No matter how well thought out and positioned your incentives are, they can have unintended side effects that impact other areas of the business. While the Peer Pressure Incentives changed our time tracking behavior about as well as the Individual Incentives did, their negative impact on our culture eventually outweighed the benefits. More important, however, is that there is no real way to uncover these unintended consequences without testing the incentive scheme. Thinking about behavior change and incentive systems as an iterative process allows your business to continually test and tweak to find what gets the results you want with the least collateral damage.
This post was co-authored by Strategist Jim Babb.