Objectives and performance

How to ask the right questions to keep a team on target and push for performance

I was sitting in a meeting with a research group where we were talking about some performance issues and how to get people on the right track. I recommended we look at some startup techniques, because startups are often among the most productive units given what they have to deal with in terms of resources and time constraints. Here are some lessons from a startup that can be applied to a research group when thinking about performance:

  1. What were we trying to accomplish? Every meeting should start by restating the objectives you were trying to hit. The group should have agreed on clear objectives prior to taking action in the first place. If there’s a lack of clarity here, the rest of the debriefing will be of little value because you won’t know how to judge your success. Teams often struggle to set clear objectives when an initiative is still at a conceptual phase; push yourself to set them early to pave the way for your team’s learning and rapid improvement.

  2. Where did we hit (or miss) our objectives? With clear objectives, this is a pretty straightforward conversation: you either hit them or you didn’t. Review your results, and ensure the group is aligned. You also need to know what metrics you are using. This is critical because if the tool we use to measure is flawed, the results are bound to be flawed, and we can easily fool ourselves that everything is going great when it really isn’t.

  3. What caused our results? This is the root-cause analysis, and it should go deeper than obvious, first-level answers. For example, if you were trying to generate fifteen wins and only generated five, don’t be satisfied with answers like “we didn’t try hard enough.” Keep digging and ask why you didn’t try hard enough until you get an answer that aligns with the numbers or metrics you are using. Were you overwhelmed because the team hadn’t prioritized work? Were incentives misguided, so people didn’t feel motivated to try harder? Was the task too complex, so people gave up too easily? There are many reasons why people don’t try hard enough. If you don’t get to the root cause, you can’t create actionable learning for the future. An effective tool for root-cause analysis is the “5 whys”: for every answer you give, ask why that’s the case. By the time you’ve answered the question five times, you’ve usually uncovered some fundamental issues that are holding you back.

  4. What should we start, stop, or continue doing? Given the root causes uncovered, what does that mean for behavior or plan changes? Specifically, what should we do next now that we know what we know?
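The “5 whys” drill-down in step 3 can be sketched as a tiny program. The chain of answers below is hypothetical, purely for illustration, assuming the team records each “why?” answer as a statement-to-explanation pair:

```python
def five_whys(problem, answers):
    """Walk a problem through up to five 'why?' iterations.

    `answers` maps each statement to the explanation the team gives
    when asked why it is the case; the walk stops at a root cause
    (a statement with no deeper answer) or after five whys.
    """
    chain = [problem]
    current = problem
    for _ in range(5):
        deeper = answers.get(current)
        if deeper is None:
            break  # no deeper answer: treat this as the root cause
        chain.append(deeper)
        current = deeper
    return chain

# Hypothetical debrief for "we generated only five of fifteen wins":
answers = {
    "We missed the target of fifteen wins": "We didn't try hard enough",
    "We didn't try hard enough": "The team felt overwhelmed",
    "The team felt overwhelmed": "Work was never prioritized",
    "Work was never prioritized": "No one owns the project backlog",
}
chain = five_whys("We missed the target of fifteen wins", answers)
root_cause = chain[-1]  # "No one owns the project backlog"
```

The point of the sketch is simply that each answer must itself be questioned; a first-level answer like “we didn’t try hard enough” is never the end of the chain.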

Make sure you capture lessons learned in a usable format for later reference. At a minimum, this means taking notes and distributing them to the members present. Often the meeting minutes are later distributed to all members, and these should follow the outline of a typical research paper discussing the data. The only difference here is that the data was collected internally, and since the paper will only be distributed internally, it can follow a different format. One possible format is: Abstract, Results, Discussion, and New Plan. Keep it short, around three pages or so, and to the point.

I believe this format will allow research groups to apply some scientific thinking to performance and improve the likelihood of success, especially for a new project that has just gotten started under high risk. A lot of what I mentioned is not new. We already do this daily for the data we collect; here it is simply being applied in a way that can start to de-risk new projects and encourage scientists to methodically devote a small portion of their time to new, experimental technologies. Who knows what they may uncover?