On performance at work
Unlocking Potential is a newsletter by me, Francisco H. de Mello, CEO of Qulture.Rocks (YC W18)
After a long hiatus, which I intend to talk about later, I wanted to write about a topic I've been thinking about a lot: performance, more specifically performance at work.
Every entrepreneur on earth wants her company to perform at its best, or at its fullest potential. In order to make that happen, each and every person in the company has to perform at their best.
But what is performance? How can we influence performance? What are its components and determinants?
That's what I want to discuss in this essay. I hope you come out on the other side with a better mental model of what performance is and how to drive it.
Performance = NPV of behavior(s)
Let's start with a working definition of what performance actually means.
The title above is not quite true but points to the truth. A couple of folks (Motowidlo and Kell, 2013) defined performance as “the total expected value to the organization of the discrete behavioral episodes that an individual carries out over a standard period of time.” [1a] I've tried to dumb it down a bit and got to the following: Performance is the value of the behaviors of an employee in a given period of time. Let's break down the definition.
First, we talk about behaviors. It's stuff like:
writing lines of code, or a post for social media, or a 6-pager memo,
calling on a customer or prospect,
designing a UI,
building a financial model (or tweaking one that was built by somebody else),
running a webinar,
contributing an idea in a brainstorming session,
making a decision, etc.
In other words, behaviors are the actual work.
If we're going to evaluate somebody's performance in a given period, we have to look at all work done by someone during that period, and the work will be a collection of individual behaviors (discrete episodes).
Second, they talk about value. Here, it helps to think that each relevant behavior we take at work has a net present value. Borrowing from finance, each behavior produces a tilt in the company's future cash flows, even if a tiny one. We hope, of course, that the value is positive, and not negative. But people work for organizations because they are able to offer value to the organization, ideally more value than they cost in salaries (even though this is sometimes not the case).
If we analyze all these relevant behaviors for a given period - let's say a year - we can theoretically add up their NPVs, compare said NPVs with the NPVs of previous periods, compare the NPVs of different employees, and so on (we'll get more into performance measurement - e.g., assessments, reviews - in a bit).
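To make the definition concrete, here's a toy sketch of the idea, not a real measurement method: treat each behavior as a small tilt in future cash flows and sum the discounted values. The behaviors, their cash-flow effects, and the discount rate are all made up for illustration; in practice these values are unknowable, which is exactly why we resort to proxies.

```python
# Toy illustration of "performance = sum of the NPVs of behaviors."
# All figures are hypothetical; in reality these deltas can't be observed.

def npv(cash_flow_deltas, rate=0.10):
    """Discount a list of yearly cash-flow deltas back to the present."""
    return sum(delta / (1 + rate) ** t
               for t, delta in enumerate(cash_flow_deltas, start=1))

# Hypothetical behaviors over one period, each with its assumed effect
# on the company's cash flows in years 1, 2, and 3:
behaviors = {
    "closed a 3-year deal": [100, 100, 100],
    "shipped sloppy code needing rework": [-20, -30, 0],
    "mentored a junior teammate": [0, 15, 25],
}

performance = sum(npv(deltas) for deltas in behaviors.values())
print(round(performance, 2))  # → 236.89
```

Note that a behavior with zero immediate payoff (the mentoring) still contributes positive performance through its future cash-flow tilt, while the sloppy code subtracts from it.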
Results: Abstracting performance
We frequently confuse the meanings of performance and results, but we shouldn't. Results are abstractions. And abstractions are created to make our lives easier, like shortcuts.
One example of such abstraction we frequently call a “result”: some measured improvement in a metric/KPI like “sales for product A” or “net income for year 2023.”
Results can be attributed (with its limitations, as we'll see) to some individual, as in “Erik is bringing in solid results. He sold 100 dollars in new MRR last quarter.” We can even compare actual results with expected or desired results (as in goals/quotas and goal/quota attainment), and say that for somebody as experienced as Erik-the-salesperson, this quarter's sales of 100 dollars are about right, and last quarter's sales of 90 meant some level of “underperformance.”
But just like the map is not the territory, results are not performance per se. If Erik sells 100 dollars of new MRR, the amount sold is not his performance. We might tentatively say the present value of this result is his performance. But it's still not it. Erik's actual performance is the present value of all his behaviors at work. His sales - the result - is a proxy.
We have to be very aware of that proxy status and its potential limitations.
Some limitations of the results-performance proxy
There are two problems with blindly equating sales dollars with performance (the case of salespeople specifically, though the argument is not limited to them).
The first problem is that several other factors outside of Erik’s control may have contributed to this quarter's number being 100 (and not 50 or 120):
A warmer-than-average lead may have been referred to Erik by the CEO and converted into a 20 dollar deal.
The government may have passed a bill creating regulation in a territory/industry covered by Erik in such a way that the product he sells becomes required by regulation.
Erik may have caught COVID-19, and therefore spent two weeks at home, unable to work because of the symptoms.
The point here is that Erik's 100-dollar production probably wasn't strictly attributable to Erik.
The second problem is that many factors may influence how much value the 100-dollar sales figure actually created for the company. In other words, presuming that the 100-dollar figure is a good measure of how much value was added to the company is frequently dangerous. For example:
Erik may have - knowingly or not - sold a deal that is very likely to churn in the short run.
The majority of the deals sold by Erik may be in a low-growth industry, which is very unlikely to bring upsells throughout the lifetime of the contract.
The average pricing practiced by Erik may just be off - too cheap, eating margins, or too expensive, causing reputational risk.
The point here is: the 100-dollar figure produced by two different salespeople may mean a very different value-added to the organization.
I know you must be thinking I'm going too deep into the sales examples, but it was intentional. I wanted to take down the case where equating performance with a result is easiest and most tempting. If we move to other functions within a company, as we'll do, it's even trickier.
But bear in mind: Even though I'd say 99% of top sales organizations mix the concepts of performance and quota attainment so much that there seems to be no distinction between the two, I think it's ok to do so in the case of sales: weighing costs and benefits, you'll be fine. I just wanted to point out how severe the limitations of doing so are, especially given that doing so for most other functions shows even more severe limitations.
As an aside, let's think of what performance means in the reality of a developer, and most importantly, how limited the most obvious abstraction would be in measuring her performance.
Based on our definition, performance would mean the total value of the developer's behaviors. For example, shipping code that then becomes a feature that customers use could be an example of a performance sample, and the value of the use increment, the actual performance of the developer. Further, shipping that feature using clean, understandable, reusable code is an even better example, since the developer prevents problems that would happen in the future (and would subtract from future cash flows).
But would you feel comfortable equating the number of lines of code merged by the developer with value added to the organization? How about if you threw in an additional code-health metric? Or the number of comments said code gets from code reviewers within the team? Even then, I think you wouldn't want to do that. It's just too crude an abstraction. (Btw, it's not so distant an example. I've seen people argue for it. No kidding.)
A better proxy would be how much usage actually increased. Let's say the feature allegedly drove an uptick of 10% in MAUs/DAUs. Good? How much of the uptick was due to the developer, as opposed to the designer? Or the PM? How much have the holidays/weather/launch of the new Macbook Pro influenced the uptick?
Now that we've defined performance and discussed some of the risks in equating performance with the results - allegedly - produced by said performance, let's look at something that might be more interesting: How can we enhance performance? What does it take for someone to perform? Or even better asked, what makes, ceteris paribus, one person perform better and another perform worse?
That's the realm of performance determinants.
The determinants of performance
According to this guy Campbell (1990) [1c], there are three determinants of performance: declarative knowledge, procedural knowledge, and motivation.
In a nutshell, these mean, respectively, knowing what to do, knowing how to do it, and wanting to do it (the difference between what and how is sometimes blurry; I suggest just absorbing the gist).
Declarative knowledge, or knowing what to do, is a matter of education. Campbell defines it as “knowledge of facts, principles, and procedures— knowledge that might be measured by paper-and-pencil tests, for example”.
According to Wikipedia (which has some great easy-to-grasp examples):
In epistemology, descriptive knowledge (also known as propositional knowledge, knowing-that, declarative knowledge, or constative knowledge) is knowledge that can be expressed in a declarative sentence or an indicative proposition. "Knowing-that" can be contrasted with "knowing-how" (also known as "procedural knowledge"), which is knowing how to perform some task, including knowing how to perform it skillfully. It can also be contrasted with "knowing of" (better known as "knowledge by acquaintance"), which is non-propositional knowledge of something which is constituted by familiarity with it or direct awareness of it. By definition, descriptive knowledge is knowledge of particular facts, as potentially expressed by our theories, concepts, principles, schemas, and ideas. The descriptive knowledge that a person possesses constitute her understanding of the world and the way that it works.
I, for example, know what a DCF is, know how an income statement, cash flow statement, and balance sheet work, and so on and so forth. Erik-the-salesperson hopefully knows the product's features, the main objections he can face, and even where he can find the pricing calculator on the intranet.
But it's also knowing what I'm expected to do: I need to perform a valuation analysis of Acme Inc. by Friday, with the goal of helping my boss make the case for whether our firm should invest in Acme Inc. Erik must tackle objections as they arise, but never preempt them; he must always use the pricing calculator available on the intranet; his quota for the quarter is 100 dollars, 80 in product A and 20 in product B, all of it in multi-year contracts; etc.
Procedural knowledge, or knowing how to do it, is knowing how to apply declarative knowledge. Campbell, again, defines it as “facility in actually doing what should be done; it is the combination of knowing what to do and actually being able to do it. It includes skills such as cognitive skill, psychomotor skill, physical skill, self-management skill, and interpersonal skill and might be measured by simulations and job sample tests.” Having procedural knowledge means one is able to apply the declarative knowledge one has in actual work.
Again, Wikipedia does a great job of describing procedural knowledge:
Procedural knowledge (also known as knowing-how, and sometimes referred to as practical knowledge, imperative knowledge, or performative knowledge) is the knowledge exercised in the performance of some task. Unlike descriptive knowledge (also known as "declarative knowledge" or "propositional knowledge" or "knowing-that"), which involves knowledge of specific facts or propositions (e.g. "I know that snow is white"), procedural knowledge involves one's ability to do something (e.g. "I know how to change a flat tire"). A person doesn't need to be able to verbally articulate their procedural knowledge in order for it to count as knowledge, since procedural knowledge requires only knowing how to correctly perform an action or exercise a skill.
In my example, it means I know how to actually go about doing the DCF. Probably because I've done it before, I've faced some challenges and overcame them. I can open a spreadsheet, spread the financials and project them, calculate some metrics, etc. It means Erik knows how to answer the objections without losing the proper tone and voice, etc.
Motivation, to complete our triad, means wanting to do something. Campbell further breaks down motivation into three components. The first is the choice to do something: I either want to do it or not. It's binary. The second is the level of effort I want to expend in doing it (if the choice was “yes,” do I want to go all in?). The third is the duration of the effort: how long I'm willing to sustain the level of effort chosen.
Another way to put it is direction, amplitude, and duration (Campbell 1993).
In my example, do I actually want to do the DCF my boss asked me to? That's a biggie. Do I admire her? Did she ask nicely? Can I foresee the impact of doing the DCF correctly and on time on advancing my career? Or getting a bonus? Or fulfilling my self-image as a top-level analyst? (This opens up a whole other avenue of inquiry on the determinants of motivation, an avenue I won't go down for now.) Further, how much am I willing to give this DCF? Have I decided to double-check everything to make sure there are no typos? If I face challenges, am I willing to stay late? To moonlight at work?
Companies and managers both have a huge impact on the determinants of performance. Knowledge (procedural and declarative) can be recruited for (previous experience is frequently used as an indicator of skill) or trained, via onboarding, via in person or remote/async training (using software such as Learning.Rocks), or via on-the-job training performed by supervisors (a lot more here). Motivation can be promoted by making sure people know how they can advance within companies, by having an inspiring mission, by having inspiring leaders employees wish to emulate, etc.
Now we get to an important consequence of our present discussion of what performance actually is: measuring performance. Measuring performance is really important because it allows us to a) improve performance and b) reward performance.
Measuring performance is the purview of a human resources discipline called performance management. Performance management programs are composed of a set of processes based on the premise that helping individuals perform better will help the organization perform better (and based on our definition of performance, the assumption seems reasonable).
The cornerstone of a performance management program is the performance review. The performance review is a process (have you heard of Qulture.Rocks?) whereby employees' performance is measured against performance criteria. See, when a company measures performance, it doesn't actually measure the value of the sum of someone's behaviors in a given time period, as our definition suggested. It measures some proxy of performance.
Most mature companies measure performance along two proxies, or let's call them “axes”: behaviors and results.
On the behaviors axis, they measure the presence of behaviors (according to people who work with the person whose performance is being measured) that are thought to be correlated with positive value added for the company. It's a correlation, which is why it's a proxy. Anyway, most companies start by measuring performance exclusively along the behaviors axis, because it requires less sophistication and is easier to set up, with the frequent exception of salespeople, who can be measured with the help of some very easily collectable metrics such as, duhh, sales closed.
On the results axis, they measure the achievement of results against some expected level, where Achievement = Result / Target. (Each target is oftentimes articulated as a goal).
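The results-axis arithmetic can be sketched in a few lines. The figures below reuse the Erik example; the function name and the zero-target guard are my own additions for illustration.

```python
# Minimal sketch of the results axis: attainment = result / target.

def attainment(result, target):
    """Fraction of the target actually achieved."""
    if target == 0:
        raise ValueError("target must be nonzero")
    return result / target

# Erik's quarters from the example: a 100-dollar quota
print(f"{attainment(100, 100):.0%}")  # → 100%
print(f"{attainment(90, 100):.0%}")   # → 90%  (last quarter's "underperformance")
```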
We'll get into more detail on the two axes.
Behaviors are usually evaluated using a set of behavior descriptions. These behavior descriptions might be general (everybody is evaluated with the same behaviors) or specific to different areas within a company (sales gets sales-relevant behaviors and finance gets finance-relevant behaviors).
Behaviors might be grouped according to the skills they require (e.g., “objection handling,” “written communication,” “financial modelling”) or to values the company (and its culture) requires everybody to show (e.g., “don't be evil,” “do things that don't scale,” “move fast and break things”).
Results, in turn, are proxies for the value said behaviors generated for the organization.
In some functions, this is easyish to set up. For example, sales, as we've seen extensively. In some functions, this is very hard. For example, in-house counsel, or accounts receivable (AR). When it's hard, it's usually for two reasons.
First, because for some functions it's hard to find KPIs that are relevant performance proxies (is the number of lines of code a good proxy for a programmer?); and even if such KPIs are found, they might not be easily attributable to an individual (how much of the product's engagement uptick was due to the programmer and how much was due to the designer?). Second, because the KPIs might be super hard to measure (how practical is it to even measure product engagement? Do we have somebody who knows SQL and has access to the DB to query it?).
As we've discussed extensively, results can be pretty flawed proxies. It's tempting because results can seem much easier to analyze; they can seem much more objective. It's much less time-consuming to not have to shadow a salesperson all year long and observe her behaviors, opting instead to just gauge how much she actually sold in dollar terms (or how much she sold against her quota) and triangulate some measure of performance. It's much less daunting to shoot numbers back when someone questions why a colleague's performance was measured as better.
As an aside, HR professionals usually think these are actually two different aspects or dimensions of performance. Results are “what” someone does. Behaviors are “how” someone does the “what.” But understanding what performance really is shows how wrong this line of thought is, and allows for a much better perspective. I'll explain: behaviors and results are not “what” and “how,” or different aspects/dimensions of performance, but different ways to measure the same thing.
Pairing both doesn't mean looking at the “what” and the “how.” It means looking at performance with two different tools that aim to measure the same thing. In the sales example, that even makes sense: since both tools are flawed, how about pairing them and getting the average?
Anyway, despite the confusion, it might be net good that HR teams use both “axes” in their performance management programs.
Improving performance: working “harder” or “smarter”
How does performance improve? Two ways: effort or development.
This is really interesting.
A salesperson (let's call her A, for simplicity's sake) can improve her production by working 30 minutes more every day to do an additional call, resulting, given a constant conversion rate of calls to deals done, in an additional amount of dollars or logos sold.
Salesperson A has another way to improve her performance: tweak her sales pitch in order to improve her conversion rates and, given a constant commitment of hours, sell more.
We could call these two alternatives "working harder" and "working smarter," to be aggressively simplistic.
You can argue that improvements by "working smarter" are more durable because it's harder for them to recede. Once you do your better pitch, why would you revert to the old one? "Working harder", on the other hand, is more volatile. If you're not feeling well, lost a bit of motivation, etc., you can just work less and then produce less.
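A back-of-the-envelope comparison makes the two levers concrete. The call volumes and conversion rates below are made-up numbers, not benchmarks:

```python
# Comparing salesperson A's two levers with hypothetical figures.

def deals_per_week(calls_per_day, conversion_rate, workdays=5):
    """Expected deals closed in a week of calling."""
    return calls_per_day * workdays * conversion_rate

baseline = deals_per_week(calls_per_day=10, conversion_rate=0.04)  # ~2.0 deals
# "Working harder": one extra call per day at the same conversion rate
harder = deals_per_week(calls_per_day=11, conversion_rate=0.04)    # ~2.2 deals
# "Working smarter": same call volume, better pitch lifts the conversion rate
smarter = deals_per_week(calls_per_day=10, conversion_rate=0.05)   # ~2.5 deals

print(baseline, harder, smarter)
```

The volatility argument shows up in the inputs: if A has a bad week, `calls_per_day` drops back immediately, but the improved `conversion_rate` tends to stick.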
This was supposed to be an essay, in the sense of an unstructured exploration of a topic without a clear thesis or conclusion. But if you could take one thing with you, I'd urge you to take this: performance is the value that people's behaviors add to an organization. Results and goal attainment are not performance, but proxies for performance. In order to really assess performance, you'd have to shadow people around on all work-related situations and then add the value-added up.
[1a] The book can be found here.
[1b] Most of what I discuss in this article can be traced back to Campbell (1993). It's a great piece of work (even though I don't have the file).
[1c] Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1, pp. 687–732). Palo Alto, CA: Consulting Psychologists Press.
This example still isn't as crisp as I wanted it to be. It doesn't feel 100% right, especially the “duration of effort” part.
I usually hate these “what” and “how” analogies. In the realm of OKRs, John Doerr explains that objectives are “what” we want to achieve, and key results are “how” we will achieve these objectives. So much trouble has been caused by this terrible explanation.
 Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco, CA: Jossey-Bass.