The Merit Systems Protection Board (MSPB) recently overturned the removal of a Department of Defense (DoD) employee for unacceptable performance. The reason was all too familiar to those of us who have worked in the area of Federal performance evaluations over the past 20-30 years. The agency, the Defense Security Service (DSS), failed to adequately inform its employee, Larry Van Prichard, of the difference between unacceptable and marginal performance as it pursued his eventual removal.
In an earlier FedSmith column, I was very critical of another performance case that the MSPB reversed on a technicality. There, the issue dissected was Office of Personnel Management (OPM) approval of an agency's appraisal system. Here is another case where a procedural issue trumped the merits of the employee's performance. In Van Prichard v. Department of Defense, however, I'm much less critical of the Board, whose opinion seems thorough and well-reasoned.
Oops, what I meant to say was…
From the Board's decision, it appears that DSS put Mr. Van Prichard on a performance improvement plan (PIP) due to genuine concerns about his competence to do the job he occupies. The agency's undoing can be traced back to the beginning of its process. Mr. Van Prichard received a "Notification of Unacceptable Performance and Opportunity to Improve" which specified his failings and what improvements were needed for him to reach the Fully Successful level of performance… and that's where they made (and compounded) their error.
Like so many Federal agencies, DSS has a five-level appraisal system. In this case, the bottom two levels are Marginal and Unacceptable. In the context of school, if unacceptable performance is considered failing, then marginal performance (a "D") is passing. If the appellant had been told what it took to reach the safe haven of Marginal rather than Fully Successful, it appears DSS would have prevailed in this case.
When the Merit Board never gets to the merits
Does this case serve as an example of bad advice from human resources and their legal counsel, management incompetence, or system design failure? Perhaps the answer is all three. My concern is primarily focused on the repeated failure of Federal agency appraisal systems to achieve clarity. I don't root for one side or the other in such matters. I simply want such cases decided on their merits rather than on procedural issues, as happened here.
Over the 30+ years since I began my career in Federal personnel, agencies have stumbled over the notion of “minimal” or “marginal” performance time and again. In case after case, an absent or poorly written marginal standard has repeatedly invalidated hundreds of supervisory, legal, and HR work hours devoted to proving a Fed to be an unacceptable performer. When people keep tripping over a crack in the sidewalk, it’s time to acknowledge that the sidewalk may be more at fault than the pedestrians.
It’s the same old song
Eliminating this "D" level of performance was addressed in an article I posted to this site years ago titled Unmasking the Marginal Employee. Fellow FedSmith contributor Steve Oppermann also addressed the difficulties attendant to the minimal performance level in his piece titled When Mediocre is Good Enough. We're still waiting on OPM and most Chief Human Capital Officers to catch up with our simple logic. I can't imagine the press, politicians or public objecting to DoD abandoning the notion that demonstrating marginal or minimally acceptable performance can save a Fed's job.
The problem that DSS encountered in this case also rested in its definition of minimal performance… or the lack thereof. While marginal performance may be subpar, it's not failing. So how does one write standards to describe marginal performance? Over the course of many years (and many MSPB and court decisions), we in HR should have learned that written minimal/marginal performance standards can be interpreted as "backwards" and invalidate hundreds of hours invested in a performance case.
Back to the backwards standards
Invalid minimally acceptable standards are the result of attempts to describe how bad one has to be to earn a minimally acceptable rating. Below is an example from the Department of the Interior's "Benchmark Standards" describing "Minimally Successful" performance:
The employee’s performance shows serious deficiencies that requires [sic] correction. The employee’s work frequently needs revision or adjustments to meet a minimally successful level. All assignments are completed, but often require assistance from supervisor and/or peers. Organizational goals and objectives are met only as a result of close supervision. On one or more occasions, important work requires unusually close supervision to meet organizational goals or needs so much revision that deadlines were missed or imperiled.
Employee shows a lack of awareness of policy implications or assignments; inappropriate or incomplete use of programs or services; circumvention of established procedures, resulting in unnecessary expenditure of time or money; reluctance to accept responsibility; disorganization in carrying out assignments; incomplete understanding of one or more important areas of the field of work; unreliable methods for completing assignments; lack of clarity in writing and speaking; and/or failure to promote team spirit.
If that leads a reader to picture the lowest-performing Federal coworker they have ever imagined, problems are sure to result. This was Interior's Minimally Successful standard. In the Bureau of Indian Affairs, National Park Service, Bureau of Land Management, or Fish and Wildlife Service, your coworker had to be worse than this to be put on a Performance Improvement Plan (PIP) and removed. It's "backwards" – a term used by the US Court of Appeals way back in 1988 (Eibel v. Navy, 857 F.2d 1439).
What’s worse than “worse?”
Interior recently abandoned its benchmark for Minimally Successful performance without a substitution. That puts it in the same boat as the Defense Security Service in this recent MSPB case – having an unwritten minimal performance standard. In Van Prichard, the MSPB noted that DSS had only the most oblique reference to marginal performance in its performance appraisal directive. It was found in the "Definitions" section. The pertinent part reads:
“…less than Fully Successful and supervisory guidance and assistance are more than normally required. To be rated Marginal, any of the Critical Elements are rated ‘Marginal’ and no elements are rated ‘Unacceptable.’”
The only thing clear from this “definition” is that Marginal is less than Fully Successful, and marginal employees require more supervision. Technically, this is an “open-ended” standard, which is inappropriate to have at the “D” level. HR specialists should know this and be on the lookout for transgressions. In this case, they (or some consultant) probably wrote it.
The definition/standard doesn't specify how much of this guidance and assistance tilts the scales to render the offender unacceptable, or an "F." If two hours a day of guidance is more than normal, then so is six hours a day. It's analogous to telling a student that a "D" is a passing grade representing an average below 70%. Really? How about 53%… or 27%? An open-ended marginal standard (having an upper, but no lower, limit) is commonly toxic when attempting to prove unacceptable performance.
The consternation of extrapolation
Compounding their problems, the Defense Security Service only required that the Fully Successful performance be defined in writing. This is a common feature of many four and five-level appraisal programs. If the only performance standard in writing is Fully Successful, however, it is impossible to distinguish between marginal and unacceptable. By analogy, if all you know is that a “C” ranges from 70 to 79%, what is an “A”… or an “F?”
This conundrum has resulted in judicial reversals over the course of three decades. DSS can now be added to a long list of managers and HR specialists whose cases failed over such a basic issue. Before the government fires someone for unacceptable performance, that employee should be given a clear picture of the difference between passing and failing. In the early 1980s, OPM told us that "extrapolating two levels" (up or down) renders an appraisal invalid. Decades later, the confusion persists. DSS is not alone.
Forests and trees
For evaluations to be more than just a “necessary evil,” the Federal HR community needs to consider where and how they contribute or save real dollars. From where I sit, it seems we have lost sight of both the fundamental purpose and basic design of our performance appraisal system.
The Purpose: To enhance and improve mission accomplishment by motivating individual employees to perform more effectively and efficiently, and removing those who cannot be motivated to succeed.
The Design: Critical elements are what we rate. Performance standards are the yardsticks by which elements are rated.
At a time when lots of consultants and human capitalists want to tie salary increases to appraisals (as if there were any money available for salary increases!), we continue to see cases where managers still cannot distinguish between passing and failing. Differentiating among good, better, and best employees is even more of a crap shoot. Various agency appraisal directives call for critical and non-critical elements, mandatory elements, sub-elements, generic performance standards, unwritten standards, GPRA tie-ins, and burdensome justifications if one tries to rate an employee toward the top of the scale. Complications lead to confusion, which leads to pre-determined distributions, favoritism, and other non-merit practices – all of which tells us that something's broken and HR doesn't seem to know how it should be fixed.
Ideas for avoiding such problems while focusing on the purpose of performance appraisals will be examined in a follow-on piece to be posted soon. For now, Mr. Van Prichard is back at work – hopefully demonstrating his competence to do the job. Whether the Department of Defense will eliminate the minimal or marginal performance level that exists in the evaluation systems of many of its component agencies remains to be seen.
I’m not holding my breath.