
Series: Rethinking the Likert-Scale, Part 1

Part 1. Surveys in Educational Settings

In education, we tend to use Likert-scale surveys to measure just about everything. 

In our quest to become more evidence-based in our planning and decision-making, we often resort to survey instruments, and Likert-scaled instruments in particular, to measure key outcomes, gather feedback from stakeholders, and evaluate programs and solutions. We use the Likert-scale instrument like a hammer that treats each measurement challenge like it is a nail. Unfortunately, we have reached a point in the arc of evidence-based practice where stakeholders are starting to feel the effects of our use (and perhaps overuse) of “quick and easy surveys” in educational settings.

“We use the Likert-scale instrument like a hammer that treats each measurement challenge like it is a nail.”

– My new mantra

The Consequences of Our Common Practices

As a result of our overuse of Likert-scale surveys, the practices we employ in each phase of the survey process lead to unintended consequences, many of which I am sure you have also experienced:

Phase: Instrument selection
Common practice: We use surveys to measure everything under the sun, such as engagement, attitudes, reactions to professional learning, and implementation of instructional practices.
Unintended consequence: We begin to measure low-priority constructs, seeking “feedback” about trivial operational decisions instead of the key strategic levers that affect organizational performance.

Phase: Survey design
Common practice: We design Likert-scaled instruments because they seem “easy” to write.
Unintended consequence: We fail to consider item designs that may better measure what matters in leading transformation.

Phase: Survey administration
Common practice: We gravitate toward self-service tools that grant affordable access to efficient data collection.
Unintended consequence: This luxury has led to survey ubiquity in our data-driven culture, which in some contexts has minimized the value of self-report data.

Phase: Survey analysis
Common practice: We compute and report mean scores and percent agreement for most indicators.
Unintended consequence: We only skim the surface of the meaning and narrative that we desire from our data (see the sketch just below this table).

Phase: Survey interpretation
Common practice: We evaluate results based on which items “scored the lowest or the highest,” and use these scores to set goals or track improvement over time.
Unintended consequence: We fail to take into account the ways these items violate statistical assumptions, or to meaningfully triangulate the results with other information and context.

Source: ARKEN Research, 2020. www.arkenresearch.com
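
To make the survey-analysis row concrete, here is a minimal sketch in Python. The two items and their responses are hypothetical, invented purely for illustration; the sketch simply computes the two summaries named in the table (mean score and percent agreement) for two very different response patterns.

```python
# A minimal, hypothetical sketch (made-up responses, not real survey data) of the
# two common summaries for a 1-5 Likert item: the mean score and the percent
# agreement (share of respondents choosing 4 or 5).

item_a = [4, 4, 4, 4, 4, 2, 2, 2, 2, 2]   # mild agreement vs. mild disagreement
item_b = [5, 5, 5, 5, 5, 1, 1, 1, 1, 1]   # sharply polarized: strong agree vs. strong disagree

for label, responses in [("Item A", item_a), ("Item B", item_b)]:
    avg = sum(responses) / len(responses)                               # mean score
    pct_agree = 100 * sum(r >= 4 for r in responses) / len(responses)   # percent agreement
    print(f"{label}: mean = {avg:.1f}, percent agreement = {pct_agree:.0f}%")

# Both items print "mean = 3.0, percent agreement = 50%", yet their response
# patterns tell very different stories -- a distinction these summaries hide.
```

In a typical report, these two items would look interchangeable, even though one reflects lukewarm, mixed opinion and the other a sharply divided group that probably deserves a closer look.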

Over the last ten-plus years as a continuous improvement leader and evaluator, I know I have been guilty of these practices. In the past, Likert-scale instruments helped me obtain data when information was scarce, when outcomes seemed intangible, or when I desperately needed to verify the merit of a program I cared deeply about. But I have learned the hard way that the rush to collect such data led to downstream impacts I later came to regret.

While many factors have contributed to the current decline of survey practice in schools, I believe one key culprit, with a trickle-down impact on all the other issues, is our overreliance on the Likert-scale design.

The Diminishing Value of Self-Report Data

Chief among the many consequences of the perceived ease, affordability, and predictability of Likert-scale instruments is the steadily diminishing value of self-report data as a meaningful way to gather feedback about the things that matter to leaders. Many educational leaders dismiss surveys in general as too “biased” or “unreliable” because they depend on self-report methods.

But frustrations related to our undisciplined survey practices should not undercut the importance of self-report data.  

In a constructivist paradigm focused on improving the experiences of educators and learners, self-report measures still serve as the gold standard of evidence, and they hold much unrealized potential as a means to engage and empower stakeholders. When designed with intentionality and a laser-like focus on the most important enablers of effectiveness, self-report instruments can provide meaningful insight into practices and a sound way to measure and track progress over time.

While we may continue to rely on the Likert-scale design in some instances, I believe we have an opportunity to rethink the Likert-scale and consider alternative item types and models for measuring and evaluating what matters. In Part 2 of this series, I discuss why and how this might be achieved.
