
Self-Report: It’s not what you say, it’s how you say it

Blog from Rebecca Williams

Reading Time: 5 minutes

It’s four o’clock on a Friday, you’ve been working hard all week, and now the sun is shining outside. You’re one frolic away from the weekend when you remember that you’ve recently signed up for a research study. All that’s required is to fill in a single questionnaire, which you open to discover is ten pages long. By the time you reach what feels like the thousandth question on page five, answering insightfully begins to slip down the list of priorities. You look outside at the deckchair awaiting you in the garden and start to implement a new strategy. Sure, answering A for every question might be a tad conspicuous, but what if you started strategically alternating between A and B, throwing in the occasional C for a bit of flavour?

Self-report scales are the bread and butter of psychological research. If you type “self-report” into Google Scholar, you’ll be met by over 3.5 million search results. However, we place a lot of trust in our participants when we hand them that questionnaire. We assume they’ll answer in a way that is meaningful. We assume they won’t get distracted by the latest sitcom. We assume they won’t attempt to speed-run our self-report battery to try to win the new world record for the number of questionnaires answered in five minutes. The truth is that there’s a lot of evidence to suggest this assumption is flawed. Research on the topic labels this kind of responding “careless”. In healthy individuals this pejorative term is likely to be a fair label (if a little harsh), but in patients there is a host of other reasons why engagement is difficult beyond a simple lack of effort.

My research focuses on individuals with syndromes associated with frontotemporal degeneration, which can manifest as a spectrum of conditions, including frontotemporal dementia. This type of dementia is characterised by behavioural symptoms rather than memory loss, with apathy, impulsivity, and perseveration as its hallmarks. This means that our patients don’t just get bored, they have a clinical lack of motivation. Our patients don’t just implement new strategies, they perseverate, answering every question in the same way or in recognisable patterns. What’s more, these individuals often have very different insight into their illness from that of their caregivers and loved ones. In these cases, the assumption that participants engage meaningfully with self-report scales becomes even more tenuous. Given how commonly these scales are used, not just in research but in clinical diagnosis, we wanted to explore the response strategies our patients use and whether these strategies might themselves predict other clinical factors.

In our lab, we routinely use visual analogue scales in experimental medicine studies to assess mood. Pairs of antonyms are placed at opposite ends of a response line, and participants are asked to mark on the line how they feel between these word pairs. Do they feel closer to happy or sad? Tense or tranquil? Attentive or withdrawn? I took these responses and began analysing them in a slightly unorthodox way… by ignoring the words. In this study I wasn’t interested in what the participants were saying, but rather how they were saying it.

I found three different ways to measure potential response strategies. The simplest was counting how many responses in a row were the same. The questions on our scale are flipped for valence, meaning you really can’t answer meaningfully by putting a straight line all the way down one side of the page. This was our ‘invariant’ response strategy.

The second measure focussed on patterns in the responses. For this we used Lempel-Ziv compression, which you may not have heard of but have most definitely used, as it’s the algorithm your computer implements to zip up files. It works by (quite cleverly) identifying recurring patterns in the data and replacing them with single markers. The more random the data, the more markers you need; in the most extreme case, truly random data could not be compressed at all. This algorithm was perfect for quantifying our ‘patterned’ response strategy.

For the final measure we went back to taking account of word meaning. We paired up our questions based on their semantic similarity and looked at the correlation between them. Simply put, questions that mean the same thing should be strongly correlated. Marking yourself as both relaxed and tense suggests your answers may not be very meaningful. And so we had strategy three, ‘internally inconsistent’.
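To make the three measures concrete, here is a rough sketch in Python. To be clear, this is an illustration rather than the code from the study: the standard-library `zlib` compressor (a Lempel-Ziv variant) stands in for whatever compression the paper used, the responses and question pairings are made up, and a real analysis would need proper handling of missing data and scale direction.

```python
import zlib
from math import sqrt


def longest_run(responses):
    """'Invariant' strategy: length of the longest streak of identical answers."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best


def compressibility(responses):
    """'Patterned' strategy: compressed size divided by raw size.

    Values near 0 mean lots of repeating structure; values near (or above)
    1 mean the answers look closer to random. Assumes integer responses in
    0-255, e.g. millimetres along a 100 mm visual analogue line.
    """
    raw = bytes(responses)
    return len(zlib.compress(raw, 9)) / len(raw)


def pearson(x, y):
    """Plain Pearson correlation, for the 'internally inconsistent' measure."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# A made-up participant who marks almost every line in the same place:
answers = [72, 72, 72, 72, 71] * 8
print(longest_run(answers))      # streaks of identical marks
print(compressibility(answers))  # well below 1: highly compressible

# Hypothetical semantically paired items (e.g. 'happy-sad' vs a near-synonym
# pair). A low correlation here would flag internally inconsistent responding.
first_of_pair = [80, 75, 20, 60]
second_of_pair = [78, 70, 25, 55]
print(pearson(first_of_pair, second_of_pair))
```

The `zlib` ratio is only a stand-in for a proper Lempel-Ziv complexity measure, but it captures the key intuition: repetitive, patterned answer sheets shrink a lot when zipped, while genuinely varied ones barely shrink at all.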

A few months of data collection and analysis later, here we are. Our patients showed significantly more invariant, more patterned, and more internally inconsistent responses than our controls. Not only that, but some of these response strategies could predict cognition above and beyond the scale itself. So, what does this mean for the field of self-report scales? Does it mean we should discard every piece of science done with questionnaires in patients over the last hundred years? Unsurprisingly, no. For one, this study explored visual analogue scales, and we just don’t know whether the same problem exists in other types of scale. But we can’t ignore this violation of assumptions either. We test our statistical assumptions all the time, so in future let’s try to do the same for our self-report measures. Just because a scale has been validated doesn’t mean it is valid in the cohort you wish to study. There are no easy ways to avoid invariant and patterned responding in patients, but we should take it into account when interpreting our results.

So, what can we take away from all of this?

  1. Use caution when interpreting self-report measures.
  2. Check your assumptions.
  3. If you ever take part in research and get bored mid-way through, please take a break before you finish the questionnaire.

If you’re interested in finding out more, you can read my latest paper on this topic: https://www.nature.com/articles/s41598-023-35758-5



Rebecca Williams

Author

Rebecca Williams is a PhD student at the University of Cambridge. Though originally from ‘up North’ in a small town called Leigh, she did her undergraduate and master’s degrees at the University of Oxford before defecting to Cambridge for her doctorate researching frontotemporal dementia and apathy. She now spends her days collecting data from wonderful volunteers, and coding. Outside work, she plays board games, and is very crafty.


