Background
As health care becomes increasingly patient-centered, patients are, sometimes reluctantly, expected to assume more responsibility for their care process. To take up this responsibility, they need the competencies to make well-informed decisions [3, 4]. In addition to reading and numerical skills applied in a medical context, this involves more complex and interconnected abilities, such as acting upon written health information, communicating needs to health professionals, understanding health instructions, applying them correctly to one's personal situation, and taking action when needed [5].
These competencies are captured in the concept of health literacy (HL), which can be defined as a person's knowledge, motivation and competencies to access, understand, appraise, and apply health information in order to make judgments and take decisions concerning health [6]. The importance of HL for health care services and systems is recognized worldwide, and its reach is increasing: while the concept was originally used mainly within health care services, in recent years it has also gained ground in public health [3, 7]. A growing number of studies show that people with low HL not only have lower medication adherence, poorer self-care and worse treatment outcomes, but are also less likely to engage in health-promoting behaviour, participate in screening programs, or make use of preventive services [8-11].
Over the past years, a range of HL measurement tools have been developed, which vary in approach, design, and purpose [7, 12]. Some tools focus on HL related to specific conditions, such as diabetes, cardiovascular disease, cancer, oral health, or mental health, or on specific population groups, while others are more generic. Some have been developed to screen for functional HL problems in clinical settings, and are consequently short and easy to use, while others take a broader approach and provide an in-depth assessment of HL and its dimensions at population level [12]. Examples of the first kind are the Rapid Estimate of Adult Literacy in Medicine (REALM) [13, 14], the Newest Vital Sign (NVS) [15], and the Short Assessment of Health Literacy (SAHL) [16]. Examples of the second kind are the Critical Health Competence Test (CHC) [17], the Swiss Health Literacy Survey [13, 18], the Health Literacy Management Scale (HeLMS) [19] and the Health Literacy Questionnaire (HLQ) [20].
A relatively new measure, which can be used both for screening and for a more in-depth investigation of HL, is the European Health Literacy Survey (HLS-EU-Q). A consortium of European organisations developed this instrument, guided by a conceptual model of HL derived from a systematic literature review [7]. The original tool (HLS-EU-Q47) consists of 47 items addressing self-reported difficulties in accessing, understanding, appraising and applying information in tasks concerning decision-making in health care, disease prevention, and health promotion, and was designed for face-to-face or telephone interviewing [21]. A short form (HLS-EU-Q16), comprising 16 of the 47 items, was developed for quicker screening of HL, either through interview or via self-report (pen and paper or online) [22]. Its validation was done via Rasch modelling (1-parameter dichotomous model) and item selection on the basis of content and face validity (item relevance) [22-24]. Although it does not permit statements about sub-dimensions of HL, HLS-EU-Q16 is a good approximation of the 47-item version, with a high correlation (r = .82) with the general HL score of the HLS-EU-Q47 and a 75.8% concurrent classification of respondents as having insufficient, limited or sufficient HL. HLS-EU-Q16 has been used in several countries, including Belgium [25], the Netherlands [16, 26] and Germany [27, 28].
A major advantage of short self-report measures of health literacy such as HLS-EU-Q16 is that they are relatively easy to administer, both in clinical settings and at population level, while allowing comparison with other (patient or general) populations. Some scholars have also used online versions of the tool [25], which is particularly useful for population research involving large samples. The downside, however, is that completing a self-report questionnaire requires a certain level of literacy from respondents. Research by Van der Heide et al. using the HLS-EU-Q47 revealed that limited HL is frequently reported among those with lower levels of education [21, 29]. To allow the inclusion of low-literate persons in the survey, the original HLS-EU-Q47 underwent a comprehensibility test for all languages involved, but no similar test was performed on the short self-report version [21]. Given the difference between a computer-assisted interview and a self-report questionnaire, it is important to ascertain that HLS-EU-Q16 can also reach and detect people with low literacy, as an instrument that is judged to be comprehensible and easy to complete enhances both participation and the quality of the measurement.
The aim of this study was to examine the suitability of HLS-EU-Q16 for use in a population of people with low literacy living in vulnerable conditions. As this group can be expected to have limited health literacy, it is of paramount importance to measure their health literacy adequately. The validation of HLS-EU-Q16 is not discussed in this research, as it is documented elsewhere.
Discussion
When developing questionnaires to measure HL, it is important to consider respondents' level of education. Therefore, the feasibility of an HL instrument (HLS-EU-Q16) was examined in people with low literacy. The objective was to determine how respondents experienced the length, comprehension and layout (together: 'suitability') of this instrument. In such a feasibility assessment, a variety of factors (e.g., difficulty of a text, layout) can be addressed. It is a subjective approach, in contrast to readability tests, sometimes contested in the literature, in which texts are assessed quantitatively [37]. Moreover, interviewing the intended users of the questionnaire is helpful for gathering in-depth information [38].
The results of this study indicate that HLS-EU-Q16 is suitable for determining the level of HL in a population considered to have low literacy on account of their lower income and education. Overall, the questions of the instrument are well understood. Moreover, cognitive interviews about participants' opinion of the questionnaire revealed that fewer items of HLS-EU-Q16 were perceived as difficult, or required revision, than the researchers' informal B1-level test for documents had suggested. Based on the presented findings, it can be concluded that changing the vocabulary alone is insufficient to improve the suitability of HLS-EU-Q16.
Reported difficulties
With reference to the initial framework of the HLS-EU questionnaire [3], the questions most often reported to be less comprehensible were those about 'disease prevention' (domain) or 'appraisal' of information (competency). Irrespective of domain or competency, all questions referring to a potential role of the media in health care seemed hard to relate to. Referring to one's general practitioner as a source of health information was a frequent and readily given response, most often accompanied by the comment that respondents had difficulty deciding how to respond. Indirectly, these findings reveal the trust placed in general practitioners and, consequently, their crucial position in health care, particularly for vulnerable groups (people with low levels of income and education) [39-41].
The questions least understood in both questionnaires resulted in a high number of non-responses. Although the reliability of HL scores is irrelevant given the research design, the non-responses illustrate the limited comprehension of HLS-EU-Q16. However, respondents reporting incomprehension of the instrument most often still answered at least half of the items. Hampered comprehension of questions could be attributed to converging difficulties: vocabulary that was perceived as difficult ('to judge' (Dutch: 'beoordelen'); 'media'), abstraction, and participants being indecisive about the appropriate response.
Some difficulties persisted even when HLS-EU-Q16-EZ (the modified questionnaire) was presented to participants. Because data collection with HLS-EU-Q16 was not carried out face-to-face, as opposed to HLS-EU-Q47, for which Computer Assisted Personal Interviewing (CAPI) was used, people with low literacy could benefit from a slightly adjusted HLS-EU-Q16 [21]. To resolve both comprehension and decision-process problems, some questions would benefit from additional explanation. Providing extra information in short, explanatory sentences can bypass problems concerning the meaning of words. Moreover, it may reduce confusion among respondents by giving more context, making it easier for people with low literacy to respond. This suggestion for contextual information is based on reports of confusing questions: some questions were perceived as too similar, leaving respondents unsure of their objective. To situate questions, providing information about the 'competencies' to which the questions refer (for instance, distinguishing questions about 'appraising' from those about 'applying') could facilitate interpretation. In conclusion, it is advisable to add information similar to what an interviewer would provide when collecting data via CAPI (HLS-EU-Q47). Although this occurred rarely, questions can be genuinely irrelevant in very specific situations: for instance, someone who must rely on health care professionals' (or informal caregivers') decisions because of their medical condition might find some questions irrelevant, as might someone with no or limited access to 'the media'. In these rare cases, people would benefit from an additional response category; conversely, such a category may increase the likelihood of non-responses. If the questionnaire were to be used with forced answering (electronically), the benefits and disadvantages of an extra category should be weighed against each other.
The following changes might improve the understanding of the questions: basic instructions can indicate how questions should be interpreted; guidance on the answer options can be provided in case people feel they can only 'rely on their general practitioner or other (non-)medically trained people'; questions can be broken down into components, so that people do not find the situation portrayed in part of the question irrelevant to them; the meaning of terminology might be explained through an example, for instance of a preventive examination; or questions might be put in a different, more clarifying order (per domain or competency surveyed) [42]. These suggestions should be implemented cautiously, avoiding narrowing the questionnaire to specific health-related topics. Nevertheless, some additional information might increase comprehensibility and potentially the response rate.
Implications
In this research, the focus was on HLS-EU-Q16 (in Dutch). Because the HLS-EU questionnaires are available in different languages, but not all of them have been tested for 'cultural applicability', it would be helpful to determine the feasibility of an HLS-EU questionnaire beforehand, preferably by surveying or interviewing the intended users [7, 38, 43]. Evaluating the questionnaire's suitability and, where necessary, adjusting it might be particularly useful if questionnaires are intended to be used in a pen-and-paper version, without assistance from a researcher [44]. Because assessing feasibility is subjective, adjustments made in response to respondents' feedback are specific to those interviewees and therefore not generalizable.
Strengths and limitations
A strength of this study is the insight gained into the cognitive process of respondents when completing HLS-EU-Q16. Moreover, this study specifically targeted a population considered vulnerable because of low levels of education and income. To the best of our knowledge, this was the first study examining the feasibility of a questionnaire in this population. Similar studies have been carried out in a paediatric setting or targeting vulnerable groups such as refugees [43, 45]. The research of Wångdahl et al., examining HL in refugees, indicated that sufficient language proficiency (preferably the respondents' mother tongue) and modification of questions or answer options contribute to a questionnaire's accessibility. In their research, HLS-EU-Q16 (in Swedish) was modified on the basis of data collected through cognitive interviews [43].
The main limitations of this study relate to the vulnerable character of the population. Recruitment was challenging: an unusual yet inclusive approach was used to reach eligible participants in order to avoid stigmatization. Reaching younger people who met the education requirement proved even more challenging, which was to be expected given that education is compulsory until the age of 18 in Belgium. Despite efforts to reach this population, the sample size remained relatively small; however, as testing suitability is a qualitative endeavour, it does not rely on representative samples. Data saturation was reached after 13 participants had been interviewed, a sample size regarded as sufficient [46].
Although a test-retest design would have generated interesting data, reproducibility testing was not feasible. Participants seemed to find it difficult to distinguish between answering the 16 questions and commenting on the feasibility of those questions (when the interviewer asked how the questionnaire was perceived, respondents often reported on their answers instead). Asking them to participate in a second, similar survey would probably have led to more confusion.
Interviews were conducted using the verbal probing technique: the researcher attempted to minimize the use of 'leading' probes to reduce bias. However, as the purpose of the interview was to analyze the comprehension of questions, an example or clarifying sentence was provided, with the intent of gaining insight into the difficulties reported by participants and potential solutions, but never from the start. Ideally, the interview would have been conducted during the completion of the questionnaire; instead, the time between completing the questionnaire and the interview was kept to a maximum of one hour. A final limitation is that this research was carried out with paper-based questionnaires only. The findings presented in this study are not necessarily transferable to a web-based version of the same instrument, as working with digital surveys requires a specific set of skills that was not taken into account in this research.
Future research
Based on the reported difficulties, an adapted HLS-EU-Q16 could be developed to determine HL in people with low literacy. The findings of this research suggest that some people with potentially limited HL might benefit from some questions being formulated more clearly, for instance by adapting the wording or by providing contextual information to facilitate interpretation [42]. Moreover, a complementary, modified questionnaire could be provided, or it might be useful to determine HL through interviews rather than pen-and-paper surveys [47].
Acknowledgements
Not applicable.