Background
Due to the growing availability of medical and health information through various sources and forms of mass media [1], more and more people are becoming consumers of health information [2]. However, this accumulation of knowledge does not necessarily result in adequate information about one's health or illness. In fact, the public health literature suggests that a large percentage of individuals are unaware of the symptoms, mechanism and management of their illness, and many feel that their level of knowledge is unsatisfactory [3]. Therefore, delivering systematic, comprehensive and reliable information about the nature, prevention, detection and self-management of illnesses is imperative.
Governments and health professionals are recognizing this gap in knowledge, and increasing amounts of money and time have consequently been spent on designing and developing health education interventions. Health education interventions usually provide individuals with the necessary knowledge and/or skills regarding the nature of an illness; its mechanism, signs or symptoms; consequences of the illness; prevention, detection techniques or self-monitoring practices; and appropriate self-care, management and treatment methods. For instance, self-management education for chronic illnesses has been developed to empower participants, teach them skills and techniques, and improve their interaction with the healthcare system to enhance the management of their chronic condition [4]. Interventions can target either specific groups of individuals (such as those with specific illnesses; caregivers, family and friends of those with illnesses; or healthcare professionals) or the general public.
Interventions are usually evaluated by surveying or interviewing participants. Data such as participants' experiences, the acceptability of the program, and relevant information about individuals before and after completion of the education sessions are usually collected and analyzed. Evaluation or follow-up surveys provide important data regarding the effectiveness and limitations of the interventions.
In addition to the reported benefits of health education interventions across various health issues [5‐10], the key to program effectiveness is participation and retention [11‐15]. Unfortunately, not everyone is willing to participate in health interventions and their evaluation upon invitation. In fact, health education interventions are vulnerable to low participation rates [12,16‐18]. Low participation not only prevents these interventions from being effective and reaching their target population and goals, but also impedes the gathering of information necessary to further improve these programs. The amount of research on this issue is limited, and even fewer investigations examine the factors that could increase participation. We believe that intervention factors such as convenience and accessibility [19], length and frequency, and the characteristics of the setting or organization offering the intervention [20] can offset the perceived benefits and decrease participation in health education interventions [21].
The purpose of this study was to identify the preferred design features of health education interventions and intervention evaluation surveys in a general ambulatory population. By determining the extent to which people are willing to participate in education interventions based on various design features, program managers, educators and researchers can develop education initiatives that are more appealing, convenient and accessible to their population of interest, while at the same time increasing program participation.
Discussion
Given that many health education interventions are provided in ambulatory settings, the goal of this study was to ask an ambulatory-based population about their preferences regarding settings, survey methods, duration and frequency for interventions and evaluation surveys. Our study findings suggest that research settings do influence people's willingness to participate in a health education intervention. Most individuals are willing to participate in only one education session per year, for 30 to 60 minutes, preferably in hospitals, in either a group or one-on-one format. In addition to already being at the hospital for other reasons, individuals may find hospital settings more comforting due to the presence of other people, or may feel that hospital settings legitimize the proposed intervention or survey [20]. In fact, individuals may expect to receive some education about their illness while they are in the hospital. Also, individuals with chronic illnesses often feel isolated in their management of the disease [22], and family members may feel powerless in their support due to a shortage of knowledge regarding the experience. In such cases, a group environment may provide participants with an opportunity to interact with others who have similar illnesses or concerns, which may explain the slightly higher popularity of group sessions over the one-on-one in-hospital format. However, some participants may dislike a group format for various reasons [21] and may want more interaction with knowledgeable health professionals [23], thus preferring a one-on-one setting. Providing individuals with a choice between individual and group education is ideal, but not always feasible, particularly when working with limited resources.
The majority of respondents are not willing to participate in education sessions in their home, a different pattern from the responses towards in-hospital settings, possibly because this format may represent an invasion of their privacy and security. Certain individuals may also be discouraged from participating in home interviews due to fear of strangers, violence or abuse [24]. However, people who are financially or physically unable to travel to the location where a health education intervention is being offered [13,25‐27], or who simply dislike hospital settings [19], may prefer in-home education sessions or sessions in local community settings.
The type of survey method used also influences how often and for how long respondents are willing to complete an evaluation survey. In general, most people are willing to spend 20 to 30 minutes completing one to two surveys in hospital or by mail, but lower enthusiasm was observed for in-home or telephone methods. Surveys at hospitals can be completed while waiting for a medical appointment or immediately after the intervention, and mail surveys may be more popular because they allow participants to complete the survey at their convenience. Reasons for lower willingness to complete in-home surveys may be similar to those previously noted for in-home education sessions. Since telephone surveys usually require immediate completion, devoting a continuous period of time may be tiresome unless one is very motivated.
The preference to participate in studies by mail, as opposed to at home with an interviewer or by telephone, is problematic for program managers and researchers because mail surveys tend to have low response rates [28]. In addition, individuals with less education tend to be underrepresented in mail surveys, which may introduce a non-response bias [29]. Furthermore, people may forget or may feel less obliged to fill out mail surveys compared with other, more interactive modalities. However, mail surveys may provide richer and more accurate data when sensitive information is being sought, given the absence of an interviewer [24,30]. They are also more practical and inexpensive, costing much less per person than telephone or in-person interviews [30‐32]. Several combined approaches can be used to enhance response rates from mail surveys, such as sending a reminder to participants within a month [28], re-mailing the survey [33,34], or including small monetary incentives [33‐37]. In any case, using both mail and a subsequent telephone follow-up, with the option of completing the survey by telephone, may be most effective [38].
Currently, the Internet is not universally accessible and not all users are proficient with it [39], which may explain its unpopularity relative to hospital and mail surveys. Younger age and a higher level of education are identified as the strongest predictors of Internet access and use [40]. Although most respondents had an education level beyond high school, the mean age of our sample population was 44.5 years, with 83.2% of respondents being over 30 years of age. This middle-aged population may be less receptive to computer technology and less likely to be Internet-orientated than an 18–29 age group [40], which might also account for the relatively low preference for surveys via the Internet. However, as people become more Internet-orientated, this method may become more popular and practical [1]. Moreover, recent studies have shown an increasing preference for computer-based surveys because they offer more privacy and are easy to use [39,41]. One study showed that although e-surveys resulted in a lower response rate than postal surveys, the data quality was equivalent and was obtained in a shorter average response time, indicating that e-surveys could become a more feasible evaluation method in the future [42]. There is also ongoing exploration into the development of electronic and Internet-based health education interventions and support programs, which are receiving favorable responses among participants in other research studies [43].
Due to the overall shortage of information regarding whether questionnaire length affects the response rate, this factor is still debated in the academic literature. Some studies show that the response rate and the number of missing responses in a questionnaire are not related to questionnaire length, provided the questionnaire is well designed and easy to complete [28,31,44]. Other studies show that increasing the length of a questionnaire increases the burden of completion and decreases the response rate [45‐48]. In our study, the number of respondents who were willing to participate in education interventions or evaluation surveys differed greatly between settings, especially at a lower frequency or for a shorter duration. Nevertheless, the majority of respondents were willing to spend 30–60 minutes participating in an education session and 20–30 minutes completing a survey; however, as the number and length of education sessions and evaluation surveys increased, fewer people were willing to participate and less of a preference for any particular setting or survey method was observed. This inverse relationship demonstrates that most respondents are only willing to allocate a limited amount of time to education sessions and surveys. Individuals who are willing to participate in many education sessions or surveys, and for longer durations, may be motivated by different factors, and thus the location or method of the intervention or evaluation surveys may not be an issue for them. Consequently, designing short and convenient education programs and evaluation surveys appears to be important. For instance, an inverse relationship was found between program intensity and retention in work-site health promotion programs [11], suggesting that intensive programs demanding more time, effort and commitment may generate substantially lower participation and retention rates [49]. Furthermore, overly structured or inflexible interventions may not be optimal [50], and providing participants with options to choose from various settings or delivery methods that easily fit into their schedules is likely more effective in retaining patients. Short and convenient scheduling of intervention sessions may be a key element in increasing participation, especially among full-time working individuals [11].
There are a few limitations associated with this study. First, our study participants were recruited from hospital diagnostic testing centres and may not represent the general population. Our study participants may visit the hospital more often than the general population, particularly since the majority of participants favoured a hospital-based setting for both the intervention and the evaluation survey. These findings may also imply a potential selection bias, given that the study population and setting were ambulatory-based. Nevertheless, this was our targeted population, and the goal of this study was to identify their preferences. Furthermore, participants' self-reported health status was comparable to that of the general population: 77 percent reported being in good to excellent condition, similar to the distribution observed among the general population of Canadians, where 66 percent reported being in very good to excellent condition [51]. Our sample may not represent the non-English-speaking population, who may hold different cultural beliefs that could influence one's motivation and decision to participate in education programs [52]. Yet 33 percent of our study sample was born outside of Canada and represented various non-western ethnic minority backgrounds, including African, Asian, South American and Eastern European, thereby capturing the opinions of a diverse and multicultural population. Lastly, this survey measured respondents' intentions to participate in a health education intervention and an evaluation survey based on various research design features; these intentions may not translate into actual participation if respondents were presented with the opportunity to take part in an education intervention or survey [53,54]. Nevertheless, our study findings capture a legitimate perspective (i.e., that of potential participants) on how to design health education interventions and evaluation surveys that better meet a population's preferences.
Competing interests
The author(s) declare that they have no competing interests.
Authors' contributions
EG was involved in the conception and design of the research idea, coordinated the acquisition of the data, analyzed and interpreted the data, and drafted, reviewed and revised the manuscript. CDL analyzed and interpreted the data, and drafted, reviewed and revised the manuscript. AP was involved in the conception and design of the research idea, contributed to the acquisition of the data, analyzed and interpreted the data, and drafted and reviewed the final manuscript. JC was involved in the conception and design of the research idea, coordinated the acquisition of the data, provided guidance with the analyses and interpretation of the data, and reviewed the manuscript. All authors read and approved the final manuscript.