Background
Appropriate dietary intake can improve athletic performance, enhance adaptations to training and augment recovery from exercise [
1,
2]. However, athletes have been known to consume diets that do not meet their energy and nutrient needs [
3], and a mismatch between contemporary expert recommendations and athletes’ dietary practices has previously been demonstrated [
4]. Nutrition education programs improve nutrition knowledge [
5‐
7] and higher levels of knowledge are correlated with better diet quality [
7‐
9]. Accordingly, professionals working with sports people often provide nutrition advice [
10]. Parks et al. [
11] reported that the number of dietitians employed by collegiate athletic departments has quadrupled since 2010. However, globally there is limited information regarding athletes’ access to relevant and appropriate nutrition advice; presumably, this varies according to the level of professionalism of their sport and their immediate support network. Hamilton et al. [
12] reported that elite athletes in New Zealand had higher levels of knowledge than non-elite athletes. In contrast, Andrews et al. [
13] found no differences between sub-elite and elite Australian soccer players. Trakman et al. [
14] conducted a systematic literature review on nutrition knowledge of athletes and coaches and reported a possible relationship between athletic calibre and knowledge. However, the authors concluded that, due to the heterogeneity and poor quality of Nutrition Knowledge Questionnaires (NKQs), athletes’ nutrition knowledge (and the factors that influence it) is difficult to ascertain. The poor quality of NKQs is also likely to influence researchers’ ability to accurately quantify the correlation between knowledge and dietary intake [
8,
15] and impact practitioners’ ability to evaluate nutrition education programs.
Trakman et al. [
14] noted that a key factor affecting the quality of NKQs was a lack of adequate validation. The maximum validation score of a sports nutrition knowledge questionnaire (SNKQ) used with athletes was three out of six. More recently, Furber et al. [
16] developed an SNKQ for British track and field athletes undertaking four of the six recommended validation methods; face validity testing and item analysis were not performed. Of note, the rating system used by Trakman et al. [
14] was based solely on classical test theory (CTT). The CTT framework focuses on the questionnaire as a whole. It is based on correlations and assumes that all questions are equal indicators of an individual’s nutrition knowledge [
17]. A key aspect of CTT is the use of the Cronbach’s alpha (Cα) statistic to measure internal reliability; however, Cα is only suitable for scales with 20 or fewer items and is frequently, but incorrectly, applied to much longer questionnaires [
18]. Moreover, it is not an inherent property of a questionnaire and needs to be re-assessed each time a new sample completes the tool [
18].
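To make the internal reliability statistic concrete, the sketch below computes Cronbach's alpha for a toy item-response matrix (for dichotomous right/wrong items this formula reduces to the KR-20 used later in this paper). The data and function names are illustrative, not the study's actual computation:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

    responses: one list of k item scores per respondent.
    """
    k = len(responses[0])
    # Variance of each item across respondents
    item_vars = [pvariance([r[i] for r in responses]) for i in range(k)]
    # Variance of respondents' total scores
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents on a 3-item scale (1 = correct, 0 = incorrect)
scores = [[1, 1, 1],
          [1, 1, 0],
          [0, 1, 0],
          [0, 0, 0]]
print(cronbach_alpha(scores))  # 0.75
```

Because alpha depends on the sample as well as the items, it must be recomputed for each new cohort, as noted above.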
An alternative to CTT is Rasch analysis, a technique that was first developed in education and has since been utilised to develop psychological assessment tools [
19] and health-related patient-reported outcomes (HR-PRO) [
12,
20], and more recently has been utilised to validate questionnaires that assess knowledge of the energy content of meals and balanced meals [
21,
22]. Rasch analysis offers several advantages over CTT; it allows shorter scales with multiple response formats to be developed, and because it does not rely on measures of central tendency, it is said to be more ‘stable’ across varying populations [
23]. The aim of Rasch analysis is to create a unidimensional (i.e. assessing one concept) questionnaire. During Rasch analysis it is necessary to test that the questionnaire conforms to the assumptions that (1) difficult items are less likely to be answered correctly, and (2) individuals with higher levels of knowledge are more likely to answer questions correctly. These expectations are tested by assessing a range of statistics which provide feedback on: the differences between observed and expected responses; whether the difficulty of items is consistent across participants (i.e. whether items are good at discriminating between well-scoring and poor-scoring respondents); and whether items are answered consistently on the basis of participant characteristics, such as age and gender. The present study will use a novel method that evaluates items based on both CTT and Rasch analysis. To our knowledge, no SNKQ has been validated using Rasch analysis.
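The two Rasch assumptions above follow directly from the model's item response function, in which the probability of a correct answer depends only on the difference between person ability and item difficulty on a shared logit scale. A minimal sketch, using illustrative parameter values rather than estimates from any study:

```python
import math

def rasch_prob(ability, difficulty):
    """Dichotomous Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Assumption 1: a more difficult item is less likely to be answered correctly
assert rasch_prob(0.0, 1.5) < rasch_prob(0.0, -0.5)
# Assumption 2: a more knowledgeable respondent is more likely to answer correctly
assert rasch_prob(1.0, 0.5) > rasch_prob(-1.0, 0.5)

print(rasch_prob(0.0, 0.0))  # 0.5 when ability equals difficulty
```

The fit statistics referred to above compare observed responses against these model-expected probabilities; items whose observed patterns deviate markedly are flagged as misfitting.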
In addition to issues pertaining to validation, many existing SNKQs have problems with their actual content. While 13 (out of 36) studies in the review by Trakman et al. [
14] covered 75% of the nutrition sub-sections that were deemed relevant, the comprehensiveness assessment was limited because the researchers did not assess the extent to which each topic was assessed or the quality of individual items. Indeed, many items appear to test out-dated dietary recommendations that are not in line with recently published guidelines such as the American College of Sports Medicine (ACSM), the International Olympic Committee (IOC), and the International Society for Sports Nutrition (ISSN) review on Sport Nutrition [
24] and the multiple ISSN, IOC review papers and consensus statements on nutrition and athletic performance [
1,
2,
25‐
As noted above, current guidelines state that carefully choosing the amount, type and timing of foods and fluids will optimise an athlete’s adaptations to training, performance outcomes, and recovery from exercise. They emphasise the importance of individualising nutrition, especially with regard to carbohydrate intake and hydration, and acknowledge that some supplements (e.g. creatine, caffeine, and bicarbonate) can enhance athletes’ performance, but encourage a prudent approach to supplementation [
1,
2,
25,
26,
34]. The present study has based questions on these recommendations.
Further to the issues pertaining to the quality and content of existing SNKQs, many tools have limited cultural applicability and/or focus on a single sport. This limits their usefulness for comparing the knowledge of athletes from different countries and across sports.
The aim of this study was to address the deficiencies in existing SNKQs by developing a new SNKQ that:
(1)
Has been validated using a robust methodology that includes both CTT techniques and Rasch analysis
(2)
Assesses knowledge of current consensus recommendations on sports nutrition
(3)
Assesses knowledge of all relevant aspects of sports nutrition and is generalizable to multiple sports
(4)
Is likely to be understood by individuals from various cultural backgrounds
It was hypothesized that the questionnaire would represent a significant improvement on currently available measures. From a research perspective, a high-quality nutrition knowledge measure will allow for more accurate assessment of factors that influence knowledge and a more reliable assessment of the impact of nutrition knowledge on diet quality. Moreover, for individuals working with athletes, a quality measure is likely to have practical implications, allowing for the evaluation of nutrition education programs and therefore the development of more targeted education strategies that are based on gaps in knowledge.
Discussion
The questionnaire
Due to the limitations of existing SNKQs, researchers have raised concerns regarding the accuracy of previous reports of athletes’ nutrition knowledge, and have postulated that the relationship between nutrition knowledge and dietary intake may have been misjudged [
8,
14,
41,
42]. The aim of this study was to create an SNKQ that tested awareness of current consensus recommendations, was adequately validated and could be used with athletes from a range of sports. The newly developed tool, the
Nutrition for Sport Knowledge Questionnaire (NSKQ), has 89 individual items (44 questions, some with multiple parts) covering six distinct subsections. The questionnaire takes around 25 min to complete and is comparable in length to the GNKQ [
40], which has 113 items and the SNKQ developed by Zinn et al. [
43], which has 88 items. Since the questions are based on consensus guidelines, several items assess theoretical knowledge. Questions that assess practical knowledge have also been included. The tool is more comprehensive than existing measures as it includes an alcohol sub-section and adheres to a detailed test plan. The questions are based on current guidelines. For example, rather than asking about carbohydrate requirements as a percentage of total energy intake, we have included a question on requirements in g/kg/day and specified the type (‘endurance’) and intensity (‘moderate to high’) of activity. Likewise, in contrast to other existing tools, our hydration question reflects findings that using thirst to judge fluid needs can maximise performance, and current recommendations that hydration plans should be tailored to the individual [
31,
44]. Moreover, questions on the timing of the pre-competition meal and recovery snack have intentionally been omitted. A single correct answer to these questions is difficult to define given the increasing evidence supporting the benefits of periodizing nutrition based on the goals of individual sessions and the overall training schedule [
4,
28].
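To illustrate why a g/kg/day question is more informative than a percentage-of-energy question, the toy calculation below converts a guideline range into an individual daily target. The 6–10 g/kg/day range is used purely as an example of the kind of moderate-to-high-intensity endurance recommendation cited above; the questionnaire's actual answer options are not reproduced here:

```python
def carb_target_g(body_mass_kg, low_g_per_kg=6.0, high_g_per_kg=10.0):
    """Convert a g/kg/day carbohydrate guideline range into a daily range in grams."""
    return body_mass_kg * low_g_per_kg, body_mass_kg * high_g_per_kg

low, high = carb_target_g(70)  # a hypothetical 70 kg endurance athlete
print(f"{low:.0f}-{high:.0f} g/day")  # 420-700 g/day
```

Unlike a percentage-of-energy answer, this target scales directly with body mass and is independent of total energy intake, which is why current guidelines favour it.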
Validity and reliability
The questionnaire has demonstrated face and content validity based on student-athletes and sports dietitians’ judgements. In contrast to previous tools, the content validity has been assessed quantitatively, using a CVI [
36]. Individuals who reported undertaking studies in human nutrition achieved higher scores across all sections, except alcohol, indicating that the questionnaire has good construct validity. The group with nutrition education was younger and more likely to be female and tertiary educated. In contrast to previous reports, there was no significant difference in performance based on age or gender [
41,
40]. Therefore, the variations in knowledge between groups are unlikely to be due to underlying differences in participant characteristics. Test-retest reliability was assessed based on the correlation between the test scores of individuals who completed the questionnaire twice. A limitation of this method is that motivated individuals may upskill between attempts; indeed, the average total score was higher on the second attempt. Nevertheless, overall test-retest reliability was high, and all sub-sections except ‘supplements’ achieved (or approached) adequate test-retest reliability. Participants performed most poorly on the supplement section, so it is plausible that the supplement test-retest result reflects participants guessing answers. The overall internal reliability was very high, and the internal reliability of most sub-sections (except alcohol) reached or approached the requisite 0.7 value. As expected, there appeared to be a relationship between the number of items and KR-20. Streiner [
18] recommends that KR-20 be interpreted with caution when a scale has more than 20 items; the overall scale and the macronutrient sub-section exceeded this length.
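The content validity index (CVI) mentioned above can be computed from simple proportions. The sketch below follows the common convention of counting expert ratings of 3 or 4 on a 4-point relevance scale as 'relevant'; the panel data and the exact rating scale are hypothetical, and the study's procedure may differ in detail:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: proportion of experts rating the item as relevant."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi_ave(all_ratings):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    icvis = [item_cvi(r) for r in all_ratings]
    return sum(icvis) / len(icvis)

# Toy panel: 5 experts rate 3 items on a 4-point relevance scale
panel = [[4, 4, 3, 4, 3],  # item 1: all 5 experts -> I-CVI = 1.0
         [4, 3, 2, 4, 3],  # item 2: 4 of 5 -> I-CVI = 0.8
         [2, 3, 4, 4, 4]]  # item 3: 4 of 5 -> I-CVI = 0.8
print(round(scale_cvi_ave(panel), 2))  # 0.87
```

Items with a low I-CVI are candidates for revision or removal before the questionnaire proceeds to field testing.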
Based on the Rasch analysis, the overall item/person interaction statistics were adequate, indicating compliance with the expectations that difficult questions were less likely to be answered correctly, and that individuals who performed well overall were more likely to answer individual questions correctly. The questionnaire as a whole was multi-dimensional, but with problematic items removed, each section was shown to be unidimensional. Therefore, sections can be used independently, as required. Where the whole tool is used, sub-section scores rather than the total score should be reported.
All items were written so that units and food names were generic and likely to be understood by individuals of varying cultural backgrounds; however, additional evaluation is required to confirm the functionality of the tool in groups who differ from the present cohort. The fact that country of birth did not influence scores, and the use of Rasch analysis, which produces questionnaires that are independent of the sample used for validation, give some indication that the tool is likely to also be valid in other groups.
Limitations
A limitation of this study is that we were unable to calculate response rates because we distributed the questionnaire via Facebook groups and online athlete forums, making total exposure unclear. The completion rate for step 7 (where item analysis and Rasch analysis were undertaken) was relatively low (~45%), but there did not appear to be any relationship between participant characteristics (other than sport played and country of birth) and completion rate (S2). The completion rate for step 8 (~66%) was adequate [
45]. The sample size is another potential limitation. For step 7, there were 188 responses for a 176-item measure; for step 8, there were 181 responses for a 100-item measure. The target sample size for both steps was 200. However, there is some evidence that samples as small as 30–50 are appropriate for CTT [
46]. Similarly, Chen et al. [
47] modelled Rasch analysis with varying sample sizes and found that stable results can be achieved with samples of around 100.
A limitation of the questionnaire itself is that its length may be prohibitive, especially for athletes balancing training with work or study, who are often time-poor. In addition, some items were poor discriminators. This was reflected by low item discrimination in CTT and the significant chi-square probability of the micronutrient, macronutrient and alcohol sections. For several questions, the poor item discrimination can be explained by the item’s relatively high or low difficulty index: when a question is answered correctly (or incorrectly) by a large proportion of individuals, the overall range of responses is minimal, making it hard to achieve a meaningful difference between high-scoring and low-scoring individuals. Many of these items were kept because they tested important concepts, providing valuable feedback on gaps in knowledge. Item discrimination is worth re-evaluating using larger samples composed predominantly of athletes (excluding nutrition students). Likewise, future studies may focus on creating a short-form tool that can be used for rapid assessment of nutrition knowledge. A short-form tool would be useful in research settings where the correlation between knowledge and other factors is being assessed. It may also have utility in the elite setting as a ‘screening’ tool for professionals working with athletes, i.e. to identify individuals who need nutrition education and extra support.
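The difficulty and discrimination statistics discussed above can be sketched as follows: difficulty is the proportion of respondents answering the item correctly, and discrimination is computed here as the corrected item-total (point-biserial) correlation between the item and the score on the remaining items. The toy data are illustrative only:

```python
from statistics import mean, pstdev

def difficulty_index(item_scores):
    """Proportion of respondents answering the item correctly (higher = easier)."""
    return mean(item_scores)

def discrimination(item_scores, rest_scores):
    """Point-biserial correlation between item score and score on the remaining items."""
    mx, my = mean(item_scores), mean(rest_scores)
    cov = mean((x - mx) * (y - my) for x, y in zip(item_scores, rest_scores))
    return cov / (pstdev(item_scores) * pstdev(rest_scores))

# Toy data: 1/0 responses to one item, and each respondent's total on the other items
item = [1, 1, 1, 0, 0, 0]
rest = [9, 8, 7, 4, 3, 2]
print(difficulty_index(item))                # 0.5 -> moderate difficulty
print(round(discrimination(item, rest), 2))  # 0.95 -> strong discriminator
```

Note that when nearly everyone answers an item correctly (or incorrectly), pstdev(item_scores) shrinks toward zero and the correlation becomes unstable, which mirrors the explanation given above for the poorly discriminating items.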
At present, the NSKQ has only been validated in an Australian population. Future studies could confirm its reliability and validity in other regions.
Strengths
A key advantage of the questionnaire is that it has been validated using a robust methodology. To our knowledge, this is one of very few NKQs to be assessed against the Rasch model. Likewise, it is the only tool to assess content validity quantitatively and to assess distractor utility; this type of analysis is valuable because a distractor that is too obviously wrong significantly increases the chances of respondents guessing the correct answer. Importantly, the authors have considered the limitations of the statistics and accordingly made decisions that focused on the quality of the overall tool. In addition, the questions (and their correct answers) are based on the most recent evidence and recommendations in sports nutrition; they are generalizable to most sports and enable comparison across disciplines. The tool uses food terms and measurement units that are likely to be understood by athletes from a range of countries. Moreover, the tool is detailed and can therefore identify gaps in knowledge. The NSKQ has been designed to be administered online and can provide participants with immediate feedback on the correct answers. This is likely to be especially helpful for athletes who do not have access to professional support. The online format also provides a unique opportunity to direct participants to reputable and relevant resources based on their results.
Conclusions
An 89-item general and sports nutrition knowledge questionnaire with six distinct sub-sections has been developed and validated using multiple relevant methods. Three (weight management, sports nutrition, supplements) of the six sub-sections fit the Rasch model. The steps the researchers have taken to ensure the tool is current and adequately validated were robust, and the questionnaire represents an improvement on available measures. Coaches, scientists and nutrition counsellors will benefit from this tool because it will allow them to target their education based on gaps in athletes’ knowledge. In a team sports setting, the NSKQ may also be useful as a screening tool, to identify players who require additional educational support. Widespread utilisation of the tool in the long term will allow for more accurate evaluation of nutrition knowledge, education programs and comparisons across athletes of varying genders, ages, education levels, and calibres.