Background
Dementia is a growing problem worldwide, both in the number of afflicted individuals and in the cost of their care. In the US alone, an estimated 5.4 million individuals have Alzheimer’s disease (AD), at a healthcare cost of $236 billion [1]. An additional 3–22% of those over 60 years of age may meet criteria for mild cognitive impairment (MCI) [2-5].
Evidence is mounting, especially for AD, that early treatments and potential new disease-modifying therapies are most successful in the earliest stages of disease [6,7]. Unfortunately, patients with cognitive impairment, MCI, and early dementia are typically not identified or diagnosed in a timely fashion [8,9], too late to take full advantage of these medications. Therefore, improving the early identification of cognitive impairment must be a priority. Conversations between primary care providers and their patients and families regarding cognitive changes need to start much earlier in the disease course. However, there are many barriers to achieving this goal. Many patients reside in regions with few resources and with limited dementia-knowledgeable healthcare providers, clinical staff, or advocates. Providers may lack the training or experience to know how to screen those with cognitive complaints, which tools to use, or how to administer them. More than 40% of patients with mild dementia are not detected and diagnosed by their healthcare provider [10-14]. In addition, many patients with MCI or early dementia have impaired insight [15] and do not seek early medical intervention, typically presenting to their family doctor an average of 3–4 years after cognitive symptoms are first noticed by others [9,16,17]. Some family members also explain away the patient’s symptoms, reluctant to accept that the cognitive changes are meaningful. Other barriers include limited Medicare reimbursement for brief cognitive screening evaluations [18]. Providers and health systems may also have decided that routine cognitive testing requires too much time or too many personnel resources.
The use of easily administered, brief, reliable, validated, practical, and inexpensive screening tools is critical to overcoming the many obstacles to identifying early cognitive changes in individuals. Screening Americans for cognitive impairment at their Medicare Annual Wellness Visit has been encouraged [19] and may provide a baseline prior to potential future decline in cognitive abilities. Every individual has different natural abilities and so will have a different baseline score on cognitive testing. There are many excellent cognitive screening tests with good sensitivity and specificity that can differentiate demented subjects from normal individuals [20-35]. They are often underutilized due to the personnel time and resources needed to administer them [36]. Many have not been evaluated for efficacy in MCI detection or have proven insensitive in differentiating normal aging from MCI [37-41]. Informant-based assessments [42-45] may be limited by the lack of a readily accessible informant. Simpler cognitive tests that measure one or two cognitive domains, such as animal fluency, list learning tests, or the Mini-Cog test [46], have been advocated and used in primary care settings as a cognitive screen to be followed by more sensitive tests if impairments are noted [47].
We developed the Self-Administered Gerocognitive Examination (SAGE), a valid and reliable 22-point pen-and-paper multidomain cognitive assessment tool, to reduce the typical delay in identifying individuals with MCI or dementia (available for download at sagetest.osu.edu) [48]. Our 2010 paper [48] describes in detail the reliability and validity study of the SAGE test. It established inter-rater and test-retest reliability and the equivalence of the four different versions of the test. SAGE also correlated well with other cognitive measures of the same construct and showed high sensitivity and specificity in distinguishing between normal, MCI, and dementia groups. The self-administered format, with age and education norms and four equivalent interchangeable forms, allows SAGE to be given in almost any setting [49]. It takes on average 13 min to complete and 30–60 s to score. It is sensitive enough to distinguish between MCI and dementia conditions and has been compared with other commonly used office-based multidomain brief cognitive tests [48,50].
In recent years, as more individuals gain access to and become comfortable with the Internet, they are accessing medical information online from wherever they live in the world. Online information provides critical knowledge to consumers who wish to improve their health. Many people worry greatly about developing dementia and AD, so the time is right for new digital solutions to cognitive testing. Computerized cognitive testing has been available for years. Most instruments were developed as stand-alone formal neuropsychological test batteries designed to aid diagnosis, particularly for patients with subtle or atypical patterns of cognitive impairment [51-53]. Some computerized tests have been shown to distinguish between MCI and normal subjects and demonstrate potential for use in a primary care setting, with completion times of 30 min or less [54-56]. Most do not have equivalent paper versions that would allow flexibility for individuals taking the test. Digital translations of brief paper cognitive assessments have been developed and require validation in their own right [57].
In the present study, to provide a practical digital solution to early cognitive detection, a digital version of the paper SAGE test (eSAGE) made for tablet use is evaluated. The questions used in SAGE and eSAGE are identical. We evaluated eSAGE by comparing it to the validated SAGE, Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and a battery of neuropsychological tests. We measured the ability of eSAGE to detect MCI and early dementia against standard clinical assessment and neuropsychological evaluation. Having a validated online cognitive screening test may be very helpful for individuals to identify their cognitive issues, to prompt physician evaluation earlier than normally occurs, and to provide reassessments of their cognitive status.
Discussion
There are clinical and research advantages to having both validated paper and electronic formats of the same test. The brief office- or bedside-based cognitive tests and the longer traditional neuropsychology batteries are both typically administered with pen and paper. These paper tests have inherent familiarity and understandability for the clinician, clinical researcher, and neuropsychologist, and their face validity is high. A validated digital cognitive assessment tool with an equivalent validated paper version carries over that familiarity. Equivalently scored paper and digital formats offer the flexibility to give the test in the office, in the community, using mobile technology, or on a home tablet. This flexibility is enhanced when both formats are self-administered. The practicality of using either paper or electronic versions of a self-administered cognitive assessment tool may increase the number of individuals evaluated for early identification of cognitive impairment, may ease repeated testing to monitor cognitive change over time, and may potentially support prediction of progression.
The digital format has its own set of unique advantages. eSAGE can time responses, which may provide enhanced functionality: it can not only automatically time how long it takes to answer a question but also determine how often the individual went back to previous pages or corrected answers. We plan to evaluate the utility of these metadata in the future. eSAGE will also allow individuals to evaluate themselves online and receive their remotely scored results online, which they can then deliver directly to their healthcare providers. This might make it easier to obtain a baseline cognitive assessment, or a follow-up evaluation to assess for cognitive changes, in a timely fashion prior to an appointment. Individuals taking eSAGE are instructed to provide their baseline and subsequent test scores to their physician for monitoring progress. Obtaining results of their cognitive status online may be particularly useful to individuals living in underserved or rural regions with few resources who do not have easy access to dementia-knowledgeable healthcare providers or advocates. The self-administered digital format may also help reduce the stress people typically experience when given pen and paper tests by a healthcare worker in a doctor’s office. The digital format also allows providers, researchers, and individuals to store results electronically, avoiding the need to store (or risk losing) paper forms. Digital transfer of information is fast and reliable compared to paper.
Since the identification of the pre-dementia state of MCI was one of the driving forces behind the development of SAGE, in our study we carefully ensured that a broad range of eSAGE scores in the normal to early dementia range (generally eSAGE scores of 10–22) would be measured against the other neuropsychological tests and against SAGE. The age range and sex distribution of our subjects are typical of the population at risk for MCI and dementia. Our well-educated cohort, typical of individuals willing to participate in studies, does limit what we can conclude about those who are less educated using eSAGE.
Tablet-based eSAGE correlates well with the 7-item total of a battery of neuropsychological tests and performs similarly to the validated SAGE. As would be expected, SAGE and eSAGE scores are highly correlated with each other, and the two have similar correlations with MMSE and MoCA. eSAGE has not been compared to other computerized tests. However, based on our results, eSAGE has the qualities to be useful in primary care settings as a brief computerized cognitive assessment tool. It does a fair job of differentiating MCI from both normal and dementia subjects. Its classification accuracy compares well to that of MoCA, with a comparable AUC, better specificity, and worse sensitivity. Effect sizes comparing score means are also similar between eSAGE and MoCA. eSAGE has a practical advantage in the primary care setting over most other computerized and paper tests in that it has both paper and tablet versions.
When translating a written paper test into a digital format given on a tablet device, one cannot assume that the resulting digital cognitive test is identical to the same test administered on paper. There have recently been digital translations of traditional brief pen and paper cognitive tests [57,76]. It is clear from these attempts that the digital translation needs to be validated separately from the pen and paper version [57]. Factors that can produce differences between computerized cognitive test results and those obtained with pen and paper include the individual’s experience and familiarity with digital technology. In addition, administering a question aurally by a human examiner (with paper recording) versus visually by computer involves separate brain pathways and may lead to different scores for the exact same question.
In this study, we were therefore pleased to note that eSAGE not only correlated well with SAGE but also showed no scale bias compared to it. Since SAGE is self-administered, there are no auditory commands or requests, only visually read questions and paper responses. eSAGE on the tablet is likewise performed by visually reading the questions, with no aurally presented items. The main difference between the two tests is the individual’s comfort and experience with the technology and the tablet. In the population tested, subjects performed on average one point worse on eSAGE than on SAGE, whether their scores were normal, mildly impaired, or moderately impaired. When we separated out the subjects (47%) who had never used smartphones or tablets, as expected they experienced more difficulty with eSAGE and scored, on average, 1.65 points lower, again whether their scores were normal, mildly impaired, or moderately impaired. Those with experience using tablet or smartphone devices also scored worse on eSAGE, but only by 0.83 points, again without a scale bias. Since an individual’s digital proficiency can be difficult to determine, we suggest adding one point to everyone’s digital score to obtain an equivalent paper score. Digital unfamiliarity is likely to fade over time, as newer generations will have more exposure to and proficiency with digital devices.
When we divided our sample based on clinical diagnosis, we found statistically significant differences between both eSAGE and SAGE mean scores for normal subjects versus MCI subjects, and likewise for MCI subjects versus dementia subjects. Cohen’s effect size d values for eSAGE between the normal and MCI groups (1.0), normal and dementia groups (2.82), normal and cognitively impaired (MCI + dementia) groups (1.91), and MCI and dementia groups (1.82) are all considered large and were slightly higher than those of SAGE. This suggests that eSAGE, like SAGE, does well in differentiating both normal from MCI groups and MCI from dementia groups.
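For readers who wish to reproduce effect size calculations of this kind, Cohen’s d with a pooled standard deviation can be sketched as follows. The scores shown are hypothetical illustrations only, not data from this study, and the function name is ours.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d between two independent groups using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased (n-1) sample variances
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical eSAGE-style scores for illustration (not study data)
normal = [20, 19, 21, 18, 22, 20]
mci = [16, 15, 17, 14, 16, 15]
d = cohens_d(normal, mci)
```

By convention, d of roughly 0.8 or more is considered a large effect, which is the benchmark applied to the group comparisons above.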
eSAGE is not diagnostic for any specific condition. However, in our sample, eSAGE had a high level of sensitivity and specificity in distinguishing normal from MCI and mild dementia. This self-administered instrument can be utilized to identify potentially clinically relevant cognitive changes that would then warrant further investigation. What one wants from a case finding tool is high specificity: one is more willing to accept false negatives (cognitively impaired who test as normal) than to risk false positives (normal individuals who test as cognitively impaired). eSAGE combines high specificity with reasonable sensitivity and would work better as a case finding tool than as a screening tool. Over time, if the condition is progressive, the false negatives will convert to true positives, and these can be picked up by repeat testing. As might be expected, since subjects performed one point worse on eSAGE than on SAGE, we found a cutoff score of 16 and above for normal subjects taking eSAGE versus 17 and above for normal subjects taking SAGE. Consistently, the best cutoff score for specificity and sensitivity for SAGE in our current sample is the same cutoff value found in our initial validity study [48]. For eSAGE, in differentiating dementia from nondementia, a score of 13 or less for dementia subjects gave the best sensitivity and specificity. Evaluating nondementia subjects alone with eSAGE, a score of 17 or higher for normal subjects provided the best sensitivity and specificity. This suggests that MCI subjects would typically fall in the range of 14–16 on eSAGE. Cutoff total scores are useful as guidelines. Clinicians may gain more clues to the etiology of cognitive loss by looking at the specific pattern of cognitive deficits on instruments such as eSAGE. Additional helpful information may be obtained from the self-report items in the nonscored part of eSAGE. If the patient scores well on eSAGE, the clinician may determine that no further evaluation is indicated, potentially saving costs for the patient and time for the physician. For patients scoring less well or borderline on eSAGE, the practitioner may wish to continue with a staged screening process, such as assessment with an informant screen or further evaluations. We hope eSAGE will allow earlier identification of cognitive impairment so that proper diagnosis and treatment may begin sooner.
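The cutoff logic described above, where scores at or below a threshold flag possible impairment against a clinical reference standard, can be sketched as below. The scores, diagnoses, and function name are illustrative assumptions, not study data.

```python
def sens_spec(scores, impaired, cutoff):
    """Classify scores <= cutoff as impaired; return (sensitivity, specificity).

    scores: test scores; impaired: parallel booleans from the clinical
    reference standard (True = cognitively impaired).
    """
    tp = sum(1 for s, imp in zip(scores, impaired) if imp and s <= cutoff)
    fn = sum(1 for s, imp in zip(scores, impaired) if imp and s > cutoff)
    tn = sum(1 for s, imp in zip(scores, impaired) if not imp and s > cutoff)
    fp = sum(1 for s, imp in zip(scores, impaired) if not imp and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical scores and diagnoses for illustration (not study data);
# the eSAGE convention above treats 16 and higher as normal.
scores = [20, 18, 17, 16, 15, 14, 13, 12]
impaired = [False, False, False, False, True, True, True, False]
sens, spec = sens_spec(scores, impaired, cutoff=15)
```

Sweeping the cutoff over the score range and recording each (sensitivity, specificity) pair is exactly how the ROC curve and its AUC, discussed above, are constructed.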
Limitations
Specific limitations related to the SAGE test have been described in previous publications [48,50].
There are also significant limitations to this study. We extensively studied only 66 subjects, the majority of whom were Caucasian and highly educated. Some caution is needed in interpreting results for those with below high school education and for minorities, as they were few in number and not fully represented in our sample. Low-educated subjects have high misclassification rates with other commonly utilized cognitive screening tasks [77]. Results may also be limited by where the patients were recruited and the range of their cognitive abilities. We attempted to get a broad cross-section of a clinic and a community population. The ADCS-ADL scale, initially designed as an informant-reported measure, was used as a self-report in some subjects when their study partner could not be interviewed. While none of those subjects were believed to have dementia, this could have impacted their ADL scores. In order to test a wide distribution of eSAGE scores, we included eSAGE scores in the normal, MCI, and mild-to-moderate dementia range. Additional longitudinal studies will be very important for evaluating the ability of eSAGE to accurately measure cognitive change over time. This will help determine whether it can identify conversions from normal to MCI, or from MCI to dementia. Further research is also required to determine whether eSAGE has utility in identifying early cognitive decline in specific neurocognitive conditions such as AD, vascular dementia, Parkinson’s disease dementia, dementia with Lewy bodies, frontotemporal dementia, endocrine/metabolic/toxic/oncologic conditions, sleep apnea, or acute confusional states.
Thus far, no large randomized trial has demonstrated a correlation between screening and improved outcomes; such a trial would be needed to gain widespread acceptance of screening programs. While this study primarily examined correlations between eSAGE and other neuropsychological tests, it is clear that eSAGE will be used primarily by individuals with cognitive concerns or complaints as a way to assess cognition and aid diagnosis. Unless provided by a physician, eSAGE would typically not be taken to obtain a cognitive baseline in the absence of a cognitive concern. A self-administered, digital test like eSAGE would ease the time burden on physicians who wish to incorporate yet another screening evaluation into their clinic setting. The advent of disease-modifying treatments may further justify such screening. Positive screens, however, also impact patients and families, who may worry about their future, potential stigma, long-term care, insurance issues, and loss of employment, driving, and independence. A staged screening approach that reduces the number of false-positive screens would improve the comfort level of physicians and patients with cognitive screening programs.
Acknowledgements
We thank all subjects for their participation in the study. We are grateful to Jennifer Icenhour (Department of Neurology, The Ohio State University Wexner Medical Center) for assisting with research coordination and Aash Bhandari (Department of Neurology, The Ohio State University Wexner Medical Center), Jacob Goodfleisch (Department of Physical Medicine and Rehabilitation, The Ohio State University Wexner Medical Center), and Jennifer Icenhour for psychometric testing. We thank BrainTest, Inc. for providing the length of time it took for participants to complete eSAGE.