In the past few decades, the use of computerized neuropsychological assessment devices (CNADs) has received increasing attention in both clinical and research settings, as they are considered an advantageous alternative to conventional examiner-based tests [1]. Interest in CNADs has grown sharply in response to the COVID-19 pandemic, which has forced significant changes in clinical practice; indeed, one of the properties of CNADs is that they allow remote encounters between clinicians and patients [see 2 for a review]. Furthermore, because neuropsychological tests on digital platforms are sensitive and cost-effective, several studies have already used these tools with many clinical populations [3]. However, to date, "paper-and-pencil" tests remain the gold standard for neuropsychological assessment [4, 5]. The American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology, in a joint position statement, defined CNADs as any tool that administers, evaluates, or interprets tests of brain function using a computer, portable devices (such as digital tablets or smartphones), or any other digital interface instead of a human examiner [
3]. However, this position statement, which emphasizes the administration of cognitive tests by a computer "instead of" a human examiner, can be extended to all computerized neuropsychological platforms, encompassing a broad and ever-growing spectrum of technology, and therefore includes tools that may allow better stimulus presentation and data recording through computer presentation (such as immersive and non-immersive Virtual Reality) [4]. As mentioned above, CNADs make it possible to present, with great accuracy, a wider range of tasks, including, for example, those involving multi-tasking, divided attention, processing speed, and response times across multiple measurements. Furthermore, CNADs provide new parameters that are more easily and quickly obtained [for instance, the recording of response latencies; see 6], leading to greater accuracy in detecting cognitive changes [7‐9] in both clinical and research fields [10, 11]. Finally, computerized tests can be administered to many subjects in less time and at lower cost; the evaluation of responses can be more precise, and automatic administration and scoring procedures reduce the probability of human error [
3]. On the other hand, in a conventional neuropsychological assessment there can be considerable variation between different examiners, or within the same examiner across an assessment session, in the manual use of tools (such as a stopwatch) or in the manual presentation of material (such as stimulus cards) [12]. Furthermore, whereas in an examiner-centred paper-and-pencil assessment the patient interacts with an examiner who presents stimuli, records responses, notes key behavioural observations, and gauges the patient's level of effort and motivation, in CNADs the patient interacts with a computerized test station or a tablet through one or more input/output devices (keyboard, voice, mouse, joystick, touchscreen, head-mounted display), sometimes without any supervision or observation. Additionally, computerized assessment does not permit examiners to introduce flexibility into their evaluation or to provide structured encouragement to the examinee. For these reasons, the use of CNADs may require technical skills whose absence can significantly affect relevant evaluation parameters. Patients, especially the elderly, may have limited familiarity with computer interfaces, as well as negative attitudes towards, and anxiety about, computers [13]: as a result, their performance may differ significantly from paper-and-pencil measurements [14‐16]. On these grounds, computerized tests are not simply the replacement of paper and pencil with a computer screen and electronic response capture, and they cannot be considered tout court comparable to an examiner-administered evaluation [17‐19]. Rather, a traditional test administered by computer becomes a new test, different from the conventional one both qualitatively and technically. Indeed, several studies have shown substantial differences between computerized measurements and the corresponding examiner-administered tests in different samples [
20‐23]. In addition, the "replication crisis" [24] has increased the need for appropriate and robust statistical inference: when examiner-based tests are proposed in a computerized version, new psychometric data should be provided according to the same standards that apply to psychometric tests. It cannot be assumed a priori that the normative data for an examiner-administered test apply equally well to a computerized version of the same test, given the change in administration method and differences in patients' familiarity with computers. Thus, since a computerized test adapted from a conventional test is not simply a slightly different format of an existing test, it becomes essential to provide equivalence testing between the paper-and-pencil and computerized versions of that test. New normative data for computerized tests must be collected whenever equivalence with the paper-and-pencil version is not demonstrated. Several tests have already been adapted for computerized presentation, such as the Montreal Cognitive Assessment – MoCA [25], the Clock Drawing Test, the Pentagon Drawing Test [26], the Rey Complex Figure copy task [27], and the Bells Cancellation, Line Bisection, and Five Elements Drawing Tests [28]: these tests are, of course, critical in the assessment of cognitive functioning.
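As a purely illustrative aside (not part of any study cited above), equivalence between two versions of a test is often evaluated with the two one-sided tests (TOST) procedure: equivalence is claimed when the mean difference between paired scores can be shown to lie within pre-specified bounds. A minimal sketch in Python, assuming paired scores from the same participants and a normal approximation; all scores and bounds below are hypothetical:

```python
import math
from statistics import NormalDist, mean, stdev

def paired_tost(scores_a, scores_b, low, high):
    """Two one-sided tests (TOST) for equivalence of paired scores.

    scores_a / scores_b: scores of the same participants on the two
    test versions; low / high: pre-specified equivalence bounds on the
    mean difference. Uses a normal approximation (adequate only for
    moderately large samples). Returns the TOST p-value: equivalence
    is claimed when it falls below the chosen alpha.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    se = stdev(diffs) / math.sqrt(len(diffs))  # SE of the mean difference
    z_low = (mean(diffs) - low) / se    # tests H0: mean difference <= low
    z_high = (mean(diffs) - high) / se  # tests H0: mean difference >= high
    nd = NormalDist()
    return max(1.0 - nd.cdf(z_low), nd.cdf(z_high))

# Hypothetical paired scores on a paper-and-pencil and a computerized version.
paper    = [50, 48, 52, 49, 51, 50, 47, 53, 50, 49]
computer = [49, 48, 53, 49, 50, 51, 47, 52, 50, 50]
p = paired_tost(paper, computer, -3.0, 3.0)  # equivalence bounds of +/-3 points
# Here p is far below 0.05, supporting equivalence within +/-3 points.
```

Note that TOST reverses the usual logic of significance testing: a small p-value supports equivalence, whereas a conventional non-significant difference test does not, by itself, demonstrate that two versions are interchangeable.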
Among neuropsychological patients, memory problems are among the most frequently reported symptoms and, given growing life expectancy, differentiating early neurodegenerative disorders from normal aging is certainly one of the greatest challenges for the future. Notably, despite the extensive literature supporting the use of recognition memory paradigms to improve differential diagnosis among neurological patients, depressed patients, and healthy controls [29], the clinical assessment of recognition memory is still limited by the paucity of validated clinical tools. For the Italian population, only the verbal and non-verbal recognition memory test (RMT) [30] and the visual long-term recognition memory test [31] are available. The RMT is a relatively fast screening tool that may be useful for the early detection of memory disorders; two parallel versions are also available for multiple testing [32]. However, despite its promising features, the original examiner-centred version of the RMT may be prone to human error: clinicians are required to control for several factors, namely the duration of stimulus presentation and the recording of patients' responses and behaviours [see 30, methods section]. For all these reasons, we set out to validate a computerized version of the verbal and non-verbal recognition memory test (RMT—Form A) for words, unknown faces, and buildings [32], comparing it with the original paper-and-pencil test. To this end, a computerized version of the RMT was built, and both the paper-and-pencil and the computerized versions were administered to different age-balanced groups of healthy participants with medium–high education. Furthermore, in order to evaluate the possible impact of different neurological conditions on patients' ability to interact with the computer interface, the two versions of the RMT were also administered to a small sample of neurological patients with mixed aetiologies.