Introduction
A major objective of clinical trials, particularly randomized controlled trials (RCTs), is to identify which of two or more therapies is most effective. However, people often differ in their response to the same intervention. When a treatment that works for most people based on an RCT is not effective for a particular patient, the next step in clinical practice is typically to try something else. The next choice in this “trial and error” process would, ideally, be informed by evidence. However, clinical trials in which individuals are randomized to sequences of treatment strategies are seldom used [1].
An alternative to an idiosyncratic series of choices is the use of decision rules, such as those embodied in guidelines developed by medical professional organizations, which combine expert opinion, behavioral, psychosocial, and biological theories, and observational studies to formulate adaptive treatment algorithms, or adaptive interventions (AIs) [2, 3]. While clinical guidelines may reduce variability from practice to practice, they do not resolve the scientific uncertainty about which sequence is actually optimal; the recommendations themselves become the subject of potential future research.
Experimental trial designs have been proposed for the development and optimization of treatment sequences. One such design is the Sequential Multiple Assignment Randomized Trial (SMART) [2, 4]. Adaptive interventions are treatment algorithms in which treatment is sequentially modified over time based on an individual’s response. The rationale is that by adjusting the treatment type and level as a function of time-dependent measures, such as response to past treatment, the long-term outcome is optimized [2, 5].
Most experience with SMARTs has been limited to mental health and the behavioral sciences [2, 4] and to Phase 2 trials in oncology [6]. SMART is particularly attractive in cancer therapy, where sequential treatment based on intermediate response is already well established. However, SMART has potential value for scientifically addressing problems in a wide range of contexts, including the use of technology such as telemedicine to encourage health-promoting behaviors [7].
Telemedicine is the provision of healthcare services and the exchange of healthcare information across distances using information and communication technology [8, 9]. It is used in multiple areas of clinical practice, e.g., surgical practice [10-12], management of chronic diseases [13], addiction management [14], and palliative care [15, 16]. The need for and utilization of telemedicine have accelerated significantly as a result of the ongoing coronavirus disease 2019 (COVID-19) pandemic, during which many in-person clinical activities have been deferred or suspended [17, 18]. What is becoming evident in this field is that “one size does not fit all”. Studies have shown that telemedicine interventions are more likely to have a positive effect on users’ self-efficacy, knowledge relevant to their condition, and behavioral and clinical outcomes [19]. However, not all patients are receptive to a particular mode of delivery. A key to establishing the effective and cost-effective application of telemedicine is understanding how these approaches fit into real-world care, in particular as part of a sequence that maximizes the proportion of patients who ultimately respond to good effect.
With this in mind, we sought to examine the value of a SMART design compared to an RCT for two telemedicine strategies to support titration of insulin therapy for patients with Type 2 Diabetes Mellitus (T2DM) who are new to insulin: (1) a largely self-contained smartphone app, Diabetes Pal [20], and (2) a nurse-based telephone consultation service, SingHealth Polyclinics’ (SHP) Insulin Initiation Telecare Program (see the Methods section for details about these two telemedicine modalities). For comparability, the SMART and RCT designs were constructed to allow comparison of various sequences of the two telemedicine strategies. The basis for this comparison is microsimulation using data derived from a pilot clinical trial of Diabetes Pal [20]. We sought to demonstrate the impact of the two trial designs on improvement in chronic blood glucose control, as measured by change in glycated hemoglobin (HbA1c), and on trial cost for the study population. In sensitivity analysis, we examined how these measures of value were affected by various aspects of trial design, including the operating characteristics of the measure of responsiveness to initial treatment used to determine whether to continue or switch treatment.
Discussion
In this study, we examined the value of the SMART design relative to a comparable RCT design of two telemedicine interventions for insulin initiation: a largely self-contained smartphone app [20] and a nurse-based telephone consultation service. The designs were comparable in that both aimed to evaluate the optimal sequencing of these two interventions, including the potential for combining them. We performed this evaluation using microsimulation drawing on empirical data from a prior conventional trial. Simulation allowed us to perform sensitivity analysis of how diabetes control (as assessed by HbA1c) and trial costs were affected by various aspects of trial design, including the operating characteristics of the intermediate measure used in the SMART and RCT designs to continue or switch treatment. It should be noted that the RCT design used as the comparator was unconventional, involving both multiple arms and treatment switching based on interim assessment of responsiveness to the initial treatment.
While both designs provide information on the optimal sequencing of therapies, we demonstrated some notable benefits of SMART compared to the RCT. First, from the perspective of the trial population, the SMART design had consistently smaller variance in the mean HbA1c per AI, which was especially evident at smaller sample sizes, at approximately equivalent cost. For the same sample size, the SMART design had a higher probability of identifying the best AI.
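As a rough illustration of why per-AI variance tends to be smaller under SMART at the same total sample size, the Monte Carlo sketch below (with hypothetical response rates, outcome means, and a simple inverse-probability-weighted estimator; not the actual simulation model of this study) compares the standard error of one AI's estimated mean outcome under the two designs. In a SMART, first-stage responders are consistent with both AIs that share their initial treatment, so more subjects contribute to each AI estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

def ai_mean_smart(n):
    # IPW estimate of the mean outcome of the AI "start A; switch non-responders"
    # from one simulated SMART of n subjects (illustrative outcome model)
    first_a = rng.random(n) < 0.5                 # first-stage 1:1 randomization
    m = int(first_a.sum())
    resp = rng.random(m) < 0.5                    # hypothetical response rate to A
    switch = rng.random(m) < 0.5                  # second-stage 1:1 randomization
    consistent = resp | switch                    # responders fit both A-starting AIs
    w = np.where(resp, 2.0, 4.0)                  # inverse randomization probabilities
    y = np.where(resp, rng.normal(1.0, 0.5, m), rng.normal(0.6, 0.5, m))
    return np.sum((w * y)[consistent]) / np.sum(w[consistent])

def ai_mean_rct(n):
    # Same AI under the comparator RCT: subjects pre-randomized to one of
    # four sequences, so only about n/4 contribute to this AI's estimate
    m = int(rng.binomial(n, 0.25))
    resp = rng.random(m) < 0.5
    y = np.where(resp, rng.normal(1.0, 0.5, m), rng.normal(0.6, 0.5, m))
    return y.mean()

smart = np.array([ai_mean_smart(200) for _ in range(3000)])
rct = np.array([ai_mean_rct(200) for _ in range(3000)])
print("SE under SMART:", smart.std(), "SE under RCT:", rct.std())
```

Both estimators target the same AI mean; the SMART's larger effective per-AI sample size is what drives its smaller standard error in this sketch.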
Another advantage of SMART is that the design offers the potential to personalize treatment sequences by evaluating features predictive of responsiveness by treatment order. In the present simulation study, this feature of SMART was not examined, as subjects were simulated as identical with regard to all features except responsiveness to one intervention or the other. However, there is a sizable statistical literature that offers methodologies (e.g., Q-learning) for such personalization as a secondary analysis of SMART data [5, 27]. This aspect can be pursued in simulations as important future work.
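As a concrete sketch of how Q-learning proceeds on two-stage SMART data, consider the minimal example below. All variable names, coefficients, and the linear outcome model are hypothetical, chosen only to illustrate the backward-induction mechanics of the method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated two-stage SMART: treatments a1, a2 coded -1/+1, baseline feature x
x = rng.normal(size=n)
a1 = rng.choice([-1, 1], size=n)
a2 = rng.choice([-1, 1], size=n)
# Hypothetical outcome (higher is better); the stage-2 effect depends on x
y = 1.0 + 0.5 * x + 0.2 * a1 + a2 * (0.3 + 0.8 * x) + rng.normal(scale=0.5, size=n)

def ols(X, y):
    # Ordinary least-squares coefficients
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Stage 2: fit Q2(x, a1, a2) = b0 + b1*x + b2*a1 + a2*(b3 + b4*x)
X2 = np.column_stack([np.ones(n), x, a1, a2, a2 * x])
b = ols(X2, y)

# Optimal stage-2 rule: choose a2 = sign(b3 + b4*x), which adds |b3 + b4*x|
blip = b[3] + b[4] * x
y_tilde = X2 @ b - a2 * blip + np.abs(blip)   # pseudo-outcome under optimal a2

# Stage 1: regress the pseudo-outcome on (x, a1) to compare first-stage options
X1 = np.column_stack([np.ones(n), x, a1])
g = ols(X1, y_tilde)
print("stage-2 tailoring coefficients:", b[3], b[4])
print("stage-1 treatment effect:", g[2])
```

The x-dependent stage-2 coefficient is what personalizes the second decision: subjects with different x values get different recommended a2, and the stage-1 comparison accounts for optimal later behavior.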
In sensitivity analysis, the observed benefits were robust. However, we did note that the value of both designs depended on the threshold value used to define response to treatment at the end of the first stage. Average HbA1c control for trial subjects was optimal at an intermediate threshold value: set too low, subjects who were unresponsive to their initial treatment were incorrectly maintained on an ineffective therapy; set too high, subjects who were responsive to initial therapy were incorrectly switched away from an effective therapy. This suggests that the sensitivity and specificity of the threshold value are important parameters to consider in SMART design, and that the value of the design can be much diminished if the first-stage evaluation does not have good operating characteristics.
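This trade-off can be illustrated numerically. In the sketch below, first-stage HbA1c reductions for responders and non-responders are drawn from two overlapping normal distributions (purely illustrative values, not estimates from the trial data); moving the threshold trades one misclassification error against the other:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical first-stage HbA1c reductions (percentage points): responders
# improve more on average than non-responders, with overlap between groups
responder = rng.normal(loc=1.0, scale=0.5, size=n)
non_responder = rng.normal(loc=0.2, scale=0.5, size=n)

def misclassification(threshold):
    # Fraction of responders switched off an effective therapy (1 - sensitivity)
    # and fraction of non-responders kept on an ineffective one (1 - specificity)
    switched_responders = float(np.mean(responder < threshold))
    kept_non_responders = float(np.mean(non_responder >= threshold))
    return switched_responders, kept_non_responders

for t in (0.0, 0.6, 1.2):
    fs, fk = misclassification(t)
    print(f"threshold {t:.1f}: responders switched {fs:.2f}, non-responders kept {fk:.2f}")
```

An intermediate threshold balances the two errors; an extreme threshold drives one error rate near zero at the cost of making the other large, which is the pattern observed in our sensitivity analysis.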
Most clinical trials aim to conduct formal hypothesis tests in order to determine the superior intervention. However, in the case of telemedicine, it may often be of greater interest to determine whether a cheaper or less burdensome intervention (e.g., the app) is non-inferior to an established but more expensive intervention (e.g., the nurse-based service). Such non-inferiority testing methodologies have been applied to conventional RCTs for many years [28]. Very recently, non-inferiority testing methods [29], along with free web-based software [30], have also been developed in the SMART design context. The availability of such methodology and software tools puts SMARTs on an even playing field with RCTs in terms of flexibility of hypothesis testing and data analysis. We have not considered non-inferiority testing in the current manuscript.
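The basic form of such a non-inferiority comparison can be sketched with a normal-approximation confidence bound. The numbers below are purely illustrative and are not results from this study or from the cited methods:

```python
from math import sqrt

def noninferior(mean_new, mean_ref, se_new, se_ref, margin, z=1.96):
    # Normal-approximation non-inferiority check for the difference in mean
    # HbA1c reduction (new minus reference); margin is the largest acceptable
    # shortfall. Returns the lower 95% confidence bound and the conclusion.
    diff = mean_new - mean_ref
    lower = diff - z * sqrt(se_new ** 2 + se_ref ** 2)
    return lower, lower > -margin

# Illustrative numbers only: the app reduces HbA1c by 1.1 points on average
# versus 1.2 for the nurse service, with a non-inferiority margin of 0.4
lower, ok = noninferior(1.1, 1.2, 0.08, 0.08, margin=0.4)
print(f"lower bound {lower:.3f}, non-inferior: {ok}")
```

The new intervention is declared non-inferior when the entire lower tail of the confidence interval for the difference stays above the negative margin, even if the point estimate slightly favors the reference.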
The primary goal of SMART is to learn, through within-patient adaptation of interventions over stages, an optimal strategy that can benefit future patients beyond the trial, not the trial participants per se. As such, it does not allow between-patient adaptation of interventions within the trial, because the randomization probabilities in a SMART are pre-specified. This fixed allocation scheme in a SMART design (as in a conventional RCT) is motivated by the aim of maximizing statistical power, and hence the scientific information gained from the trial. However, there are settings (e.g., implementation studies) where there is an urgent need to translate emerging evidence from ongoing trials into practice, including for the remainder of the trial participants, in order to maximize the benefit to the overall population of interest [20]. This need can be accommodated in both a SMART and an RCT through the machinery of response-adaptive allocation. Such an adaptive SMART or adaptive RCT design would allow modification of the randomization probabilities based on observed outcome data, favoring the treatment sequences that appear empirically better (even if not statistically significantly so), at pre-set interim times during the trial [6, 31, 32]. For simplicity, we chose not to consider such a response-adaptive SMART or RCT in our current simulation study. However, we feel that such designs can potentially be even more attractive in the telemedicine context, optimizing the welfare of trial participants while also finding optimal care strategies for future patients. We view more in-depth study of such designs in the telemedicine arena as important future work.
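A minimal sketch of the response-adaptive idea, using a simple interim update rule and hypothetical response rates (not a method or parameterization from the cited designs):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical true response rates for two first-stage interventions
true_rate = {"app": 0.45, "nurse": 0.60}

probs = {"app": 0.5, "nurse": 0.5}        # start with fixed 1:1 allocation
counts = {k: [0, 0] for k in probs}       # per arm: [responders, assigned]

for i in range(400):
    arm = "app" if rng.random() < probs["app"] else "nurse"
    counts[arm][1] += 1
    counts[arm][0] += int(rng.random() < true_rate[arm])
    # Interim looks every 100 subjects: tilt allocation toward the arm with
    # the higher observed response rate (a simple illustrative rule)
    if (i + 1) % 100 == 0:
        est = {k: (r + 1) / (m + 2) for k, (r, m) in counts.items()}  # smoothed
        probs["app"] = est["app"] / (est["app"] + est["nurse"])
        probs["nurse"] = 1.0 - probs["app"]

print("final allocation probabilities:", probs)
```

Over the course of the trial, allocation drifts toward the empirically better arm, so later participants are more likely to receive it; the cost, as noted above, is some loss of statistical power relative to fixed 1:1 allocation.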