01.12.2014  Research article  Issue 1/2014  Open Access
Adaptive list sequential sampling method for population-based observational studies
Journal: BMC Medical Research Methodology > Issue 1/2014
Important notes
Electronic supplementary material
The online version of this article (doi:10.1186/1471-2288-14-81) contains supplementary material, which is available to authorized users.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors participated in the design of the study. MHH performed the statistical analysis and drafted the manuscript. All authors read and approved the final manuscript.
Background
Population-based observational studies are frequently used to measure the prevalence of characteristics such as diseases by means of a sample from a population [1]. Two important problems that arise when a sample is recruited are that (i) not everyone in the population has the same willingness to participate in the study [2–6], and (ii) after inviting an individual, it might take some time before we receive a response.
Variation in the willingness to participate may bias the results of the study. To deal with this problem, we could invite more individuals from groups with a low willingness to participate [7]. However, this approach requires that the participation probability per person or group is known before the sampling procedure starts. Unfortunately, such detailed knowledge on the willingness to participate among subgroups in the population is often not available. If the willingness to participate is lower than assumed, we will invite too few individuals, which leads to a sample that is too small and a decreased precision. On the other hand, inviting too many individuals leads to extra costs. Generally, we invite too many individuals when we underestimate the willingness to participate and response to the invitation is delayed. In general, not accounting for delayed response will lead to an unexpected number of extra individuals in the sample at the end of the recruitment period.
An example of a complex sampling problem is observed in the HELIUS study [8]. One objective of the HELIUS study is to measure ethnic inequalities in the incidence and prognosis of major diseases in the population of Amsterdam. The desired sample should have approximately 5000 individuals in each ethnic group, and should be representative for the population of Amsterdam. This is achieved by stratifying on the auxiliary variables place of residence (spatial), age (continuous), gender (categorical), and socioeconomic status (categorical), available from municipal registries.
Unfortunately, it is not straightforward to implement stratification when we have a large number of auxiliary variables of mixed types [9]. In this case, too small or even empty strata might be obtained when we cross all strata from all variables. An alternative variance reduction technique, proposed by Grafström et al., is to obtain a well spread set of participants [10, 11]. Basically, a set of participants is well spread when, for every set of auxiliary variables, the number of participants is close to what is expected on average. Grafström et al. showed that the variance of commonly used estimators is usually low with a well spread set of participants.
In this paper, we use the list sequential method developed by Bondesson and Thorburn [12] to obtain a well spread set of participants without replacement from a finite population. Instead of trying to cross all strata from all auxiliary variables, our approach is based on a distance function between individuals: similar or almost similar individuals should seldom both be invited to participate in the study. In its current form, the list sequential sampling method cannot be used to recruit sets of participants for population-based observational studies, because it assumes that (i) everyone participates in the study and that (ii) there is no delayed response to the invitation.
We developed approaches to correct for non-participation and delayed response to the invitation when we use a list sequential sampling method. The list sequential sampling method evaluates individuals from the population in a sequential order, and uses a random process to decide whether or not an individual should be invited to participate in the study. In this decision we have to correct for any non-participation. One approach is to weight the probability of being invited by the (estimated) participation probability. When there is no or only partial a priori knowledge on the participation probability, we can estimate this probability during the recruitment period using the information from already invited individuals. To combine prior information with information that is generated during the recruitment period, we developed a Bayesian approach to estimate the participation probabilities. Moreover, to deal with the delayed response to the invitation, we use the expected response of an individual when we have no answer yet.
We performed a simulation study to illustrate the performance of the adapted list sequential sampling method, when we have unknown heterogeneous participation probabilities and delayed response to the invitation.
Methods
Problem description
We consider a finite population $\mathbf{D}$ containing $n$ individuals, where each individual $i$ is described by a vector $\mathbf{x}_i$ of auxiliary variables. The auxiliary variables $\mathbf{x}_i$ are known for each individual before the recruitment period starts. Usually $\mathbf{x}_i$ is available from municipal or national person registries. Examples of these variables are gender, age, place of residence, and socioeconomic status. In addition to $\mathbf{x}_i$, each individual $i$ has an unobserved outcome of interest $y_i$. The goal of this paper is to obtain a sample of size $m$ ($m<n$) from $\mathbf{D}$, in which we can observe $y_i$.
A sample is described by the vector $\mathbf{s}=(s_1,\dots,s_n)$, where $s_i$ takes the value 1 if individual $i$ is in the sample and 0 otherwise [13, 14]. With this representation there are $2^n$ possible samples. Before the recruitment period starts we need to determine $\pi_i$, which is the probability that individual $i$ is included in $\mathbf{s}$ (i.e. $p(s_i=1)=\pi_i$). We want to recruit a sample of $m$ individuals and therefore $\sum_{i=1}^{n}\pi_i=m$, where $m$ is a positive integer.
Different choices can be made for the inclusion probabilities $\pi_i$. For instance, we can assign equal inclusion probabilities to all individuals, i.e. $\pi_i=m/n$. In this case, the sample $\mathbf{s}$ is expected to be a ‘miniature’ version of the population $\mathbf{D}$, because we expect $\mathbf{s}$ to have approximately the same composition of auxiliary characteristics as $\mathbf{D}$. In this case, the sample is referred to as a representative sample [11]. However, $\pi_i$ is frequently chosen to be proportional to $\mathbf{x}_i$. For example, by oversampling a rare subgroup we could increase the precision of the result for that particular subgroup [15].
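As a brief illustration (our own sketch, not from the paper), both choices of inclusion probabilities can be constructed as follows; `size_var` is a hypothetical positive auxiliary variable used for proportional allocation:

```python
import numpy as np

def equal_inclusion(n, m):
    # Representative design: every individual gets pi_i = m / n.
    return np.full(n, m / n)

def proportional_inclusion(size_var, m):
    # pi_i proportional to a positive auxiliary variable, scaled so
    # that the inclusion probabilities sum to the sample size m.
    size_var = np.asarray(size_var, dtype=float)
    pi = m * size_var / size_var.sum()
    # Probabilities above 1 cannot be realised; cap them at 1
    # (after capping, the sum may fall slightly below m).
    return np.minimum(pi, 1.0)

pi_equal = equal_inclusion(n=4000, m=400)
pi_prop = proportional_inclusion([1.0, 2.0, 3.0, 4.0], m=2)
```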
List sequential sampling method
To obtain the sample we use the list sequential method, based on sampling without replacement, developed by Bondesson and Thorburn [12]. To illustrate the list sequential method, we first consider the situation in which all invited individuals will participate in the study.
During the recruitment period, we sequentially decide for each individual $i$ from $\mathbf{D}$ whether we include this individual in the sample ($s_i=1$) or not ($s_i=0$). After this decision, the probability of being included in the sample is updated for the remaining non-invited individuals from $\mathbf{D}$. Let $\boldsymbol{\pi}^{(0)}=(\pi_1^{(0)},\dots,\pi_n^{(0)})$ be the vector of initial inclusion probabilities, which is determined before the sampling procedure starts, i.e. $\pi_i^{(0)}=\pi_i$. We sequentially evaluate each individual $i$ from the population and update the inclusion probabilities of all non-evaluated individuals after each evaluation. For the first individual, we have $p(s_1)=\pi_1^{(0)}$. Depending on whether individual 1 is included in the sample or not, the inclusion probabilities of all other, non-evaluated, individuals are updated. This gives us the vector $\boldsymbol{\pi}^{(1)}$, from which we use $\pi_2^{(1)}$ to determine $s_2$; i.e. decide whether to include the second individual in the sample or not. The updating scheme can be represented as
$\begin{array}{cc}\boldsymbol{\pi}^{(0)}&=\left(\pi_1^{(0)}\;\;\pi_2^{(0)}\;\;\pi_3^{(0)}\;\;\pi_4^{(0)}\;\;\dots\;\;\pi_n^{(0)}\right)\\ \boldsymbol{\pi}^{(1)}&=\left(s_1\;\;\pi_2^{(1)}\;\;\pi_3^{(1)}\;\;\pi_4^{(1)}\;\;\dots\;\;\pi_n^{(1)}\right)\\ \boldsymbol{\pi}^{(2)}&=\left(s_1\;\;s_2\;\;\pi_3^{(2)}\;\;\pi_4^{(2)}\;\;\dots\;\;\pi_n^{(2)}\right)\\ \boldsymbol{\pi}^{(3)}&=\left(s_1\;\;s_2\;\;s_3\;\;\pi_4^{(3)}\;\;\dots\;\;\pi_n^{(3)}\right)\end{array}$
Generally, when we evaluate individual $i$, we use the inclusion probability $\pi_i^{(i-1)}$ to determine $s_i$. After the evaluation of individual $i$, we update all probabilities $\pi_j^{(i)}$, for $j>i$, with
$\pi_j^{(i)}=\pi_j^{(i-1)}-\left(s_i-\pi_i^{(i-1)}\right)w_{ji}^{(i)},$
(1)
where $w_{ji}^{(i)}$ are weights that may depend on $s_1,s_2,\dots,s_{i-1}$. Note that $w_{ji}^{(i)}$ determines how $\pi_j^{(i)}$ is affected by the sampling outcome of individual $i$, since $w_{ji}^{(i)}$ influences the second order inclusion probability $p(s_i=1,s_j=1)$. The sampling scheme gives a sample of size $m$ when the weights are restricted to sum to one, i.e. $\sum_{j=i+1}^{n}w_{ji}^{(i)}=1$. To guarantee that $0\le\pi_j^{(i)}\le 1$, all weights should satisfy
$-\min\left(\frac{1-\pi_j^{(i-1)}}{1-\pi_i^{(i-1)}},\;\frac{\pi_j^{(i-1)}}{\pi_i^{(i-1)}}\right)\le w_{ji}^{(i)}\le \min\left(\frac{\pi_j^{(i-1)}}{1-\pi_i^{(i-1)}},\;\frac{1-\pi_j^{(i-1)}}{\pi_i^{(i-1)}}\right)$
(2)
Within these bounds, we can impose different restrictions on $w_{ji}^{(i)}$, resulting in samples with certain characteristics. Generally, when $w_{ji}^{(i)}>0$ we have $\mathrm{corr}(s_i,s_j)<0$ (i.e. a negative correlation between the sampling indicators of individuals $i$ and $j$), whereas with $w_{ji}^{(i)}<0$ we have $\mathrm{corr}(s_i,s_j)>0$. For more detail about the list sequential method, we refer the reader to respectively theorem 1 and remark 1 from Bondesson and Thorburn [12].
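To make the scheme concrete, the following is a minimal sketch (ours, not the authors' code) of list sequential sampling with the updating rule (1) and the weight bounds (2); `weight_fn` is a hypothetical callback that returns the preliminary weight for a pair of individuals:

```python
import numpy as np

rng = np.random.default_rng(0)

def list_sequential_sample(pi0, weight_fn):
    """Bondesson-Thorburn list sequential sampling without replacement.

    pi0       : initial inclusion probabilities (should sum to an integer m)
    weight_fn : weight_fn(i, j, pi) -> weight for pair (i, j), with j > i
    """
    pi = np.asarray(pi0, dtype=float).copy()
    n = len(pi)
    s = np.zeros(n, dtype=int)
    for i in range(n):
        s[i] = rng.random() < pi[i]          # decide s_i using pi_i^(i-1)
        for j in range(i + 1, n):
            # Clip the weight to the bounds (2) so pi_j stays in [0, 1].
            lo = -min((1 - pi[j]) / (1 - pi[i]) if pi[i] < 1 else np.inf,
                      pi[j] / pi[i] if pi[i] > 0 else np.inf)
            hi = min(pi[j] / (1 - pi[i]) if pi[i] < 1 else np.inf,
                     (1 - pi[j]) / pi[i] if pi[i] > 0 else np.inf)
            w = np.clip(weight_fn(i, j, pi), lo, hi)
            pi[j] = np.clip(pi[j] - (s[i] - pi[i]) * w, 0.0, 1.0)  # rule (1)
        pi[i] = s[i]
    return s

# Equal weights 1/(n - i - 1) over the remaining individuals sum to one,
# so the realised sample size stays (approximately) fixed at m.
n, m = 20, 5
s = list_sequential_sample(np.full(n, m / n),
                           lambda i, j, pi: 1.0 / (n - i - 1))
```

When the clipping in (2) is triggered, the weights no longer sum exactly to one, so the sample size becomes approximate rather than exact.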
Well spread samples
We are interested in recruiting a well spread sample with the list sequential sampling method. Usually, a well spread sample leads to parameter estimates with low variances. Before we can introduce the definition of a well spread sample, we require the concept of coherent subsets. Let $d(i,k)$ be the distance between individuals $i$ and $k$. A subset $\mathbf{D}'$ of the population $\mathbf{D}$ is coherent if the following holds. First, let some individual $i\in\mathbf{D}'$. Individual $k$ is included in $\mathbf{D}'$ if and only if $d(i,k)\le r$, where $r\ge 0$. Consequently, $\mathbf{D}'$ can be constructed by including all individuals within a ball of radius $r$ around individual $i$.
Grafström and Schelin considered a sample to be well spread with respect to the inclusion probabilities $\boldsymbol{\pi}$ when, for every coherent subset $\mathbf{D}'\subset\mathbf{D}$,
$n'\approx \sum_{i\in\mathbf{D}'}\pi_i,$
(3)
where $n'$ denotes the number of participants in $\mathbf{D}'$.
A smaller distance to individual $i$ increases the probability of being included in the coherent subset $\mathbf{D}'$. To satisfy (3), it is clear that the inclusion probability of individual $i$ should be influenced more by the sampling indicators $s$ of individuals at a smaller distance. We propose to measure the distance between individuals with the auxiliary variables $\mathbf{x}$, where $d(\mathbf{x}_i,\mathbf{x}_k)$ is the distance between individuals $i$ and $k$. Based on the types of auxiliary variables, we can choose, for instance, the Mahalanobis or the Manhattan distance.
To obtain a well spread sample with the list sequential sampling method, we will use preliminary weights which are specified before the recruitment period starts. The preliminary weight $\tilde{w}_k^{(i)}$ reflects the effect of the sampling indicator $s_k$ of individual $k$ on the inclusion probability of individual $i$. The weights are referred to as preliminary because the upper bound from (2) has an effect on the conditional inclusion probabilities.
The preliminary weights are constructed in the following way. Let $c_k^{(i)}$ be the rank of the distance of the $k$th individual to individual $i$, where $k\ne i$. We rank the distances in ascending order, where we assign $c_k^{(i)}=1$ to the closest individual, $c_k^{(i)}=2$ to the second closest individual, and so on. To construct the preliminary weights, we could use the linear function
$\tilde{w}_k^{(i)}=\mu+c_k^{(i)}\lambda,$
(4)
where $\mu$ and $\lambda\le 0$ are arbitrarily chosen constants. The sampling indicator $s_k$ of individual $k$ then has a larger effect on individuals at a smaller distance, and less effect on individuals further away. To recruit a set of approximately $m$ individuals, we restrict the weights to satisfy $\sum_{k\ne i}\tilde{w}_k^{(i)}=1$.
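A small sketch (ours) of the rank-based weights in (4); for simplicity, the weights are rescaled to sum to one afterwards, rather than choosing $\mu$ and $\lambda$ to satisfy the constraint directly:

```python
import numpy as np

def preliminary_weights(dist_row, mu, lam):
    """Rank-based preliminary weights for one individual, equation (4).

    dist_row : distances d(x_i, x_k) to all other individuals (self excluded)
    mu, lam  : intercept and non-positive slope of the linear rank function
    """
    ranks = np.argsort(np.argsort(dist_row)) + 1  # closest individual: rank 1
    w = np.maximum(mu + ranks * lam, 0.0)         # weights cannot be negative
    return w / w.sum()                            # rescale so they sum to one

d = np.array([0.3, 1.2, 0.7, 2.5])   # toy distances from individual i
w = preliminary_weights(d, mu=0.5, lam=-0.1)
```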
Heterogeneous participation probabilities
A problem of sampling from population $\mathbf{D}$ is that individuals who are invited to participate in the study can decline the invitation. Let $\mathbf{b}=(b_1,\dots,b_n)$ be the vector that indicates whether an individual $i$ is invited to participate ($b_i=1$) or not ($b_i=0$). When individual $i$ refuses to participate in the study, we have $s_i=0$ and we do not observe $y_i$. Let $\boldsymbol{\phi}=(\phi_1,\dots,\phi_n)$ be the vector that contains the participation probability per person in the population, where $\phi_i=p(s_i=1\mid b_i=1)$. Note that when every invitee participates (i.e. $\phi_i=1$ for $i=1,\dots,n$), we have $\mathbf{s}=\mathbf{b}$.
Let $\tilde{\pi}_i^{(i-1)}$ be the inclusion probability $\pi_i^{(i-1)}$ corrected for non-participation, i.e. the probability that individual $i$ from $\mathbf{D}$ is invited to participate in the study. When $\phi_i$ is known before the recruitment period starts, non-participation can be dealt with by using $\tilde{\pi}_i^{(i-1)}=\pi_i^{(i-1)}/\phi_i$ as the probability to invite individual $i$. Moreover, we can use the updating rule from (1) to update the inclusion probabilities $\pi_j^{(i)}$, $j>i$, of the non-evaluated individuals after individual $i$ has responded to the invitation. This will give us a sample that approximately satisfies the inclusion probabilities $\boldsymbol{\pi}$.
The following small sampling problem illustrates this modification. Consider that, for the first individual, we have $\pi_1^{(0)}=0.25$ and $\phi_1=0.5$. The probability to invite this individual is therefore $\tilde{\pi}_1^{(0)}=0.5$. Using this strategy there might be some individuals $i$ with $\tilde{\pi}_i^{(i-1)}>1$. This means that the participation probability of individual $i$ is too low with respect to $\pi_i^{(i-1)}$; the desired probability of being included in $\mathbf{s}$ cannot be reached for individual $i$. For instance, this would happen in the example above for individual 1 when $\phi_1=0.1$ and consequently $\tilde{\pi}_1^{(0)}=2.5$. This means that we would have to invite individual 1 two and a half times to satisfy $\pi_1^{(0)}$. Because we can only invite an individual once, we restrict all values $\tilde{\pi}_i^{(i-1)}$ to be one or lower.
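The correction above amounts to a one-line rule; a sketch (ours) using the example values from this paragraph:

```python
def invitation_probability(pi_i, phi_i):
    # Invite with probability pi_i / phi_i so that, after refusals, the
    # realised inclusion probability is approximately pi_i. Values above
    # 1 cannot be realised (an individual is invited at most once), so
    # the ratio is truncated at 1.
    return min(pi_i / phi_i, 1.0)

p1 = invitation_probability(0.25, 0.5)   # example from the text: 0.5
p2 = invitation_probability(0.25, 0.1)   # the ratio 2.5 is truncated to 1.0
```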
Adaptive list sequential sampling method
Usually, $\phi_i$ is not known before the recruitment period starts. In this section we suggest how $\phi_i$ can be estimated adaptively during the recruitment period. In addition, we consider delayed response to the invitation.

For each individual, we have some knowledge about the willingness to participate before the recruitment period starts. For example, we might have participation estimates from a small pilot study or from previously performed studies. In addition, information from the invited individuals becomes available during the recruitment period. Therefore, we propose a Bayesian method to estimate the participation probability of individual $i$ during the recruitment period, in which we use both the available prior knowledge and the information that becomes available during the recruitment period.
Let $\mathbf{z}_i$ be the vector of all observed characteristics of individual $i$ which are related to the participation probability. We assume a missing at random type of mechanism for the participation probabilities, where the participation probability of individual $i$ only depends on the observed characteristics $\mathbf{z}_i$, i.e. $p(s_i=1\mid b_i=1,\mathbf{z}_i)$. The participation probability can be written as
$p(s_i=1\mid b_i=1,\mathbf{z}_i,\alpha,\boldsymbol{\beta})=\frac{\exp\{\alpha+f(\mathbf{z}_i,\boldsymbol{\beta})\}}{1+\exp\{\alpha+f(\mathbf{z}_i,\boldsymbol{\beta})\}},$
(5)
where $\alpha$ is the intercept term, and $f(\cdot)$ is a function of the observed characteristics $\mathbf{z}_i$ and the regression weights $\boldsymbol{\beta}$. Because more information becomes available during the recruitment period, the participation probability estimates become more accurate. The vector of estimated participation probabilities of all $n$ individuals after the evaluation of individual $i$ is denoted as $\hat{\boldsymbol{\phi}}^{(i)}=(\hat{\phi}_1^{(i)},\dots,\hat{\phi}_n^{(i)})$. We then adapt the inclusion probabilities as $\tilde{\pi}_i^{(i-1)}=\pi_i^{(i-1)}/\hat{\phi}_i^{(i-1)}$.
After an invitation has been sent to an individual, it might take some time to get a response. Let $u_j^{(i)}$ be the indicator of whether individual $j$ has responded to the invitation before individual $i$ is evaluated, where $u_j^{(i)}=1$ when we observe $s_j$ and $u_j^{(i)}=0$ when we do not observe the participation indicator $s_j$ during the evaluation of individual $i$. Note that when individual $j$ has not been invited (i.e. $b_j=0$), $s_j=0$, since individual $j$ is not included in the set of participants. A problem of delayed response is that we cannot use the updating rule from (1) to determine $\pi_j^{(i)}$ when the participation indicator of a previous individual is not observed. Consequently, we cannot update $\pi_j^{(i)}$, which means that our sampling method is less successful in recruiting a well spread sample. As a solution, we propose to use the data from all previously invited individuals, and replace the non-observed participation indicators with their estimated expected value. We use this approach in step 1 of the adaptive list sequential sampling method listed below.
Before we start the adaptive list sequential sampling method, we specify the vector $\boldsymbol{\pi}^{(0)}=\boldsymbol{\pi}$, which contains the initial probability of being included in $\mathbf{s}$ for every individual $i$ in $\mathbf{D}$. The desired number of individuals in $\mathbf{s}$ is $m=\sum_{i=1}^{n}\pi_i^{(0)}$, where $m$ is a positive integer. The first individual from $\mathbf{D}$ is invited with probability $\tilde{\pi}_1^{(0)}=\pi_1^{(0)}/\hat{\phi}_1^{(0)}$, where $\hat{\phi}_1^{(0)}$ is an initial guess of the participation probability of the first individual. All other individuals from $\mathbf{D}$ are invited in a sequential way, where the steps of the adaptive list sequential sampling method for individual $i=2,\dots,n$ are as follows.
1. Calculate $\pi_i^{(i-1)}$. To deal with delayed response to the invitation, we propose to use a modified version of the column-wise updating rule proposed by Bondesson and Thorburn [12]. We calculate $\pi_i^{(i-1)}$ by iterating over $k=1,2,\dots,i-1$, where
$\pi_i^{(k)}=\pi_i^{(k-1)}-\left(s_k-\pi_k^{(k-1)}\right)w_k^{(i)},$
(6)
and $w_k^{(i)}$ is calculated as
$w_k^{(i)}=\min\left(\tilde{w}_k^{(i)},\;\frac{\pi_i^{(k-1)}}{1-\pi_k^{(k-1)}},\;\frac{1-\pi_i^{(k-1)}}{\pi_k^{(k-1)}}\right).$
The weight $w_k^{(i)}$ determines the effect of $s_k$ on $\pi_i^{(k)}$ and therefore also on $\pi_i^{(i-1)}$. The choice of the preliminary weights $\tilde{w}_k^{(i)}$ is discussed in the previous section. Because (6) still requires the observed indicators $s_1,s_2,\dots,s_{i-1}$, we modify (6) to deal with delayed response to the invitation. When $u_k^{(i)}=0$, we replace $s_k$ with its estimated expectation $\hat{\phi}_k^{(i-1)}b_k$, where $\hat{\phi}_k^{(i-1)}$ is the participation probability estimate of individual $k$ from the previous evaluation $i-1$. The delayed response adjusted column-wise updating rule from (6) is
$\pi_i^{(k)}=\left\{\begin{array}{ll}\pi_i^{(k-1)}-\left(s_k-\pi_k^{(k-1)}\right)w_k^{(i)}&\text{if}\;u_k^{(i)}=1,\\ \pi_i^{(k-1)}-\left(\hat{\phi}_k^{(i-1)}b_k-\pi_k^{(k-1)}\right)w_k^{(i)}&\text{if}\;u_k^{(i)}=0.\end{array}\right.$
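Step 1 can be sketched as follows (our own illustration of the adjusted column-wise rule, with hypothetical argument names):

```python
import numpy as np

def columnwise_update(pi_i0, pi_hist, s, b, u, phi_hat, w_tilde):
    """Column-wise computation of pi_i^(i-1) (step 1), with delayed response.

    pi_i0   : initial inclusion probability pi_i^(0) of individual i
    pi_hist : pi_k^(k-1), the probability used when individual k was evaluated
    s, b, u : participation, invitation and response-observed indicators
    phi_hat : current participation probability estimates phi_k^(i-1)
    w_tilde : preliminary weights w~_k^(i), one per previous individual
    """
    pi = pi_i0
    for k in range(len(pi_hist)):
        # Truncate the preliminary weight so that pi stays in [0, 1].
        w = min(w_tilde[k],
                pi / (1 - pi_hist[k]) if pi_hist[k] < 1 else np.inf,
                (1 - pi) / pi_hist[k] if pi_hist[k] > 0 else np.inf)
        # Use the observed indicator when available; otherwise replace
        # s_k by its estimated expectation phi_k * b_k.
        s_k = s[k] if u[k] == 1 else phi_hat[k] * b[k]
        pi = pi - (s_k - pi_hist[k]) * w
    return pi

# Individual 3 is evaluated; individual 1 participated, individual 2 was
# invited but has not responded yet (u = 0).
pi_3 = columnwise_update(0.1, pi_hist=[0.1, 0.1], s=[1, 0], b=[1, 1],
                         u=[1, 0], phi_hat=[0.5, 0.5], w_tilde=[0.05, 0.05])
```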
2. Calculate $\tilde{\pi}_i^{(i-1)}$. Decide whether individual $i$ should be invited to participate in the study, where $b_i=1$ if the individual is invited and $b_i=0$ if not. This decision is based on the probability of being invited,
$\tilde{\pi}_i^{(i-1)}=\frac{\pi_i^{(i-1)}}{\hat{\phi}_i^{(i-1)}},$
(7)
where $\hat{\phi}_i^{(i-1)}$ is the participation probability estimated from the previous evaluation $i-1$. We draw the decision to invite individual $i$ from a Bernoulli distribution with $p(b_i=1)=\tilde{\pi}_i^{(i-1)}$.
3. Update the vector $\hat{\boldsymbol{\phi}}^{(i)}$. Let $\mathbf{R}^{(i)}=\{r;\;b=1,\;u^{(i)}=1,\;r\in\mathbf{D}\}$ be the set of all $m_i$ individuals that have responded to the invitation to participate. Each individual from $\mathbf{R}^{(i)}$ is described by $r=(s,\mathbf{z})$, where $s=1$ when invitee $r$ participates and $s=0$ otherwise, and $\mathbf{z}$ is a vector of known characteristics. The participation probability of individual $k$ is defined as in (5). Because we might have some a priori knowledge about the intercept $\alpha$ and the regression weights $\boldsymbol{\beta}$, we use Bayesian inference to estimate the posterior distribution $g(\alpha,\boldsymbol{\beta}\mid\mathbf{R}^{(i)})$, i.e.
$g\left(\alpha,\boldsymbol{\beta}\mid\mathbf{R}^{(i)}\right)=\frac{h\left(\mathbf{R}^{(i)}\mid\alpha,\boldsymbol{\beta}\right)f(\alpha,\boldsymbol{\beta}\mid\boldsymbol{\theta})}{\int_{(\alpha,\boldsymbol{\beta})}h\left(\mathbf{R}^{(i)}\mid\alpha,\boldsymbol{\beta}\right)f\left(\alpha,\boldsymbol{\beta}\mid\boldsymbol{\theta}\right)\,d(\alpha,\boldsymbol{\beta})},$
(8)
where $\boldsymbol{\theta}$ is a vector of parameters, and $f(\cdot)$ is the prior distribution of $(\alpha,\boldsymbol{\beta})$. The likelihood of $\mathbf{R}^{(i)}$ given $(\alpha,\boldsymbol{\beta})$ is
$h\left(\mathbf{R}^{(i)}\mid\alpha,\boldsymbol{\beta}\right)=\prod_{\ell=1}^{m_i}p\left(s_\ell=1\mid\mathbf{z}_\ell,\alpha,\boldsymbol{\beta}\right)^{s_\ell}\left\{1-p(s_\ell=1\mid\mathbf{z}_\ell,\alpha,\boldsymbol{\beta})\right\}^{1-s_\ell},$
where $p(s_\ell=1\mid\mathbf{z}_\ell,\alpha,\boldsymbol{\beta})$ is given by (5). Following (8), we update the vector of estimated participation probabilities $\hat{\boldsymbol{\phi}}^{(i)}$, where for individual $k=1,\dots,n$
$\hat{\phi}_k^{(i)}=\int_{(\alpha,\boldsymbol{\beta})}p(s_k=1\mid\mathbf{z}_k,\alpha,\boldsymbol{\beta})\,g\left(\alpha,\boldsymbol{\beta}\mid\mathbf{R}^{(i)}\right)\,d(\alpha,\boldsymbol{\beta}).$
To estimate $\hat{\phi}_k^{(i)}$, we can use quadrature or MCMC methods. The values of $\boldsymbol{\theta}$ depend on the amount of prior knowledge that is available before the recruitment period starts. For instance, we can assume that $(\alpha,\boldsymbol{\beta})$ is sampled from some flat distribution with a large variance when no prior knowledge is available.
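For the intercept-only special case of (5), the posterior update can be done by simple grid quadrature; the sketch below (ours) uses the flat N(0, 100) prior from the simulations, and with covariates it would be replaced by multidimensional quadrature or MCMC:

```python
import numpy as np

def posterior_participation(responses, grid=np.linspace(-10, 10, 2001)):
    """Posterior mean participation probability for an intercept-only model.

    alpha has a flat N(0, 100) prior, the likelihood is Bernoulli with
    p = invlogit(alpha), and the posterior expectation of invlogit(alpha)
    given the observed responses is computed by quadrature on a grid.
    """
    p = 1.0 / (1.0 + np.exp(-grid))                    # invlogit(alpha)
    s = np.asarray(responses, dtype=float)
    log_like = s.sum() * np.log(p) + (len(s) - s.sum()) * np.log(1 - p)
    log_post = log_like - grid**2 / (2 * 100.0)        # + log prior
    post = np.exp(log_post - log_post.max())
    post /= post.sum()                                 # normalise on the grid
    return float((p * post).sum())

phi_hat = posterior_participation([1, 1, 1, 0])  # 3 of 4 invitees participated
```

With no responses the estimate stays at the prior mean of 0.5, matching the initial value used in the simulations.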
Simulations
We illustrated the performance of the adaptive list sequential sampling method with two simulations. In these two simulations, we created populations with unknown heterogeneous willingness to participate and delayed response to the invitation. The first simulation focused on recruiting a well spread, representative set of participants. In the second simulation, we investigated stratified sampling from a population in which some subgroups were overrepresented.
Simulation 1
Consider a population $\mathbf{D}$ of size $n=4000$ from which we drew a random sample without replacement of size $m=400$ with the adaptive list sequential sampling method. To recruit a representative sample from the population, we assigned equal inclusion probabilities to all individuals, i.e. $\pi_i^{(0)}=m/n=0.1$ for $i=1,\dots,n$. When the sample is well spread, the distribution of the auxiliary characteristics $\mathbf{x}$ should be approximately similar in the population and the sample.
The data was generated as follows. The vector $\mathbf{z}_i$ was drawn from a multivariate normal distribution with means zero and covariances zero. The probability of positively responding to the invitation was $p(s_i=1\mid b_i=1,\mathbf{z}_i)=\text{invlogit}[\alpha+\mathbf{z}_i\boldsymbol{\beta}]$, where invlogit denotes the inverse logit transformation, $\alpha=1$, and $\boldsymbol{\beta}=(0.3,-0.7,0.1,0.4)$. The response was drawn from a Bernoulli distribution with probability $p(s_i=1\mid b_i=1,\mathbf{z}_i)$. In addition, for individual $i$, delayed response to the invitation was simulated by drawing a time $t_i$ from a Poisson distribution with expectation 15. Individual $i$ responded to the invitation after the evaluation of individual $i+t_i$. Thus, if $t_i=0$, individual $i$ responded immediately to the invitation.
For individual $i$, the characteristics $\mathbf{x}_i$ were drawn from a multivariate normal distribution with means zero, variances one, and covariance matrix
$\left(\begin{array}{cccc}1.00&0.20&0.50&0.30\\ 0.20&1.00&0.20&0.40\\ 0.50&0.20&1.00&0.20\\ 0.30&0.40&0.20&1.00\end{array}\right).$
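The simulated population can be generated as follows (a sketch; the random seed is our own choice, and unit variances are assumed for $\mathbf{z}_i$ since only zero means and zero covariances are stated):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Covariates z driving participation (unit variances assumed).
z = rng.standard_normal((n, 4))
alpha, beta = 1.0, np.array([0.3, -0.7, 0.1, 0.4])
phi = 1.0 / (1.0 + np.exp(-(alpha + z @ beta)))  # invlogit

# Delayed response: individual i answers after the evaluation of i + t_i.
t = rng.poisson(15, size=n)

# Auxiliary variables x with the stated covariance matrix.
cov = np.array([[1.0, 0.2, 0.5, 0.3],
                [0.2, 1.0, 0.2, 0.4],
                [0.5, 0.2, 1.0, 0.2],
                [0.3, 0.4, 0.2, 1.0]])
x = rng.multivariate_normal(np.zeros(4), cov, size=n)
```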
To obtain a well spread and representative sample, we used the adaptive list sequential method. To satisfy (3), we used the Mahalanobis distance to quantify the distance between individuals. We ranked the distances in ascending order and used the ranks to determine the preliminary weights $\tilde{w}_k^{(i)}$, for $i=1,\dots,n$ and $k\ne i$. Using (4), we specified the following adaptive list sequential sampling methods with different characteristics.
Simple random sampling: Assign zero to all weights $\tilde{w}_k^{(i)}$. Consequently, $w_k^{(i)}=0$ and therefore $\pi_i^{(i-1)}=\pi_i^{(0)}$. With these weights, we used the initial inclusion probability $\pi_i^{(0)}$ to determine whether we should invite individual $i$.
Adjusted sampling 1: The inclusion probability $\pi_i^{(i-1)}$ of individual $i$ was equally influenced by all $n-1=3999$ other individuals by using the preliminary weights $\tilde{w}_k^{(i)}=1/3999$.
Adjusted sampling 2: Only the 50 nearest neighbors of individual $i$ influenced the inclusion probability $\pi_i^{(i-1)}$, by using the preliminary weights
$\tilde{w}_k^{(i)}=\left\{\begin{array}{ll}1/50&\text{if}\;c_k^{(i)}\le 50,\\ 0&\text{otherwise}.\end{array}\right.$
We used an estimated participation probability to deal with non-participation. Two different approaches to estimate the participation probability were evaluated. The first approach was to use all available data to estimate the participation probability, i.e. $\hat{\phi}_i^{(i-1)}=p(s_i=1\mid b_i=1,\mathbf{z}_i)=\text{invlogit}[\hat{\alpha}+\mathbf{z}_i\hat{\boldsymbol{\beta}}]$. With the second approach, we assumed that $\mathbf{z}_i$ had no impact on the participation probability, i.e. $\hat{\phi}_i^{(i-1)}=\text{invlogit}[\hat{\alpha}]$. The second approach was used to investigate whether misspecifying $\hat{\phi}_i^{(i-1)}$ had a large impact on how well the sample was spread.
We assumed that we had no prior knowledge about the participation probability before the recruitment period started. Therefore, flat, non-informative priors were used for $\alpha$ and all regression weights $\boldsymbol{\beta}$, by assuming they followed normal distributions with means zero and variance 100. Because we assumed zero means, the initial estimated participation probabilities were 50%, i.e. $\hat{\phi}_i^{(0)}=0.5$ for $i=1,\dots,n$.
We quantified how well a sample was spread with the following measure based on Voronoi polytopes, suggested by Grafström and Lundström [10]. Let individual $i\in\mathbf{s}$, i.e. individual $i$ is included in the set of participants $\mathbf{s}$. The Voronoi polytope $v_i$ consists of all individuals $j$ from the population $\mathbf{D}$ for which $d(\mathbf{x}_i,\mathbf{x}_j)\le d(\mathbf{x}_k,\mathbf{x}_j)$ for all other individuals $k\in\mathbf{s}$. Note that when $d(\mathbf{x}_i,\mathbf{x}_j)=d(\mathbf{x}_k,\mathbf{x}_j)$, individual $j$ is included in both polytopes $v_i$ and $v_k$, but weighted with 1/2.
Let
q
_{ i } be the sum of initial inclusion probabilities of the individuals in
v
_{ i },
${q}_{i}=\underset{j\in {v}_{i}}{\sum}{\pi}_{j}.$
Grafström and Lundström showed that a sample can be considered well spread if q_i is one or close to one for all polytopes v_i. Therefore, how well a sample is spread can be quantified with
$R = \frac{1}{n} \sum_{i \in \mathbf{s}} (q_i - 1)^2,$
where a low R corresponds to a well spread sample. To investigate how well the adaptive list sequential sampling methods performed in recruiting a well spread sample, the simulation was performed 1000 times. We calculated the mean and variance of R, and the average number of recruited participants. Note that the best adaptive list sequential sampling method should give us a set of approximately 400 participants with a low R in every simulation.
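The balance measure R can be sketched in a few lines of Python. This is our own minimal illustration, assuming one-dimensional auxiliary data and, for simplicity, assigning a tied individual to a single polytope rather than splitting it with weight 1/2 as in the definition above:

```python
import numpy as np

def spatial_balance(x, sample_idx, pi):
    """Compute R = (1/n) * sum_{i in s} (q_i - 1)^2 for a 1-D population x."""
    n = len(x)
    q = {i: 0.0 for i in sample_idx}
    for j in range(n):
        # individual j falls in the Voronoi polytope of its nearest participant
        distances = [abs(x[j] - x[i]) for i in sample_idx]
        nearest = sample_idx[int(np.argmin(distances))]
        q[nearest] += pi[j]
    return sum((q_i - 1.0) ** 2 for q_i in q.values()) / n

# A perfectly balanced toy example: each polytope collects probability exactly 1
R = spatial_balance([0.0, 1.0, 2.0, 3.0], [0, 3], [0.5, 0.5, 0.5, 0.5])
```

In the toy example each participant's polytope sums to 1, so R = 0, the best possible spread.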
Simulation 2
In simulation 2, we considered a population D of size n=5000, in which each individual was described by a categorical auxiliary variable x_i and an unobserved binary outcome of interest y_i. The auxiliary variable x_i had five possible values g. The main goal of this simulation was to estimate the sum of the outcome y in the population, denoted as $Y = \sum_{i=1}^{n} y_i$, with a set of participants in which we can measure y. Moreover, we had resources to measure y in a set of participants of size m=500. The set of participants was obtained with an adaptive list sequential sampling method in which we dealt with nonparticipation during the recruitment period.
Individuals in different subgroups had different participation probabilities and different frequencies of the outcome y. The characteristics of the population were
$\begin{array}{rccccc} g = & (1 & 2 & 3 & 4 & 5)\\ p(x_i = g) = & (40\% & 20\% & 20\% & 10\% & 10\%)\\ p(s_i = 1 \mid b_i = 1, x_i = g) = & (50\% & 60\% & 70\% & 80\% & 90\%)\\ p(y_i = 1 \mid x_i = g) = & (10\% & 20\% & 30\% & 40\% & 50\%)\end{array}$
where p(s_i = 1 | b_i = 1, x_i = g) was the participation probability of individual i given x_i = g, i.e. for individual i the probability of participating depended on x_i. The response to an invitation was drawn from a Bernoulli distribution with probability p(s_i = 1 | b_i = 1, x_i = g). Moreover, $E(Y) = n \sum_{g=1}^{5} p(y_i = 1 \mid x_i = g)\, p(x_i = g) = 1150$.
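This population is easy to generate; the sketch below uses our own variable names, with only the probabilities taken from the text. The analytic check reproduces E(Y) = 5000 × 0.23 = 1150:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 5000
p_g = np.array([0.40, 0.20, 0.20, 0.10, 0.10])            # p(x_i = g)
p_participate = np.array([0.50, 0.60, 0.70, 0.80, 0.90])  # p(s_i = 1 | b_i = 1, x_i = g)
p_y = np.array([0.10, 0.20, 0.30, 0.40, 0.50])            # p(y_i = 1 | x_i = g)

x = rng.choice(np.arange(1, 6), size=n, p=p_g)  # auxiliary group of each individual
y = rng.random(n) < p_y[x - 1]                  # unobserved binary outcome

expected_Y = n * np.sum(p_y * p_g)              # analytic E(Y) = 1150
```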
The individuals in the set of participants s were used to estimate Y, denoted as $\hat{Y}_{HT}$, where we used the Horvitz-Thompson estimator and its variance [14–16] to determine $\hat{Y}_{HT}$. The estimate $\hat{Y}_{HT}$ was calculated as
$\hat{Y}_{HT} = \sum_{j \in \mathbf{s}} \frac{y_j}{\pi_j^{(0)}}$
(9)
where $\pi_j^{(0)}$ was the desired probability of being included in the set of participants s, specified before the recruitment period started. The variance of $\hat{Y}_{HT}$ was approximated with
$\hat{V}(\hat{Y}_{HT}) = \sum_{i \in \mathbf{s}} \sum_{j \in \mathbf{s}} \frac{\pi_{ij}^{(0)} - \pi_i^{(0)} \pi_j^{(0)}}{\pi_{ij}^{(0)}} \frac{y_i}{\pi_i^{(0)}} \frac{y_j}{\pi_j^{(0)}}$
where $\pi_{ij}^{(0)}$ is the second-order joint inclusion probability of the i-th and j-th individuals in s, i.e. $\pi_{ij}^{(0)} = p(s_i = 1, s_j = 1)$. To determine $\pi_{ij}^{(0)}$, we used the sample-based approximation technique proposed by Hájek [17, 18].
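The point estimate in (9) is straightforward to compute. The sketch below implements only the Horvitz-Thompson total, not the variance or the Hájek approximation of the joint inclusion probabilities:

```python
import numpy as np

def horvitz_thompson(y_s, pi_s):
    """Horvitz-Thompson estimate of the population total Y:
    each participant's outcome is weighted by 1 / pi_j^(0)."""
    y_s = np.asarray(y_s, dtype=float)
    pi_s = np.asarray(pi_s, dtype=float)
    return float(np.sum(y_s / pi_s))
```

For instance, three participants with outcomes (1, 0, 1) and inclusion probabilities (0.5, 0.5, 0.25) give an estimated total of 1/0.5 + 0 + 1/0.25 = 6.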
The set of participants s was obtained with the adaptive list sequential sampling method. Before the recruitment period started, we specified the vector π^{(0)}. We considered a vector π^{(0)} in which the probability of being included in s was proportional to the size of group g in the population. Because not all groups were observed with the same frequency in D, we oversampled the smaller subgroups in such a way that each group g was observed with similar frequency in s. For each invited individual with x=1, we have to invite 2, 2, 4, and 4 individuals with respectively x=2, 3, 4, 5 to obtain an equal number of individuals from each group in s. Therefore, depending on the value of x_i, we used the following probabilities for individual i
$\pi_i^{(0)} = \begin{cases} 0.05 & \text{if } x_i = 1\\ 0.10 & \text{if } x_i = 2 \text{ or } x_i = 3\\ 0.20 & \text{if } x_i = 4 \text{ or } x_i = 5 \end{cases}$
Note that we could also have used stratified sampling to get our desired set of participants because we only have five disjoint groups. However, when we have a large number of groups, stratification becomes impracticable. A large number of groups is no problem for the (adaptive) list sequential sampling design, provided it is possible to specify a distance measure between individuals (see (3)). With π^{(0)}, we expected to have an equal number of individuals from each subgroup g in the set of participants.
We considered two adaptive list sequential methods to recruit the sample.
Simple random sampling: Assign zero to all weights $\tilde{w}_k^{(i)}$. Therefore $\pi_i^{(i-1)} = \pi_i^{(0)}$.
Adjusted sampling: To recruit a well spread sample, the inclusion probability of individual i should only be influenced by individuals located in the same group. Therefore, we used the following preliminary weights
$\tilde{w}_k^{(i)} = \begin{cases} 1/(n_g - 1) & \text{if } x_i = g \text{ and } x_k = g,\\ 0 & \text{otherwise}, \end{cases}$
where n_g is the number of individuals in group g.
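The group-restricted preliminary weights can be written as a small helper (a sketch; the function name is ours):

```python
def preliminary_weight(x_i, x_k, n_g):
    # Individual k receives weight 1/(n_g - 1) from individual i when both
    # belong to the same group of size n_g, and zero otherwise, so inclusion
    # probability updates never leak across groups.
    return 1.0 / (n_g - 1) if x_i == x_k else 0.0
```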
For both adaptive list sequential sampling methods, we used the following model to describe the participation probability
$p(s_i = 1 \mid b_i = 1, x_i = g, \boldsymbol{\beta}) = \frac{\exp[\beta_g \mathbf{I}(x_i = g)]}{1 + \exp[\beta_g \mathbf{I}(x_i = g)]}$
where β_g is the regression weight for group g. Because we assumed we had no a priori information about the participation probabilities, we used noninformative priors for β by sampling all five parameters β_g from a normal distribution with mean zero and variance 100. For individual i, delayed response to the invitation was simulated by drawing time t_i from a Poisson distribution with expectation 15. Individual i responded to the invitation after the evaluation of individual i + t_i.
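The delayed-response mechanism can be simulated in a few lines (our own sketch of the scheme described above):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 5000

t = rng.poisson(lam=15, size=n)  # delay t_i for each individual, E(t_i) = 15
respond_at = np.arange(n) + t    # i's response is observed only once
                                 # individual i + t_i has been evaluated
```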
The simulations were performed 1000 times and we calculated the bias, MSE, and coverage of $\hat{Y}_{HT}$ for both adaptive list sequential methods.
Results
Simulation 1
The results from simulation 1 are summarized in Table 1. They showed that the adaptive list sequential sampling method with the adjusted sampling 2 approach performed best. In this approach, the participation probability of individual i was only influenced by the participation indicators of the 50 nearest neighbors. The recruited sets of participants were better spread than with the other sampling approaches, reflected by the lower median and spread of R.
Table 1
95% Confidence interval of R and the number of participants in simulation 1

Estimated participation probability: $\text{invlogit}[\hat{\alpha} + \mathbf{z}_i \hat{\boldsymbol{\beta}}]$

| Sampling method | R (2.5%) | R (50%) | R (97.5%) | Participants (mean) | Participants (standard deviation) |
| --- | --- | --- | --- | --- | --- |
| Simple random sampling | 0.192 | 0.238 | 0.304 | 401 | 18 |
| Adjusted sampling 1 | 0.199 | 0.241 | 0.298 | 397 | 11 |
| Adjusted sampling 2 | 0.157 | 0.189 | 0.225 | 397 | 11 |

Estimated participation probability: $\text{invlogit}[\hat{\alpha}]$

| Sampling method | R (2.5%) | R (50%) | R (97.5%) | Participants (mean) | Participants (standard deviation) |
| --- | --- | --- | --- | --- | --- |
| Simple random sampling | 0.188 | 0.230 | 0.304 | 405 | 18 |
| Adjusted sampling 1 | 0.197 | 0.238 | 0.291 | 400 | 11 |
| Adjusted sampling 2 | 0.154 | 0.184 | 0.225 | 400 | 11 |
Using all the auxiliary characteristics z_i to estimate the participation probability of individual i, the simple random sampling approach resulted in a median R of 0.238 (95% confidence interval: 0.192–0.304). The mean number of participants with the simple random sampling approach was about 401 (95% confidence interval: 365–436). For the adjusted sampling 1 approach, approximately similar results were found for R, i.e. on average, the sets of participants obtained with the simple random sampling approach and the adjusted sampling 1 approach were comparable in how well they were spread. With the adjusted sampling 1 approach, the average size of the set of participants was 397 (95% confidence interval: 376–418). However, compared to the simple random sampling approach, the variation in the size of the set of participants was considerably lower with the adjusted sampling 1 approach (standard deviations of 18 and 11, respectively).
On average, a set of participants recruited with the adjusted sampling 2 approach was better spread than with the other two approaches. Not only was the median R 0.189, the spread around the median was also smaller than with the other two approaches (95% confidence interval: 0.157–0.225). The mean size of the set of participants with the adjusted sampling 2 approach was 397 (95% confidence interval: 376–418), which was comparable to the adjusted sampling 1 approach.
Interestingly, the performance of all three approaches remained similar when we ignored the auxiliary characteristics z_i in the estimation of the participation probability of individual i. Since fitting a model with just an intercept gave results comparable to the more complicated model in which we also included z_i, the results suggest that the adaptive list sequential sampling method is robust to misspecification of the participation probability model.
Simulation 2
The results from simulation 2 are summarized in Table 2. Using the set of participants obtained with the simple random sampling approach resulted in a biased estimate $\hat{Y}_{HT}$. With the adjusted sampling approach, $\hat{Y}_{HT}$ was estimated more accurately. This was reflected in the bias (+31 for simple random sampling and +1 for adjusted sampling) and the variance of the estimate (7995 for simple random sampling and 7817 for adjusted sampling). Consequently, the coverage of the 95% confidence interval was better with the adjusted sampling approach (0.86 for simple random sampling and 0.92 for adjusted sampling).
Table 2
Estimated frequency derived from the set of participants in simulation 2

| Sampling method | Estimated $E(\hat{Y}_{HT})$ | Bias $E(\hat{Y}_{HT} - Y)$ | Variance $E[\hat{V}(\hat{Y}_{HT})]$ | Mean squared error $E[(\hat{Y}_{HT} - Y)^2]$ | Coverage of the 95% confidence interval |
| --- | --- | --- | --- | --- | --- |
| Simple random sampling | 1181 | 31 | 7995 | 14457 | 0.86 |
| Adjusted sampling | 1151 | 1 | 7817 | 10288 | 0.92 |

Discussion
In this paper, we developed an adaptive list sequential sampling method for the situation in which a random sample from the population is required and the willingness to participate varies between individuals and is not known beforehand. Our adaptive list sequential sampling method requires that the characteristics related to the participation probability are known for all individuals. With simulations, we showed that the adaptive list sequential sampling method can successfully deal with unknown heterogeneous participation probabilities.
In our adaptive list sequential sampling method, we evaluate each individual from the population only once. Therefore, we have only one opportunity to decide whether or not to invite an individual. When we overestimate the participation probability for all individuals from the population, we end up with too small a set of participants. A simple solution to this problem would be to re-evaluate noninvited individuals until the desired number of participants has been reached.
The simulations suggested that the adaptive list sequential sampling method is robust to misspecification of the participation probability model. Using just an intercept term to describe the participation probability seems to work quite well. However, to what extent the adaptive list sequential sampling method can deal with wrong participation probability estimates was not investigated in this paper. In addition, extremely delayed responses to the invitation influence the performance of the list sequential sampling method. Further research is necessary to determine in which situations the adaptive list sequential sampling method succeeds and fails to recruit a well spread set of participants.
A problem that was not considered here is the use of multiple invitation techniques in sampling designs. For instance, there could be individuals in the population who have a low willingness to participate when invited by letter, but a much higher willingness when invited by telephone. Our method can be adapted to this situation by extending step 3 of our algorithm and estimating multiple logistic regression participation probabilities, one for each invitation technique.
Conclusions
We showed that correcting for heterogeneity in the participation probability during the recruitment period is an effective approach when we have no or partial knowledge on the willingness to participate in population studies. By inviting individuals from the population in stages, the participation probability can be estimated and used in the sampling procedure.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (
http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (
http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.