Study design and data collection
To identify DMAs for the study, FDA developed a list of the 47 media markets with the largest populations of youth who were at risk of, or experimenting with, SLT. To ensure that the campaign could reach a sufficiently large population nationwide, we excluded the 12 most populous markets from the randomization. From the next 30 most populous markets, which served as the primary sampling units, we then randomly selected 15 intervention markets (which received campaign ads) and 15 control markets (which did not) for the longitudinal study. Although respondents in control markets were not exposed to campaign ads, all respondents were shown the campaign's video ads in the survey to assess campaign awareness. The campaign targeted rural segments (defined as Nielsen C and D counties) of the intervention DMAs.

To ensure a sufficient sample to detect the influence of the campaign on campaign-targeted beliefs, we conducted a power calculation, which indicated that we would need 1008 youth by the final wave of data collection. Once we factored in anticipated longitudinal retention, our goal was to complete 1969 baseline surveys. However, a sampling error at baseline led to the inclusion of a small fraction of households that were more suburban than rural. As a result, we increased the baseline sample to 2200 youth, including 1895 from rural counties.

We used address-based sampling, drawing household addresses from Census Block Groups (the secondary sampling units) in the 30 markets. Census Block Groups were allocated proportionally to the size of the DMAs and selected using the number of boys aged 11 to 16 years as the size measure. Our third-stage sampling units were addresses from the Computerized Delivery Sequence file; we sampled approximately 100 addresses per selected Census Block Group.
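Size-measure selection of the kind described above is commonly implemented as systematic probability-proportional-to-size (PPS) sampling. The sketch below illustrates that general technique under that assumption; the block-group IDs, counts, and selection routine are invented for illustration and are not the study's actual sampling code.

```python
import random

# Hypothetical sketch of systematic PPS selection of Census Block Groups,
# using the count of boys aged 11-16 as the size measure. IDs and counts
# are invented; this is not the study's actual sampling procedure.

def systematic_pps(units, sizes, n_select, rng):
    """Select n_select units with probability proportional to size."""
    total = sum(sizes)
    step = total / n_select           # sampling interval on the cumulated sizes
    target = rng.uniform(0, step)     # random start within the first interval
    picks, cum = [], 0.0
    for unit, size in zip(units, sizes):
        cum += size
        while cum > target and len(picks) < n_select:
            picks.append(unit)        # units spanning a target point are selected
            target += step
    return picks

units = ["BG-01", "BG-02", "BG-03", "BG-04", "BG-05"]
sizes = [120, 40, 300, 60, 80]        # boys aged 11-16 per block group (invented)
picks = systematic_pps(units, sizes, n_select=2, rng=random.Random(7))
print(picks)
```

Larger block groups (here BG-03) are more likely to span a selection point, which is what gives each unit a selection probability proportional to its size measure.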
In January 2016, we sent paper-and-pencil household screeners with a $2 pre-paid incentive to identify households with potentially eligible boys aged 11 to 16 years (allowing for more than one eligible boy per household). We then sent field interviewers to eligible households to conduct in-person baseline surveys with youth and parents/guardians.
Field interviewers obtained parental permission and youth assent, provided youth with instructions on how to complete the interview on a laptop, and were available to answer questions during the self-administered survey. Youth who completed the baseline survey received a $20 cash incentive. Once each youth respondent started taking the survey, field interviewers provided the parent/guardian with instructions on how to complete the parent/guardian survey on a tablet. Parents were not offered an incentive for the baseline survey.
We followed up with youth every 8 months from the start of the previous wave, with the final wave of data collection ending in December 2018. At each follow-up, we contacted parents of youth (and youth aged 18 or older directly) by mail and email and invited them to provide permission for their child to complete the survey on the web. Youth who completed the survey within the first 4 weeks were offered an additional $5 "early-bird" incentive (a total of $25, paid by check). Field interviewers contacted youth who did not respond to the web survey during the early-bird period and reminded them to complete it, or scheduled an in-person interview if the youth was unable to complete the survey online. After the early-bird period, the incentive was a $20 check for completing the survey online or $20 in cash for completing it in person.
At each wave of data collection, we monitored responses for quality and removed those that did not meet our standards. We flagged respondents who sped through the survey (completed it in less than 5% of the mean completion time), failed both attention-check questions, or straight-lined (i.e., chose the same answer down a column of questions) on more than 66% of the items where straight-lining was possible. Respondents who failed our quality controls received an incentive but were not invited to subsequent waves of data collection.
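The screening rules above can be sketched as a simple filter. The record fields and sample values below are hypothetical; the thresholds mirror the stated cutoffs (under 5% of the mean completion time, both attention checks failed, more than 66% of eligible items straight-lined).

```python
# Illustrative sketch of the wave-level quality screen described above.
# Record fields and sample values are hypothetical, not study data.

def flag_low_quality(records):
    """Return IDs of respondents failing any quality check."""
    mean_secs = sum(r["duration_secs"] for r in records) / len(records)
    flagged = set()
    for r in records:
        sped = r["duration_secs"] < 0.05 * mean_secs        # <5% of mean time
        failed_attention = r["attention_fails"] == 2        # failed both checks
        straight_lined = r["straightline_frac"] > 0.66      # >66% of eligible items
        if sped or failed_attention or straight_lined:
            flagged.add(r["id"])
    return flagged

records = [
    {"id": "A", "duration_secs": 1000, "attention_fails": 0, "straightline_frac": 0.10},
    {"id": "B", "duration_secs": 1000, "attention_fails": 2, "straightline_frac": 0.10},
    {"id": "C", "duration_secs": 1000, "attention_fails": 0, "straightline_frac": 0.80},
    {"id": "D", "duration_secs": 30,   "attention_fails": 0, "straightline_frac": 0.10},
]
print(sorted(flag_low_quality(records)))  # → ['B', 'C', 'D']
```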
Measures
The key outcome measures include beliefs and attitudes toward SLT that were targeted (explicitly or implicitly) by campaign messages. We analyzed 25 beliefs and attitudes to determine whether they were related, or unrelated, to the campaign (the list of beliefs is shown in Supplement 1). Three coders reviewed "The Real Cost" Smokeless ads to identify belief items that were explicitly targeted by, implicitly targeted by, or unrelated to the campaign messages. Rater agreement was high (overall Gwet's AC [8] = 0.84; agreement for individual ads ranged from 0.65 to 0.91). We found that 12 beliefs were messaged explicitly in "The Real Cost" Smokeless, 6 were covered implicitly, and 7 beliefs presented to respondents were unrelated to the campaign messages. Because there were no statistically significant changes in the latter two categories, we focus the results on the explicitly messaged beliefs. However, we do not present results for one of the 12 campaign-targeted beliefs (Lose my jaw) because we did not collect data for that belief until the third follow-up. The beliefs corresponding to these explicit messages related to nicotine dependence (e.g., unable to stop when I want to), short-term health effects (e.g., develop red or white patches in the mouth), long-term health effects (e.g., develop cancer of the lip, mouth, tongue, or throat), and social influences (e.g., miss out on things I enjoy doing). Implicit beliefs included social influences (e.g., fit in) and perceived risk (e.g., safe to use SLT for a year or two). Unrelated beliefs included health consequences (e.g., get sick more often) and perceived benefits (e.g., using SLT relieves stress). Study participants indicated their agreement with belief and attitude statements on a 5-point Likert scale from strongly agree to strongly disagree. For analysis, we dichotomized these items as strongly agree/agree (1) vs. all other responses (0). The key intervention variable is a dichotomous indicator for being in a treatment DMA (1) or a control DMA (0).
The constructs from the baseline survey of parents/guardians that we use in the analysis include race/ethnicity (White non-Hispanic vs. all other races/ethnicities (referent)), education (less than high school vs. high school diploma or more (referent)), and household income (less than $30,000; $30,000–$49,999; $50,000–$69,999; $70,000 or more). We also asked parents/guardians about their employment status (employed vs. unemployed (referent)), marital status (married vs. not married (referent)), and their relationship to the child (biological parent vs. else (referent)). We asked youth about their sensation-seeking behaviors at baseline. To measure this construct, we created a composite dichotomized scale derived from 5-point Likert-scale ratings (strongly agree to strongly disagree) of the following items: explore strange places, do frightening things, break the rules to do new and exciting things, and prefer friends who are exciting and unpredictable [9]. We took the average response across those four items and dichotomized the averages at the sample mean, classifying respondents at or above the mean as sensation seekers [10].
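The composite construction just described (average four Likert items per respondent, then split at the sample mean) can be sketched as follows. The item coding (strongly disagree = 1 through strongly agree = 5) and the example responses are assumptions for illustration.

```python
# Hypothetical sketch of the sensation-seeking composite: average four
# 5-point Likert items per respondent, then dichotomize at the sample mean.
# Item coding (strongly disagree = 1 ... strongly agree = 5) is assumed.

def sensation_seeker_flags(item_scores):
    """item_scores: one list of four item responses per respondent."""
    averages = [sum(items) / len(items) for items in item_scores]
    sample_mean = sum(averages) / len(averages)
    # At or above the sample mean -> classified as a sensation seeker (1).
    return [1 if avg >= sample_mean else 0 for avg in averages]

responses = [  # explore, frightening, break rules, exciting friends (invented)
    [5, 4, 5, 4],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
]
flags = sensation_seeker_flags(responses)
print(flags)  # → [1, 0, 1]
```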
In addition to baseline sensation seeking, our analyses include several youth measures asked at each wave of data collection, so responses could vary over time. These measures cover media use and exposure: watched TV at least once a day vs. less than once a day (referent); used any of four social media platforms (Facebook, Instagram, Twitter, or Snapchat) at least once a day vs. less often (referent); used any of five streaming services (YouTube, Twitch, Netflix, Hulu, or Amazon Prime) at least once a day vs. less often (referent); played video games at least once a day vs. less often (referent); watched R-rated movies sometimes or more vs. less often (referent); parents have lots of rules about computer use, video games, and type of music vs. few or no rules (referent); awareness of the truth tobacco prevention campaign (aware vs. not aware (referent)); and awareness of a fake tobacco prevention campaign (aware vs. not aware (referent)) to account for false reporting.
Other youth variables include use of SLT and cigarettes by family members in the past 30 days vs. no use in the past 30 days (referent), school performance (much better than average, better than average, average or below (referent)), and church attendance (at least once a week vs. less often (referent)). We measured school environment using the average of three items: feeling close to people at school, being happy to be at school, and feeling like a part of their school. We divided those averages into tertiles, using the bottom tertile as the referent. Finally, we captured the influence of friends with two separate measures: "I do what my friends want me to do, even if I don't want to" (strongly agree/agree vs. else) and "To keep my friends, I'd even do things I don't want to do" (strongly agree/agree vs. else).
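As a minimal sketch of the tertile split for the school-environment scale, Python's standard-library `statistics.quantiles` yields the two cut points; the scale values below are invented, and this is an illustration rather than the study's actual coding.

```python
import statistics

# Hypothetical sketch: split school-environment averages into tertiles,
# with the bottom tertile (coded 0) as the referent in analysis.

def tertile_codes(averages):
    low_cut, high_cut = statistics.quantiles(averages, n=3)  # two cut points
    return [0 if a <= low_cut else (1 if a <= high_cut else 2) for a in averages]

averages = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]  # invented scale values
codes = tertile_codes(averages)
print(codes)  # → [0, 0, 0, 1, 1, 1, 2, 2, 2]
```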
Analytic methods
We analyzed data between 2019 and 2020, starting with an examination of changes in agreement (strongly agree and agree) with campaign-targeted beliefs and attitudes from baseline to each follow-up for the treatment and control groups. We reverse-coded variables that contained affirmative beliefs (e.g., If I use smokeless tobacco, I will fit in) so that disagreement (strongly disagree and disagree) was coded as 1, making comparisons with negative messages easier to interpret. We then calculated the pre- to post-campaign difference in agreement for the treatment group (Tx), the corresponding difference for the control group (Cx), and the difference of these differences (DID):
$$\left(\mathrm{Tx}\ {\mathrm{Agreement}}_{\mathrm{Post}}-\mathrm{Tx}\ {\mathrm{Agreement}}_{\mathrm{Baseline}}\right)-\left(\mathrm{Cx}\ {\mathrm{Agreement}}_{\mathrm{Post}}-\mathrm{Cx}\ {\mathrm{Agreement}}_{\mathrm{Baseline}}\right)$$
DID isolates the changes over time that are associated with the campaign. We calculated DID for the overall sample and stratified by age at baseline (11–13 vs. 14–16) to test whether boys of different ages reacted differently to the campaign. To examine early campaign results, we also report changes from baseline to the second follow-up. We estimated multivariate DID models with a treatment-group indicator, a pre-post campaign indicator, and the interaction between the two (i.e., treatment*pre-post) for the full sample and stratified by age. We then used margins [11] to express the DID in percentage-point terms for each outcome variable. Our models include the control variables described above; relative to models without controls, the multivariate models dropped 81 observations because of missing responses. Finally, we tested whether being in the treatment group, along with the model covariates, was associated with attrition by creating an indicator variable for respondents who dropped out after any wave of data collection and did not return to complete a subsequent survey.
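To illustrate the model form: in a model with only the treatment indicator, the pre-post indicator, and their interaction, the interaction coefficient reproduces the two-by-two difference-in-differences of cell means from the equation above. The sketch below is an illustration under that simplification, not the study's actual estimation (which included control variables and marginal-effects postestimation); all data are invented.

```python
import numpy as np

# Illustrative sketch (invented data): a saturated DID model with a
# treatment indicator, a pre-post indicator, and their interaction.
# With no covariates, the interaction coefficient equals the plain
# difference-in-differences of the four cell means.

cells = {  # (treatment, post) -> number agreeing out of 10 respondents
    (0, 0): 5, (0, 1): 5, (1, 0): 5, (1, 1): 6,
}
rows, y = [], []
for (treat, post), n_agree in cells.items():
    for i in range(10):
        rows.append([1.0, treat, post, treat * post])  # intercept + indicators
        y.append(1.0 if i < n_agree else 0.0)

beta, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)

# (Tx post - Tx base) - (Cx post - Cx base), matching the equation above:
did_of_means = (0.6 - 0.5) - (0.5 - 0.5)
print(round(beta[3], 6), round(did_of_means, 6))  # interaction term = DID = 0.1
```

Expressing the interaction coefficient in percentage points (here, 10 points) parallels how marginal effects convert model estimates into the percentage-point DID reported for each outcome.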