We hypothesized that Medicaid expansion would be associated with increases in Medicaid visits without decreases in visits for other insurance types: as more individuals enrolled in Medicaid, the demand for, and subsequent utilization of, healthcare services would increase. We further hypothesized that Medicaid visits in expansion states would be associated with increases in both high- and low-value care. To test these hypotheses, we used visit-level data from the National Ambulatory Medical Care Survey (NAMCS), standardized with state-level U.S. Census population estimates. We used prespecified difference-in-differences (DinD) analytical approaches to assess changes in physician office visits and in high- and low-value care use between expansion and non-expansion states before and after Medicaid expansion, across all payers.
Data source and collection
NAMCS is a nationally representative cross-sectional survey of visits to office-based outpatient practices. The National Center for Health Statistics (NCHS) oversees NAMCS using a complex, multistage probability design detailed elsewhere [23]. For sampled visits, NAMCS collects information from the medical record, including patient demographics, payer, reasons for the visit, diagnoses, comorbidities, procedures, diagnostic tests, and medications. NAMCS includes survey weights that allow for national and regional estimates, as well as state-level estimates during 2012–2015. We did not include the separate NAMCS community health center (CHC) sampling files, which sample Federally Qualified Health Centers, in this analysis.
A DinD analysis using NAMCS alone could not account for changes in underlying state population size, a potential confounder reflecting changes in access to care unrelated to Medicaid expansion in expansion and non-expansion states. Therefore, we used U.S. Census Bureau state-level population estimates for 2012–2015 to account for time-varying population changes.
Study sample
NAMCS provides state-level estimates only for years 2012–2015 and only for the most populous U.S. states (34 states in 2012, 22 in 2013, 18 in 2014, and 16 in 2015). We included adult (age > 17) visits from 2012 to 2015 occurring in the 13 states with state-level estimates available for all study years (expansion states: AZ, CA, IL, MA, NJ, NY, OH, and WA; non-expansion states: FL, GA, NC, TX, and VA). We excluded non-adult visits and those occurring in states without state-level estimates. Within these adult visits, we identified subpopulations by payer (Medicaid, Medicare, and commercially insured, defined as charges paid in part or in full by a private insurer (e.g., Blue Cross/Blue Shield), either directly to the physician or reimbursed to the patient, including charges covered under a private insurance–sponsored prepaid plan), plus an additional subpopulation of "new" Medicaid patients. Because NAMCS does not allow individual identification of patients who enrolled in Medicaid as newly eligible under Medicaid expansion, we defined a new Medicaid visit as an adult visit with insurance type Medicaid, by a patient who had not been seen in that clinic before, to a clinician accepting new Medicaid patients.
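The "new Medicaid visit" definition above combines three visit-level conditions; a minimal sketch of that logic is below. The field names (age, payer, established_patient, accepts_new_medicaid) are hypothetical placeholders for illustration, not actual NAMCS variable names.

```python
# Sketch of the "new Medicaid visit" definition: an adult Medicaid visit
# by a patient not previously seen at the clinic, to a clinician who
# accepts new Medicaid patients. Field names are hypothetical, not
# actual NAMCS variables.
def is_new_medicaid_visit(visit: dict) -> bool:
    return (
        visit["age"] > 17                       # adult visit
        and visit["payer"] == "Medicaid"        # Medicaid as insurance type
        and not visit["established_patient"]    # not seen in this clinic before
        and visit["accepts_new_medicaid"]       # clinician accepts new Medicaid patients
    )
```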
Because Medicaid expansion was limited to the low-income adult population, we expected that it would be less likely to differentially affect visit rates or visit-level quality for older adults in the Medicare population (except for the small number of dually eligible enrollees in our sample) or for the commercially insured population in expansion versus non-expansion states. These groups could therefore serve as alternative within-expansion-state control groups, in addition to the non-expansion states, when interpreting results in the Medicaid population.
Primary accessibility and quality of care outcome measures
Our analysis had two categories of outcomes: access to care, measured by physician office visit volume, and quality of care, measured by widely accepted high- and low-value care metrics. To measure visit volume, we counted all physician office visits in a survey year and summarized them by insurance type of interest (Medicaid, Medicare, and commercial). To assess quality of care, we used two composite outcome measures, defined as receipt of any high-value care and receipt of any low-value care. (We pre-specified composites given concern about sample size for individual measures within each subpopulation.) We selected the individual measures based on prior literature and studies using NAMCS (including our own work) and professional physician practice guidelines, such as those developed by the USPSTF [25], the National Committee for Quality Assurance [26], or Choosing Wisely® [27] (see Appendix) [28–41]. The high-value care outcome included ten measures: prescriptions for antiplatelet agents, statins, and beta blockers in coronary artery disease; beta blockers and angiotensin-converting enzyme inhibitors (ACEi) or angiotensin receptor blockers (ARB) in heart failure; anticoagulants in atrial fibrillation; antiplatelet agents in cerebrovascular disease; statins in diabetes mellitus; treatment for depression; and treatment for osteoporosis. The low-value care outcome included seven measures: screening for asymptomatic bacteriuria, screening for cardiovascular disease in low-risk patients, antibiotics for upper respiratory tract infections (URIs), opioid prescriptions for headache, opioid prescriptions for neck/back pain, advanced imaging for headache, and advanced imaging for neck/back pain. For each measure comprising the outcomes, we followed previously established inclusion and exclusion criteria, relying on reason-for-visit codes, diagnostic codes, and comorbidity indicators to identify eligible visits. We similarly determined whether a patient received any of the above high- or low-value services using prescription drug codes and whether the clinician ordered the related diagnostic imaging or laboratory test.
Statistical analysis
To mitigate publication and/or selective reporting bias, the study protocol was pre-registered on clinicaltrials.gov (NCT05319743). We first compared visit-level characteristics between Medicaid expansion and non-expansion states in the pre-expansion and post-expansion periods using descriptive statistics. We organized the subsequent analysis into three parts: the units of analysis for the first and third parts are state-years, and the units of analysis for the second part are individual visits. All analyses used cluster-robust standard errors at the state level to account for state-level clustering, and sample weights to account for the complex survey design and non-response bias, in accordance with NCHS guidelines [23].
First, we assessed differences in access to care by quantifying the number of Medicaid visits at the state level, standardized by state population; we analyzed and reported visit changes as the number of visits per 100 adults. We calculated survey-weighted Medicaid visit rates and parameters for expansion and non-expansion states for each year, then performed a DinD linear combination calculation of the rates and parameters to assess for significant changes pre- versus post-expansion between expansion and non-expansion states.
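The rate standardization and DinD contrast described above can be written compactly; the notation below is ours, introduced only for illustration.

```latex
% Survey-weighted visit rate per 100 adults for state s in year t,
% where \hat{V}_{st} is the weighted NAMCS visit count and
% P_{st} is the Census adult population estimate:
R_{st} = 100 \times \frac{\hat{V}_{st}}{P_{st}}

% DinD linear combination of period-mean rates across
% expansion (exp) and non-expansion (nonexp) states:
\widehat{\mathrm{DinD}}
  = \left(\bar{R}^{\mathrm{exp}}_{\mathrm{post}} - \bar{R}^{\mathrm{exp}}_{\mathrm{pre}}\right)
  - \left(\bar{R}^{\mathrm{nonexp}}_{\mathrm{post}} - \bar{R}^{\mathrm{nonexp}}_{\mathrm{pre}}\right)
```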
Second, we examined differences in visit-level receipt of high- or low-value care using multivariable logistic regression models. The model included binary indicators for state expansion status, pre- versus post-expansion period, the interaction of the expansion-status and period indicators, and visit-level adjustment variables. The model adjusted for visit-level characteristics that may confound the relationship between Medicaid expansion and quality measures, including patient age, sex, race and/or ethnicity, rural versus urban location, and number of chronic conditions. Because the logit model is non-linear, we reported average marginal effects (predicted probabilities) to facilitate interpretation of the regression results. To account for the possibility that differential changes in the diagnoses associated with visits over time could bias quality-of-care estimates in expansion versus non-expansion states, a visit was included in this analysis only if it had the potential to result in high- or low-value care. For example, visits for heart failure were included given the potential to prescribe several high-value medications; visits for back pain were included unless accompanied by a diagnosis (e.g., osteomyelitis) warranting MRI; and visits for hand laceration were excluded, as there is no corresponding high- or low-value service to be performed in that visit.
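The visit-level model described above corresponds to a standard DinD logistic specification; the notation is ours, with $X_v$ collecting the visit-level adjustment variables.

```latex
% Y_v = 1 if visit v received the (high- or low-value) care outcome;
% s(v) and t(v) index the visit's state and period:
\operatorname{logit}\Pr(Y_v = 1)
  = \beta_0
  + \beta_1\,\mathrm{Expansion}_{s(v)}
  + \beta_2\,\mathrm{Post}_{t(v)}
  + \beta_3\,\mathrm{Expansion}_{s(v)} \times \mathrm{Post}_{t(v)}
  + X_v'\gamma

% The DinD effect is reported as the average marginal effect of the
% interaction term (\beta_3) on the predicted-probability scale.
```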
Third, we examined differences in state-level rates of low- and high-value care. We calculated each outcome of interest for each state-year as a rate per 100 adults: the number of visits in which high- or low-value care was provided, divided by the state adult population, multiplied by 100. We again generated survey-weighted visit rates and parameters for expansion and non-expansion states for each year, followed by a DinD linear combination calculation of the rates and parameters to assess for significant changes pre- versus post-expansion between expansion and non-expansion states.
We repeated the above analyses replacing the Medicaid visits with all adult visits (Medicaid, Medicare, and commercially insured), with Medicare visits, and with commercially insured visits.
The DinD design has been used in multiple evaluations of Medicaid expansion; a key assumption is that the pre-implementation time trends in the outcome variables are parallel between groups [42]. We verified parallel trends using visual plots and a formal placebo test of the assumption: restricting the sample to the pre-expansion period and interacting expansion status with a linear term for year (2012 versus 2013), which was non-significant.
We also accounted for multiple testing by applying the Benjamini-Hochberg step-up procedure with a false discovery rate of 5% [43], a level previously used in published analyses [44]. We applied this correction to the Medicaid and new-Medicaid outcomes, given the multiple tests performed in these populations, our primary populations of interest. We reported the p-values and whether they remained significant after the multiple-testing correction. We performed all analyses in Stata SE, version 17.0 (StataCorp, TX).