Appendix

Model simulation details

The first stage of the model generates a simulated patient population. A user-specified number of full-time-equivalent (FTE) MDs is generated, and each MD is assigned a user-specified patient panel size. Default values are five FTE MDs and 1,500 patients per MD. Users can additionally specify variation in panel size for each simulated MD (i.e., to capture variation in panel size within a practice). Given this estimate of total patient population size, the model accepts either user-specified input data on the age, sex, race/ethnicity, insurance status, and state of residence of the patients, or constructs a default population based on state of residence. Age is organized into several classes (<5, 5-13, 14-17, 18-24, 25-44, 45-64, 65-84, and >84 years old), sex is dichotomous, race/ethnicity is classified into standard U.S. Census categories (non-Hispanic White, non-Hispanic Black, Hispanic, and Other), and income is expressed in quintiles 1.
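As a minimal illustration of this first stage, the sketch below generates per-MD panel sizes and a total patient count under the default parameters; the use of a normal distribution for panel-size variation and all function names are our own assumptions for illustration, not specifications of the model.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_panel_sizes(n_fte_md=5, mean_panel=1500, panel_sd=0):
    """Draw a panel size for each FTE MD; panel_sd=0 reproduces the
    default of exactly 1,500 patients per MD."""
    if panel_sd == 0:
        return np.full(n_fte_md, mean_panel)
    # Assumption: within-practice variation modeled as normal noise
    # around the mean panel size.
    return rng.normal(mean_panel, panel_sd, size=n_fte_md).round().astype(int)

panels = simulate_panel_sizes()        # array([1500, 1500, 1500, 1500, 1500])
total_patients = int(panels.sum())     # 7,500 patients in the default practice
```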

The model simulates a primary care practice given a user-specified number of full-time-equivalent primary care physicians, each of whom provides care for a user-specified number of patients. The model simulates a representative population of patients that would be expected in the practice based on practice location, or given user-specified information on patients’ demographics and insurance status from a given practice. The model then estimates the frequency of different diagnoses among the assigned patient population based on practice location and patient demographic characteristics (age, sex, race, income, and insurance status), as well as the patients’ expected practice utilization and the practice’s revenues generated through that utilization, using data from the Medical Expenditure Panel Survey (2012, the most recent available survey wave) 2.

To perform these estimates, individuals in the model are assigned demographic features in a probabilistic manner to match US Census tables specifying the covariance between the following characteristics 1: age (organized in cohorts of <5, 5-13, 14-17, 18-24, 25-44, 45-64, 65-84, and >84 years old), sex, race/ethnicity (in standard Census categories of non-Hispanic White, non-Hispanic Black, Hispanic, and Other), and income (expressed as a poverty income ratio to correct for household size, in five standard categories of <100% federal poverty level, 100-138% FPL, 139-250% FPL, 251-400% FPL, and >400% FPL). To assign simulated individuals these characteristics, we Monte Carlo sample from the joint probability distributions of these demographic features using the Census data for each state, constructing a demographically representative state population. The joint probability distributions are captured using a copula function, which allows the covariance between variables to be taken into account 3. Individuals are then similarly assigned an insurance status (private, Medicare, Medicaid/CHIP, or self-pay) based on their demographic characteristics and state of residence, again using Monte Carlo sampling from each state’s distribution of insurance among each demographic group 1. The data are freely accessible online (https://www.census.gov/cps/data/).
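The sketch below illustrates the Monte Carlo assignment of demographic profiles. The published model captures the covariance between characteristics with a copula fit to the Census data (per the nacopula reference 3); purely for illustration, the sketch samples directly from a placeholder joint probability table, so the cell probabilities and all identifiers are hypothetical.

```python
import itertools
import numpy as np

rng = np.random.default_rng(seed=2)

# Demographic cells follow the categories described above; the joint
# probabilities are placeholders (real values come from state-level
# Census CPS tables, with covariance captured via a copula in the model).
age_classes = ["<5", "5-13", "14-17", "18-24", "25-44", "45-64", "65-84", ">84"]
sexes       = ["female", "male"]
race_eth    = ["NH White", "NH Black", "Hispanic", "Other"]
income_fpl  = ["<100%", "100-138%", "139-250%", "251-400%", ">400%"]

cells = list(itertools.product(age_classes, sexes, race_eth, income_fpl))
probs = rng.dirichlet(np.ones(len(cells)))  # placeholder joint distribution

def sample_demographics(n_patients):
    """Monte Carlo sample (age, sex, race/ethnicity, income) profiles from
    the joint distribution, preserving covariance between characteristics."""
    idx = rng.choice(len(cells), size=n_patients, p=probs)
    return [cells[i] for i in idx]

population = sample_demographics(7500)
```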

Insurance status assignments were updated to reflect recent Medicaid expansion decisions among states and the anticipated enrollment into private insurance due to the Affordable Care Act, based on Medicaid expansion decisions declared as of April 2014 by the Centers for Medicare and Medicaid Services (CMS) 4. Specifically, the baseline model includes the CMS adjustment for Arkansas and Iowa Medicaid participation rates to account for their Section 1115 waivers for Medicaid expansion; at the time of this writing, Indiana, Minnesota and Pennsylvania have pending waivers for Medicaid expansion and were similarly included as expansion states under the assumption of expansion approval, also using the CMS adjustment estimate 4. Wisconsin was not included as an expansion state because it amended its Medicaid state plan and existing Section 1115 waiver to cover adults up to 100% FPL, but did not adopt the expansion at the time of this writing. These state-specific adjustments can be easily updated by users of the model as future changes occur.

The second stage of the model assigns diagnoses to individuals by ICD-9 code based on MEPS 2, in which diagnoses are linked to the number of primary care medical visits and the reimbursements associated with those visits, given patient demographics, insurance, and diagnoses. As with the demographic assignment, we used Monte Carlo sampling to assign each simulated individual a diagnosis and number of practice visits per year by sampling from that individual’s demographic and insurance group in MEPS. By summing these visits across each patient panel and the overall practice, we simulated total practice revenues. The data for this stage are also freely accessible online (http://meps.ahrq.gov/mepsweb/data_stats/download_data_files.jsp). By relying on MEPS-based coding, this module can also be easily updated for ICD-10 and future codes once those are integrated into the MEPS database.
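A simplified sketch of this stage follows: each simulated patient is matched to a demographic and insurance stratum, a MEPS-style record is sampled to assign a diagnosis and annual visit count, and visits and revenues are summed across the practice. The stratum keys, ICD-9 codes, visit counts, and payments shown are placeholders, not actual MEPS estimates.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Placeholder MEPS-style lookup: stratum -> list of (ICD-9 code,
# annual primary care visits, payment per visit). Real values would be
# drawn from the MEPS public-use files cited above.
meps_strata = {
    ("25-44", "female", "Medicaid"): [("401.9", 2, 75.0), ("250.00", 4, 82.0)],
    ("45-64", "male", "private"):    [("272.4", 3, 110.0), ("401.9", 2, 105.0)],
}

def assign_visits_and_revenue(patients):
    """Monte Carlo sample a record for each patient's stratum and
    accumulate total practice visits and visit revenue."""
    total_visits, total_revenue = 0, 0.0
    for stratum in patients:
        records = meps_strata.get(stratum, [])
        if not records:
            continue  # stratum absent from this toy lookup
        dx, visits, pay = records[rng.integers(len(records))]
        total_visits += visits
        total_revenue += visits * pay
    return total_visits, total_revenue

visits, revenue = assign_visits_and_revenue(
    [("25-44", "female", "Medicaid"), ("45-64", "male", "private")])
```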

The third stage of the model estimates practice expenses. The model calculates practice-level expenses in separate modules reflecting personnel and overhead expenditures. Personnel expenses are calculated either from user-specified staff workforce, salaries, and benefit costs for a given practice, or from default values. For MDs and NPs, the data by state are specified in Appendix Table 1, based on estimates from the Bureau of Labor Statistics 5. For other support staff, staffing ratios per MD and detailed compensation data were not available from the Bureau of Labor Statistics; we therefore used nationally representative data from the Medical Group Management Association (MGMA) and Kenexa CompAnalyst 6,7, summarized in Appendix Table 2. The support staff include clerical staff; medical assistants, technicians, and licensed practical nurses; registered nurses; registered nurse care managers; physician assistants; health coaches; pharmacists; social workers, including mental health; mental health providers; nutritionists; and practice data analysts. As shown in Appendix Table 2, the baseline model did not include all of these components, and used the MGMA data on typical staffing ratios to simulate a nationally representative practice; users can then experiment with additional staff as desired using the default compensation data in the model, for example, to simulate the addition of a nutritionist to the practice. Additional overhead expenditures were taken from the MGMA DataDive database and include physical and service infrastructure costs, liability insurance, and information technology and telecommunications expenses (including electronic medical record expenditures); all of these can be customized and augmented by the user to account for additional expenses as desired, or left at their default values, which are itemized in Appendix Table 3. All costs and revenues were adjusted to 2014 U.S. dollars using the Consumer Price Index 8.
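The expense calculation can be summarized in the sketch below. The staffing counts, salaries, benefit rates, and overhead line items shown are placeholders standing in for the BLS, MGMA, and Kenexa defaults itemized in Appendix Tables 1-3.

```python
# Placeholder inputs; the model's defaults come from BLS (MDs and NPs,
# by state), MGMA/Kenexa (support staff), and MGMA DataDive (overhead).
staff = {
    # role: (FTE count, annual salary, benefit rate)
    "MD":                (5.0, 180_000.0, 0.25),
    "registered nurse":  (2.0,  65_000.0, 0.25),
    "medical assistant": (4.0,  32_000.0, 0.25),
    "clerical staff":    (3.0,  28_000.0, 0.25),
}

overhead = {
    "physical and service infrastructure":   250_000.0,
    "liability insurance":                    40_000.0,
    "IT and telecommunications (incl. EMR)":  60_000.0,
}

def annual_expenses(staff, overhead):
    """Personnel costs (FTE x salary x (1 + benefit rate)) plus overhead."""
    personnel = sum(fte * salary * (1.0 + benefits)
                    for fte, salary, benefits in staff.values())
    return personnel + sum(overhead.values())

total_expense = annual_expenses(staff, overhead)
```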

For the NP versus new MD demonstration analysis, we compared the baseline scenario of a five FTE MD practice against counterfactual scenarios in which we added an NP or a new MD to the practice and distributed patients to this new hire based on user-specified hours of work per week and patient visits per hour. In addition to adding the salary and benefit costs associated with this new hire (Appendix Table 1), we computed the revenues generated by the additional patients seen, adjusting for differential NP reimbursement versus MDs using a state-specific database of current reimbursement rates 9. In sensitivity analyses for the NP simulation, we allowed one MD of the practice to “lose” patients per hour based on the assigned probabilistic risk and time required for each NP patient requiring consultation, both of which were varied as shown in main text Figure 3A, along with the timing of the consultation and its associated loss. The probabilistic risk of consultation was simulated using a standard binomial probability distribution function, under which the MD is redirected and the associated MD visit goes unbilled, while the NP consultation results in MD billing of the NP consult case. The risk function can be adjusted by the user to simulate other types of arrangements, such as having each MD cleared of their half-day schedule to perform other administrative work, with the subsequent revenue and cost implications added to the above modules.
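The binomial consultation risk can be sketched as follows; the visit volume, consultation probability, and payment amounts are illustrative, and the bookkeeping (one unbilled MD visit per consult, with the consult itself billed at the MD rate) is a simplified reading of the arrangement described above.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

def np_consult_draw(np_visits_per_year, consult_prob,
                    md_visit_payment, md_consult_payment):
    """Draw the number of NP visits needing MD consultation from a binomial
    distribution; each consult leaves one MD visit unbilled while the
    consult itself is billed at the MD rate (payments are placeholders)."""
    n_consults = rng.binomial(np_visits_per_year, consult_prob)
    unbilled_md_revenue = n_consults * md_visit_payment
    billed_consult_revenue = n_consults * md_consult_payment
    return n_consults, billed_consult_revenue - unbilled_md_revenue

consults, net_change = np_consult_draw(
    np_visits_per_year=3000, consult_prob=0.10,
    md_visit_payment=100.0, md_consult_payment=100.0)
```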

To simulate waiting times, a standard Erlang-type queuing system was derived 10. The proportion of patients per unit time who utilize the practice is the rate of utilization e multiplied by the minimum of either the entire population requiring appointments (the population who can potentially utilize the practice at any given time) or the fraction of that population for which there is practice capacity during any given period of time. If a total of Y patients can take an appointment slot at a given time, then the number given an appointment will be e min(Y, b − N), where b is the number of appointment slots available (i.e., the number of providers multiplied by the time they work and the number of patients they book during that time), and N is the number of occupied slots. To determine waiting times among queuing patients, we observe that for a queue of Q patients who drop out of the queue (i.e., cancel or do not show up for appointments) at rate d, given by the MGMA data as an average of 0.38 per hour per MD FTE (SD = 0.06) 7, the proportion dropping out rather than utilizing the practice will be 1 − e^(−dW), where W is the waiting time in the queue. Hence e^(−dW) is the fraction of patients who utilize the practice. If eY persons per unit time enter the queue and the rate of utilization from the queue is s, then the fraction of patients who utilize the practice is sQ/(eY). Hence e^(−dW) = sQ/(eY), and the waiting time W = (1/d) ln(eY/(sQ)).
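The appointment and waiting-time relations above reduce to two short formulas; the sketch below evaluates them with illustrative parameter values (only the dropout rate d = 0.38 per hour per MD FTE comes from the MGMA data).

```python
import math

def appointments_given(e, Y, b, N):
    """Appointments filled per unit time: e * min(Y, b - N), where b is the
    number of appointment slots and N the number already occupied."""
    return e * min(Y, b - N)

def waiting_time(d, e, Y, s, Q):
    """Queue waiting time W = (1/d) * ln(eY / (sQ)), from e^(-dW) = sQ/(eY),
    with dropout rate d, utilization rate e, Y patients able to take a slot,
    rate of utilization from the queue s, and queue length Q."""
    return (1.0 / d) * math.log((e * Y) / (s * Q))

# Illustrative values; d = 0.38 per hour per MD FTE is the MGMA average.
W = waiting_time(d=0.38, e=0.5, Y=40, s=0.3, Q=10)   # ~5.0 hours
```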

Systematic review search strategy

To briefly review NP effects on utilization and revenue for comparison against model outcomes from the NP demonstration analysis described in the main text, we searched PubMed and Google Scholar using the following combination of terms:

"primary health care"[MeSH Terms] AND “nurse practitioner”[MeSH Terms] AND (“utilization”[Text] OR “cost”[Text] OR “revenue”[Text] OR “demand”[Text] OR “reimbursements”[Text]) AND NOT (Letter[ptyp] OR Editorial[ptyp]).

The search was performed in March 2014 and included English-language articles from January 1980 through March 2014. Appendix Figure 1 provides the PRISMA flow diagram detailing the search results, the screening strategy for relevance, and the numbers of articles deemed eligible and included in the analysis. Initial screening for relevance was performed using an automated content filter, and screened articles were then evaluated for inclusion by title and abstract review. Inclusion criteria are provided in Appendix Table 5. The included references and their effect sizes are summarized in Appendix Table 6. Email inquiries were sent to some authors of included studies to clarify ambiguities in reported data or to obtain additional data reported as effect size estimates in Appendix Table 6.


References

1. U. S. Census Bureau. Current Population Survey: Annual Social and Economic Supplements. Washington D.C.: U.S. Census Bureau; 2013.

2. Agency for Healthcare Research and Quality. Medical Expenditure Panel Survey. Washington D.C.: AHRQ; 2012.

3. Hofert M, Mächler M. Nested Archimedean copulas meet R: The nacopula package. J Stat Softw. 2011;39(9):1–20.

4. Centers for Medicare & Medicaid Services. Medicaid Moving Forward 2014. Washington D.C.: CMS; 2014.

5. U.S. Bureau of Labor Statistics. Occupational Outlook Handbook. Washington D.C.: BLS; 2014.

6. Kenexa. CompAnalyst Market Data. Wayne: Kenexa; 2013.

7. Medical Group Management Association. DataDive. Englewood: MGMA; 2013.

8. Bureau of Labor Statistics. Consumer Price Index (CPI) [Internet]. 2014 [cited 2013 Sep 26]. Available from: http://www.bls.gov/cpi/

9. Kaiser Family Foundation. KCMU Benefits Database. Menlo Park: KFF; 2013.

10. Basu S, Friedland GH, Medlock J, Andrews JR, Shah NS, Gandhi NR, et al. Averting epidemics of extensively drug-resistant tuberculosis. Proc Natl Acad Sci U S A. 2009 May 5;106(18):7672–7.

11. Design Cost Data. NHBC Database. Valrico: DCD; 2007.

12. Pociask S. A Survey of Small Businesses’ Telecommunications Use and Spending. Washington D.C.: United States Small Business Administration; 2004.

13. Wang SJ, Middleton B, Prosser LA, Bardon CG, Spurr CD, Carchidi PJ, et al. A cost-benefit analysis of electronic medical records in primary care. Am J Med. 2003;114(5):397–403.

14. Keleher H, Parker R, Abdulwadud O, Francis K. Systematic review of the effectiveness of primary care nursing. Int J Nurs Pract. 2009;15(1):16–24.

15. Horrocks S, Anderson E, Salisbury C. Do nurse practitioners working in primary care provide equivalent care to doctors? BMJ. 2002;324:819–23.

16. Laurant M, Reeves D, Hermens R, Braspenning J, Grol R, Sibbald B. Substitution of doctors by nurses in primary care. Cochrane Database Syst Rev [Internet]. 2004 [cited 2014 Apr 10];4. Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001271.pub2/pdf/standard

17. Salkever DS, Skinner EA, Steinwachs DM, Katz H. Episode-based efficiency comparisons for physicians and nurse practitioners. Med Care. 1982;143–53.

18. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39(3):607–26.

19. Venning P, Durie A, Roland M, Roberts C, Leese B. Randomised controlled trial comparing cost effectiveness of general practitioners and nurse practitioners in primary care. BMJ. 2000;320(7241):1048–53.
